Planet Apache


Bryan Pendleton: A day at ARK 2000

Sun, 2014-10-19 12:15

We had the opportunity to spend a glorious day at ARK 2000, which is one of the facilities of a rather unusual organization called the Performing Animal Welfare Society.

Through the generosity of friends, we found ourselves with a pair of tickets to one of PAWS's annual fund-raisers, the "Elephant Grape Stomp." This event is sort of an open house to visit the sanctuary, which is located in the Sierra Nevada foothills, about 2 hours from our house.

During the event, we were able to visit three parts of the sanctuary:

  • The cats and bears area, which holds Siberian Tigers, African Lions, and American Black Bears, as well as at least one leopard (who was feeling unsociable, so we didn't see her).
  • Bull Mountain, where PAWS has a facility for two male Asian Elephants (held separately, but adjacently).
  • The Asian and African Elephant compound, where about 10 female elephants are living in two separate areas.

At all three locations, booths were set up with information, local restaurants were serving delicious food, and local wineries (from the thriving Murphy's wine region) were pouring scrumptious Sierra Nevada wines.

Visiting ARK 2000 is sort of an unusual experience.

It is not a zoo, and the animals are not there to entertain you.

And it's not a breeding facility; they aren't trying to produce more of these animals here.

I would say it's more like a senior citizen facility for animals who have been taken from rather difficult circumstances and given a dramatically more humane situation in which to live out their lives.

Still, it was very nice and peaceful. The weather was superb, and we had all the time we wanted to stand quietly and watch the animals as they relaxed, contentedly, in their space.

Several of the staff were on hand, including the primary elephant keeper and the primary bear keeper, to answer questions and help explain what we were seeing and why.

And some of the sights were indeed unusual, such as the three custom transport containers that they use to move the elephants long distances (most recently used to bring three elephants from Toronto to California). This is not the sort of item you can get at your local hardware store!

For example, keeping bull elephants is rather different than keeping female elephants. The extraordinary strength and aggressive tendencies of the bull elephants mean that they must be housed in a particular situation, with a pen of fantastic strength. In some of the pictures, you can see, I think, the difference in the containment fences for the male elephant as opposed to those for the females. (Of course, the females are plenty strong enough; apparently they like to uproot the oak trees just for fun, and so the facility has built massive protective cages around some of the trees to try to keep the ladies from clearing them out entirely.)

I think the highlight, for me, was the four Siberian Tigers, absolutely majestic animals, who were all together in one pen and were particularly active, bounding around their space, playing together, alertly aware of everything and everyone around them.

There's lots of information about PAWS on their website. It's not obvious what will become of the organization now that its founder, Pat Derby, has passed on. But from all evidence they are still going strong, and hopefully they will find a new generation to continue their excellent work.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-17

Fri, 2014-10-17 18:58
Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part IV: Load Testing and Monitoring

Fri, 2014-10-17 13:02

Welcome to the final article in a series on my experience developing services with Apache Camel. I learned how to implement CXF endpoints using its Java DSL, made sure everything worked with its testing framework and integrated Spring Boot for external configuration. For previous articles, please see the following:

  • Part I: The Inspiration
  • Part II: Creating and Testing Routes
  • Part III: Integrating Spring 4 and Spring Boot

This article focuses on load testing and tools for monitoring application performance. In late July, I was asked to look into load testing the new Camel-based services I'd developed. My client's reason was simple: to make sure the new services were as fast as the old ones (powered by IBM Message Broker). I sent an email to the Camel users mailing list asking for advice on load testing.

I'm getting ready to put a Camel / CXF / Spring Boot application into production. Before I do, I want to load test and verify it has the same throughput as the IBM Message Broker system it's replacing. Apparently, the old system can only do 6 concurrent connections because of remote database connectivity issues.

I'd like to write some tests that make simultaneous requests, with different data. Ideally, I could write them to point at the old system and find out when it falls over. Then I could point them at the new system and tune it accordingly. If I need to throttle because of remote connectivity issues, I'd like to know before we go to production. Does JMeter or any Camel-related testing tools allow for this?

In reply, I received suggestions for Apache's ab tool and Gatling. I'd heard of Gatling before, and decided to try it.

Gatling

I don't remember where I first heard of Gatling, but I knew it had a Scala DSL and used Akka under the covers. I generated a new project using a Maven archetype and went to work developing my first test. My approach involved three steps:

  1. Write tests to run against the current system. Find the number of concurrent requests that make it fall over.
  2. Run tests against the new system and tune accordingly.
  3. Throttle requests if there are remote connectivity issues with 3rd parties. If I needed to throttle requests, I was planning to use Camel's Throttler (sketched after this list).
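For reference, here's a rough sketch of what that throttling could look like with Camel's Java DSL (the endpoint names are hypothetical, and 6 messages per second simply mirrors the old system's connection limit):

import org.apache.camel.builder.RouteBuilder;

public class ThrottledLookupRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:drugLookup")                // hypothetical input endpoint
            .throttle(6).timePeriodMillis(1000)  // allow at most 6 messages per second
            .to("direct:remoteDatabase");        // hypothetical 3rd-party call
    }
}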

To develop the first test, I started with Gatling's Recorder. I set it to listen on port 8000, changed my DrugServiceITest to use the same port and ran the integration test. This was a great way to get started because it captured my requests as XML files and produced clean, concise code.

I ended up creating a parent class for all simulations and named it AbstractSimulation. This was handy because it allowed me to pass in parameters for all the values I wanted to change.

import io.gatling.core.scenario.Simulation
import io.gatling.http.Predef._

/**
 * Base Simulation class that allows passing in parameters.
 */
class AbstractSimulation extends Simulation {

  val host = System.getProperty("host", "localhost:8080")
  val serviceType = System.getProperty("service", "modern")
  val nbUsers = Integer.getInteger("users", 10).toInt
  val rampRate = java.lang.Long.getLong("ramp", 30L).toLong

  val httpProtocol = http
    .baseURL("http://" + host)
    .acceptHeader("text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8")
    .doNotTrackHeader("1")
    .acceptLanguageHeader("en-US,en;q=0.5")
    .acceptEncodingHeader("gzip, deflate")
    .userAgentHeader("Gatling 2.0")

  val headers = Map(
    """Cache-Control""" -> """no-cache""",
    """Content-Type""" -> """application/soap+xml; charset=UTF-8""",
    """Pragma""" -> """no-cache""")
}

The DrugServiceSimulation.scala class posts a SOAP request over HTTP.

import io.gatling.core.Predef._
import io.gatling.http.Predef._

import scala.concurrent.duration._

class DrugServiceSimulation extends AbstractSimulation {

  val service = if ("modern".equals(serviceType)) "/api/drugs" else "/axis2/services/DrugService"

  val scn = scenario("Drug Service :: findGpiByNdc")
    .exec(http(host)
      .post(service)
      .headers(headers)
      .body(RawFileBody("DrugServiceSimulation_request.xml")))

  setUp(scn.inject(ramp(nbUsers users) over (rampRate seconds))).protocols(httpProtocol)
}

To run tests against the legacy drug service with 100 users over 60 seconds, I used the following command:

mvn test -Dhost=legacy.server:7802 -Dservice=legacy -Dusers=100 -Dramp=60

The service property's default is "modern" and determines the service's URL. To run against the local drug service with 100 users over 30 seconds, I could rely on more defaults.

mvn test -Dusers=100

The name of the simulation to run is configured in pom.xml:

<plugin>
    <groupId>io.gatling</groupId>
    <artifactId>gatling-maven-plugin</artifactId>
    <version>${gatling.version}</version>
    <configuration>
        <simulationsFolder>src/test/scala</simulationsFolder>
        <simulationClass>com.company.app.${service.name}Simulation</simulationClass>
    </configuration>
    <executions>
        <execution>
            <phase>test</phase>
            <goals>
                <goal>execute</goal>
            </goals>
        </execution>
    </executions>
</plugin>

When the simulations were done running, the console displayed a link to some pretty snazzy HTML reports. I ran simulations until things started falling over on the legacy server. That happened at around 400 requests per second (rps). When I ran them against a local instance on my fully-loaded 2013 MacBook Pro, errors started flying at 4000 rps, while 3000 rps performed just fine.

Jenkins

I configured simulations to run in Jenkins with the Gatling Plugin. It's a neat plugin that allows you to record and compare results over time. After initial setup, I found I didn't use it much. Instead, I created a Google Doc with my findings and created screenshots of results so my client had it in an easy-to-read format.

Data Feeders

I knew the results of the simulations were likely skewed, since the same request was used for all users. I researched how to make dynamic requests with Gatling and found Feeders. Using a JDBC Feeder, I was able to make all the requests contain unique data for each user.

I added a feeder to DrugServiceSimulation, added it to the scenario, and switched to an ELFileBody so the feeder would substitute a ${NDC} variable in the XML file.

val feeder = jdbcFeeder("jdbc:db2://server:50002/database", "username", "password",
  "SELECT NDC FROM GENERICS")

val scn = scenario("Drug Service")
  .feed(feeder)
  .exec(http(host)
    .post(service)
    .headers(headers)
    .body(ELFileBody("DrugServiceSimulation_request.xml")))

I deployed the new services to a test server and ran simulations with 100 and 1000 users.

100 users over 30 seconds
Neither service had any failures with 100 users. The max response time for the legacy service was 389 ms, while the new service's was 172 ms. The mean response time was lower for the legacy service: 89 ms vs. 96 ms.

1000 users over 60 seconds
When simulating 1000 users against the legacy services, 50% of the requests failed and the average response time was over 40 seconds. Against the new services, all requests succeeded and the mean response time was 100 ms.

I was pumped to see the new services didn't need any additional performance enhancements. These results were enough to convince my client that Apache Camel was going to be a performant replacement for IBM Message Broker.

I wrote more simulations for another service I developed. In doing so, I discovered I had missed implementing a couple of custom routes for some clients. The dynamic feeders made me stumble onto this because they executed simulations for all clients. After developing the routes, the dynamic data helped me uncover a few more bugs. Using real data for load testing was very helpful in figuring out the edge cases our routes needed to handle.

Next, I started configuring logging for our new Camel services.

Logging with Log4j2

Log4j 2.0 had just been released and my experience integrating it in AppFuse motivated me to use it for this project. I configured Spring to use Log4j 2.0 by specifying the following dependencies. Note: Spring Boot 1.2+ has support for Log4j2.

<log4j.version>2.0</log4j.version>
...
<!-- logging -->
<dependency>
    <groupId>org.slf4j</groupId>
    <artifactId>slf4j-api</artifactId>
    <version>1.7.7</version>
</dependency>
<!-- Necessary to configure Spring logging with log4j2.xml -->
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-jcl</artifactId>
    <version>${log4j.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-slf4j-impl</artifactId>
    <version>${log4j.version}</version>
</dependency>
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-web</artifactId>
    <version>${log4j.version}</version>
</dependency>

I created a src/main/resources/log4j2.xml file and configured a general log, as well as one for each route. I configured each route to use "log:com.company.app.route.input" and "log:com.company.app.route.output" instead of "log:input" and "log:output". This allowed the log-file-per-route configuration you see below.
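On the Camel side, that convention looks roughly like this (a sketch using a hypothetical "drugs" route; the real route and package names differed):

public class DrugRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("direct:gpiRequest")
            .to("log:com.company.app.drugs.input")    // instead of log:input
            .to("sql:{{sql.selectGpi}}")
            .to("log:com.company.app.drugs.output");  // instead of log:output
    }
}

The com.company.app.drugs logger in the configuration below then directs those entries to drug-service.log.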

<?xml version="1.0" encoding="UTF-8"?>
<Configuration>
    <Properties>
        <Property name="fileLogDir">/var/log/app-name</Property>
        <Property name="fileLogPattern">%d %p %c: %m%n</Property>
        <Property name="fileLogTriggerSize">1 MB</Property>
        <Property name="fileLogRolloverMax">10</Property>
    </Properties>
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d [%-15.15t] %-5p %-30.30c{1} %m%n"/>
        </Console>
        <RollingFile name="File" fileName="${fileLogDir}/all.log"
                     filePattern="${fileLogDir}/all-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout pattern="${fileLogPattern}"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="${fileLogTriggerSize}"/>
            </Policies>
            <DefaultRolloverStrategy max="${fileLogRolloverMax}"/>
        </RollingFile>
        <RollingFile name="DrugServiceFile" fileName="${fileLogDir}/drug-service.log"
                     filePattern="${fileLogDir}/drug-service-%d{yyyy-MM-dd}-%i.log">
            <PatternLayout pattern="${fileLogPattern}"/>
            <Policies>
                <SizeBasedTriggeringPolicy size="${fileLogTriggerSize}"/>
            </Policies>
            <DefaultRolloverStrategy max="${fileLogRolloverMax}"/>
        </RollingFile>
        <!-- Add a RollingFile for each route -->
    </Appenders>
    <Loggers>
        <Logger name="org.apache.camel" level="info"/>
        <Logger name="org.springframework" level="error"/>
        <Logger name="com.company.app" level="info"/>
        <Root level="error">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="File"/>
        </Root>
        <Logger name="com.company.app.drugs" level="debug">
            <AppenderRef ref="DrugServiceFile"/>
        </Logger>
        <!-- Add a Logger for each route -->
    </Loggers>
</Configuration>

I did run into some issues with this configuration:

  • The /var/log/app-name directory has to exist or there's a stacktrace on startup and no logs are written.
  • When deploying from Jenkins, I ran into permissions issues between deploys. To fix this, I chowned the directory before restarting Tomcat:

    chown -R tomcat /var/log/app-name
    /etc/init.d/tomcat start

Monitoring

While I was configuring the new services on our test server, I also installed hawtio at /console. I had previously configured it to run in Tomcat when running "mvn tomcat7:run":

<plugin> <groupId>org.apache.tomcat.maven</groupId> <artifactId>tomcat7-maven-plugin</artifactId> <version>2.2</version> <configuration> <path>/</path> <webapps> <webapp> <contextPath>/console</contextPath> <groupId>io.hawt</groupId> <artifactId>hawtio-web</artifactId> <version>1.4.19</version> <type>war</type> <asWebapp>true</asWebapp> </webapp> </webapps> </configuration> ... </plugin>

hawtio has a Camel plugin that's pretty slick. It shows all your routes and their runtime metrics; you can even edit the source code for routes. Even though I used a Java DSL, my routes are only editable as XML in hawtio. Claus Ibsen has a good post on Camel's new Metrics Component. I'd like to learn how to build a custom dashboard for hawtio - Claus's example looks pretty nice.
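If I read Claus's post right, wiring a basic metric into a route is as simple as adding a metrics endpoint; here's a small sketch (the metric name is made up, and the endpoints are borrowed from earlier in this series):

from("direct:gpiRequest")
    .to("metrics:meter:gpiRequests")  // mark each request on a meter for JMX/hawtio
    .to("sql:{{sql.selectGpi}}");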

The Spring Boot plugin for hawtio is not nearly as graphically intensive. Instead, it just displays metrics and their values in a table format.

There are some good-looking Spring Boot Admin UIs out there, notably JHipster's and the one in spring-boot-admin. I hope the hawtio Spring Boot plugin gets prettier as it matures.

I wanted more than just monitoring; I wanted alerts when something went wrong. For that, I installed New Relic on our Tomcat server. I'm fond of getting the Monday reports, but they only showed activity when I was load testing.

I believe all these monitoring tools will be very useful once the app is in production. My last day with this client is next Friday, October 24. I'm trying to finish up the last couple of services this week and next. With any luck, their IBM Message Broker will be replaced this year.

Summary

This article shows how to use Gatling to load test a SOAP service and how to configure Log4j2 with Spring Boot. It also shows how hawtio can help monitor and configure a Camel application. I hope you enjoyed reading this series on what I learned about developing with Camel over the past several months. If you have stories about your experience with Camel (or similar integration frameworks), Gatling, hawtio or New Relic, I'd love to hear them.

It's been a great experience and I look forward to developing solid apps, built on open source, for my next client. I'd like to get back into HTML5, AngularJS and mobile development. I've had a good time with Spring Boot and JHipster this year and hope to use them again. I find myself using Java 8 more and more; my ideal next project would embrace it as a baseline. As for Scala and Groovy, I'm still a big fan and believe I can develop great apps with them.

If you're looking for a UI/API Architect that can help accelerate your projects, please let me know! You can learn more about my extensive experience from my LinkedIn profile.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-16

Thu, 2014-10-16 18:58
  • Landlords not liable for tenants’ water bills

    What an utter fuckup. Business as usual for Irish Water:

    However the spokeswoman said application packs for rented dwellings would be addressed to the landlord, at the landlord’s residence, and it would be the landlord’s responsibility to ensure the tenant received the application pack. Bills are to be issued quarterly, but as Irish Water will have the tenant’s PPS number, the utility firm will be able to pursue the tenant for any arrears and even apply any arrears to new accounts, when the tenant moves to a new address. Last week landlords had expressed concern over potential arrears, the liability for them and the possibility of being used as collection agents by Irish Water.

    (tags: landlords ireland irish-water tenancy rental ppsn)

  • Irish Water responds to landlords’ questions

    ugh, what a mess….

    * Every rental unit in the State is to get a pack addressed personally to the occupant. If Irish Water does not have details of a tenant, the pack will be addressed to ‘The Occupier’
    * Packs will only be issued to individual rental properties in so far as Irish Water is aware of them
    * Landlords can contact Irish Water to advise they have let a property
    * Application Packs are issued relative to the information on the Irish Water mailing list. If this is incorrect or out of date, landlords can contact Irish Water to have the information adjusted
    * Irish Water will contact known landlords after the initial customer application campaign, to advise of properties for which no application has been received
    * Irish Water said that when a household is occupied the tenant is liable and when vacant the owner is liable. Both should advise Irish Water of change of status to the property – the tenant to cease liability, the landlord to take it up. Either party may take a reading and provide it to Irish Water, alternatively Irish Water will bill on average consumption, based on the date of change.

    (tags: irish-water water ireland liability bills landlords tenancy rental)

Categories: FLOSS Project Planets

Bryan Pendleton: He had me at "the Largest Ship in the World"

Thu, 2014-10-16 18:00

Don't miss Alastair Philip Wiper's photo-journalism essay about the building of the new Maersk Triple-E container vessels: Building the Largest Ship In the World, South Korea

The Daewoo Shipbuilding and Marine Engineering (DSME) shipyard in South Korea is the second largest shipbuilder in the world and one of the “Big Three” shipyards of South Korea, along with the Hyundai and Samsung shipyards. The shipyard, about an hour from Busan in the south of the country, employs about 46,000 people, and could reasonably be described as the world's biggest Legoland. Smiling workers cycle around the huge shipyard as massive, abstractly over proportioned chunks of ships are craned around and set into place: the Triple E is just one small part of the output of the shipyard, as around 100 other vessels including oil rigs are in various stages of completion at any one time.
Categories: FLOSS Project Planets

Ioan Eugen Stan: Modular REST applications with Karaf features for OSGi Jax-RS Connector

Thu, 2014-10-16 16:03
The purpose of this article is to let you know how easy it is to develop modular REST (JAX-RS) applications on OSGi, particularly Apache Karaf.

For some time I've been working on improving the way I deliver applications. My focus is on quality, ease of understanding and speed of delivery. My framework of choice is the OSGi platform (mostly used on top of Karaf, but I'm getting bolder - going for bare-bone, native containers).

Regarding web applications, I admit I don't like sessions and am strongly inclined to develop stateless applications. Since I like standards and the benefits they provide, my choice of web framework has narrowed down to JAX-RS, for which there are a few implementations.

I came across a project called osgi-jax-rs-connector, whose aim is to simplify web application development using JAX-RS on OSGi. The way it works is you write your JAX-RS annotated resources and publish them in the OSGi service registry. Once there, the JAX-RS Publisher from osgi-jax-rs-connector will find them, take notice of the annotations and publish them. That's it.
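To illustrate the idea, here's a minimal sketch of such a resource (my own example, not code from the project), published as an OSGi service with Declarative Services annotations:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import org.osgi.service.component.annotations.Component;

// Registering the resource as an OSGi service is all it takes; the connector's
// Publisher spots the @Path annotation and wires the object into the HttpService.
@Component(service = GreetingResource.class)
@Path("/greeting")
public class GreetingResource {

    @GET
    @Produces("text/plain")
    public String greeting() {
        return "Hello from OSGi";
    }
}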

In the project README on GitHub, you will find links to articles detailing the whole process.

All I did was add a features file for Apache Karaf so you can try it out easily. I've made a pull request with my code to make it part of the original code base, and hopefully it will be merged soon.

I'll reproduce the steps below. You start by building the project and installing the features in Apache Karaf:

feature:repo-add mvn:com.eclipsesource.jaxrs/features/0.0.1-SNAPSHOT/xml/features
feature:install scr http
feature:install jax-rs-connector jax-rs-provider-moxy
install mvn:com.eclipsesource.jaxrs/jax-rs-sample/0.0.1-SNAPSHOT

After this, just go to: http://localhost:8181/services/greeting

In the meantime, you can check out the whole project on my GitHub account: step by step.

There are other solutions out there for publishing JAX-RS resources using the OSGi HttpService. Another interesting approach is Neil Bartlett's JAX-RS OSGi extender. The main advantage (in my opinion) of the approach taken by Connector is that you publish objects, instead of the extender building them for you. This means I am free to choose the way I build my objects, and I also have the opportunity to inject dependencies into them before I publish them - hello CDI. I can build my objects using CDI via pax-cdi or with declarative services (as you can see in my sample code), and I am free to inject stuff into them before I expose them for registration with the HttpService. That is a pretty powerful thing. I hope to show you how this is done soon.

Categories: FLOSS Project Planets

Bryan Pendleton: Stuff I'm reading, mid-October edition

Wed, 2014-10-15 22:43

There was wind last night, but no rain.

Rain to the north, they say.

But not here.

  • Harvest and Yield: Not A Natural Cure for Tradeoff Confusion

    Yield is the availability metric that most practitioners end up working with, and it's worth noting that it's different from CAP's A. The authors don't define it formally, but treat it as a long-term probability of response rather than the probability of a response conditioned on there being a failure. That's a good common-sense definition, and one that fits well with the way that most practitioners think about availability.

  • Apple's "Warrant-Proof" Encryption

    Code is often buggy and insecure; the more code a system has, the less likely it is to be secure. This is an argument that has been made many times in this very context, ranging from debates over the Clipper Chip and key escrow in the 1990s to a recent paper by myself, Matt, Susan Landau, and Sandy Clark. The number of failures in such systems has been considerable; while it is certainly possible to write more secure code, there's no reason to think that Apple has done so here. (There's a brand-new report of a serious security hole in iOS.) Writing secure code is hard. The existence of the back door, then, enables certain crimes: computer crimes. Add to that the fact that the new version of iOS will include payment mechanisms and we see the risk of financial crimes as well.

  • Keyless SSL: The Nitty Gritty Technical Details

    Extending the TLS handshake in this way required changes to the NGINX server and OpenSSL to make the private key operation both remote and non-blocking (so NGINX can continue with other requests while waiting for the key server). Both the NGINX/OpenSSL changes and the protocol between CloudFlare's server and the key server were audited by iSEC Partners and Matasano Security. They found the security of Keyless SSL equivalent to on-premise SSL. Keyless SSL has also been studied by academic researchers from both provable security and performance angles.

  • Intel® SGX for Dummies (Intel® SGX Design Objectives)

    At its root, Intel® SGX is a set of new CPU instructions that can be used by applications to set aside private regions of code and data. But looking at the technology upward from the instructions is analogous to trying to describe an animal by examining its DNA chain. In this short post I will try to uplevel things a bit by outlining the objectives that guided the design of Intel® SGX and provide some more detail on two of the objectives.

  • Ads Don't Work That Way

    The key differentiating factor between the two mechanisms (inception and imprinting) is how conspicuous the ad needs to be. Insofar as an ad works by inception, its effect takes place entirely between the ad and an individual viewer; the ad doesn't need to be conspicuous at all. On the other hand, for an ad to work by cultural imprinting, it needs to be placed in a conspicuous location, where viewers will see it and know that others are seeing it too.

  • The ultimate weapon against GamerGate time-wasters: a 1960s chat bot that wastes their time

    Alan Turing proposed that an artificial intelligence qualified as capable of thought if a human subject, in conversation with it and another human, cannot tell them apart; the strange thing about the Eliza Twitter bot is it doesn't come across as any more like a machine than those who keep repeating their points over and over and over, ad nauseam. It's difficult to decide who's failed the Turing test here.

  • Gabriel Knight's Creator Releases Incredible 20th Anniversary Remake

    Staring at the remake version brings all those old memories of DOS mouse drivers and command prompts flooding back. Gazing at protagonist Gabriel Knight's dazzling, polychromatic bookstore (your base of operations in New Orleans as the game begins) is like seeing the mental interpolation your brain made of the original pixelated wash beautifully, if weirdly, reified.

  • Bridge Troll

    I know this sounds a bit crazy, but trust me, there's a troll up there! He or she, it's tough to tell the gender of trolls, is approximately two feet tall, made of steel, and perched atop the southern end of the transverse concrete beam where the eastern cable makes contact with the road deck. The troll cannot be seen by car or from the bike path next to the bridge—you need to be underneath the bridge, on a boat to actually see the bridge troll.

  • Don't Mourn the Passing of the New York Times Chess Column

    If those who know enough about the game to understand the diagrams in a newspaper chess column can access thousands of times more information, free and instantly, than a weekly column could possibly provide, then why run one at all? The answer is that most weekly newspaper chess columns don't need to exist and won't in the near future. The one exception: when there's an excellent writer and chess professional at the helm, someone like Robert Byrne.

  • Serbia vs. Albania in Belgrade brings their troubled history to the fore

    But even if football takes the headlines, there is still the sense that Tuesday night might be an opportunity missed. On October 22, Albanian Prime Minister Edi Rama will visit Belgrade to discuss bilateral relations with his Serbian counterpart, Aleksandar Vukic. No Albanian leader has visited Belgrade since Enver Hoxha in 1946.

    It is significant, and maybe it brings a glimmer of hope that a repeat of Tuesday's fixture might one day be all about the game instead. Having a harmonious football match to oil the conversation would have done little harm, but the anticipation of that noticeable absense inside Partizan Stadium stands as a reminder that sport does not always have the power to untangle wider complexities.

  • In Transition

    We picked 10 of the most progressive skaters to choose one location each and film a full part.

  • Things I Won't Work With: Dioxygen Difluoride

    The paper goes on to react FOOF with everything else you wouldn't react it with: ammonia ("vigorous", this at 100K), water ice (explosion, natch), chlorine ("violent explosion", so he added it more slowly the second time), red phosphorus (not good), bromine fluoride, chlorine trifluoride (say what?), perchloryl fluoride (!), tetrafluorohydrazine (how on Earth. . .), and on, and on. If the paper weren't laid out in complete grammatical sentences and published in JACS, you'd swear it was the work of a violent lunatic. I ran out of vulgar expletives after the second page. A. G. Streng, folks, absolutely takes the corrosive exploding cake, and I have to tip my asbestos-lined titanium hat to him.
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-15

Wed, 2014-10-15 18:58
Categories: FLOSS Project Planets

Bruce Snyder: More Yak Shaving to Install Git Via MacPorts But With MacOS X Mavericks

Wed, 2014-10-15 15:05
After performing a fresh install of MacOS X Mavericks on two computers recently, I had to reinstall a bunch of other tools, including git. In doing so, I realized that the configurations in my old .bash_profile would no longer work correctly, so I had to update the location of a couple of files.
I blogged about this previously so I had already done some research about installing MacPorts on a computer and I knew that I'd need to install the command-line tools for XCode first. So I installed XCode first using the App Store and then followed the instructions to install the XCode command-line tools. Once these were installed, then I was able to proceed with the installation of MacPorts. 
So I downloaded and installed MacPorts, then ran sudo port selfupdate followed by sudo port install bash to get the latest version of bash. So that the OS can make use of this newer version of bash, I added its path to the /etc/shells file. To actually make use of this updated version of bash, I had to make my terminal aware of it. I always use iTerm2 instead of the Apple terminal; it's just so much more powerful. So I changed my profile in iTerm2 to launch this version of bash by adding the bash login command to be run when iTerm2 opens a new terminal: /opt/local/bin/bash -l. Now when iTerm2 starts up, it logs in to this newer version of bash automatically for me. Now I'm ready to install the git tools.
To install git, I ran the command in the terminal that I've always run to install git via MacPorts: sudo port install git-core. Unfortunately it failed, but the answer is right there in the error -- instead of installing git-core, just install git. So I ran the command again with the different name: sudo port install git.
This got further, but threw another error about the readline utility, so I had to force-activate readline. Beyond that, everything installed correctly.

However, the git bash completion didn't seem to work. I was seeing the following error from my .bash_profile when the bash prompt was getting set up:
bash: __git_ps1: command not found
It turns out that because the name of git in MacPorts changed from git-core to just git, the paths to the git bash completion scripts had changed as well. All I really had to do was update my .bash_profile from this:
/opt/local/share/git-core/contrib/completion/git-completion.bash
to this:
/opt/local/share/git/contrib/completion/git-completion.bash
and everything worked correctly.
Categories: FLOSS Project Planets

Bruce Snyder: Installing PostgreSQL 9.4 beta2 on Mac OS X 10.9.4 via MacPorts

Wed, 2014-10-15 15:01
After reading the blog post from EnterpriseDB about how Postgres Outperforms MongoDB, and because I have always preferred PostgreSQL to other databases, I had to check out the document handling capabilities that PostgreSQL has added recently.

Because I began using a newer computer this summer, I had not yet installed PostgreSQL. So I pulled up a previous post about installing PostgreSQL using MacPorts and did some searches to find the latest PostgreSQL. Below are the commands I ran.

First, I needed to figure out what the latest version of PostgreSQL is in MacPorts:

This allowed me to see that PostgreSQL 9.4 beta2 is the latest version supported by MacPorts. So I embarked upon an installation of this version:
This install went off without a hitch, so I created a directory for the database and initialized the database:
From the output of the installation, I copy/pasted the startup command and sent it to a file. I did the same for both the start and stop commands so that I have scripts to start and stop PG quickly:
After starting up PG for the first time, I opened another terminal in another tab to watch the log file to see if the database was started correctly:
Then I pulled up the docs via the local install of them (file:///opt/local/share/doc/postgresql94/html/index.html) and started digging into the document database support to play around.
Categories: FLOSS Project Planets

Bruce Snyder: How to Test For the Shellshock Vulnerability and Upgrade Bash Using MacPorts on Mac OS X 10.9.4

Wed, 2014-10-15 14:54
Given all the hype recently over the bash Shellshock vulnerability, no matter what operating system being used, any affected version of bash should be patched and/or upgraded immediately.

You can quickly test your operating system to see if your bash version is vulnerable by following instructions on the Shellshocker website. TLDR, here is the command you need to run to test bash on your machine:

Note that the version of bash in my path (the newer one from MacPorts) is not affected by the vuln. Now I will test the version of bash installed as /bin/bash:


Notice that I piped the script directly to /bin/bash instead of relying upon the version of bash in my PATH. Because I have already installed Apple's update (noted below), /bin/bash is not affected either.
Apple Update

Apple has already released an update containing a patched bash version, so it's very easy to update the standard bash version located in /bin/bash. But, if you are like me and you are using MacPorts to manage many binaries within Mac OS X, you may not be using the version of bash installed by Apple.
Use of MacPorts to Upgrade Bash

I have used MacPorts for years and I continue to get grief from people who love Homebrew. I must say that I do like both, but for some reason I have always kept coming back to MacPorts. Anyway, if you are using MacPorts then upgrading to the patched version of bash is especially easy. Below are the commands to upgrade bash:


Notice that, first, I told MacPorts to update its cache of ports; second, I told MacPorts to tell me what ports had been upgraded; and third, I told MacPorts to upgrade only the bash port and not every port that has an upgrade (as arbitrarily upgrading random binaries can have side effects).
Categories: FLOSS Project Planets

Heshan Suriyaarachchi: How to find which method called the current method at runtime

Wed, 2014-10-15 14:31

I’m using the following helper class to find which method called the current method (at runtime).

import java.lang.reflect.Method;
import java.security.AccessController;
import java.security.PrivilegedAction;

public class TraceHelper {

    private volatile static Method m;
    private static Throwable t;

    public TraceHelper() {
        try {
            // Cache the package-private Throwable.getStackTraceElement(int) method,
            // which returns a single frame without building the whole stack trace array.
            m = Throwable.class.getDeclaredMethod("getStackTraceElement", int.class);
            AccessController.doPrivileged(
                new PrivilegedAction<Object>() {
                    public Object run() {
                        m.setAccessible(true);
                        return null;
                    }
                }
            );
            t = new Throwable();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public String getMethodName(final int depth, boolean useNew) {
        // Reuse the cached Throwable unless the caller asks for a fresh one.
        return getMethod(depth, t != null && !useNew ? t : new Throwable());
    }

    public String getMethod(final int depth, Throwable t) {
        try {
            StackTraceElement element = (StackTraceElement) m.invoke(t, depth + 1);
            return element.getClassName() + "$" + element.getMethodName();
        } catch (Exception e) {
            e.printStackTrace();
            return null;
        }
    }
}
Then I use the following code inside my aspect to get the name of the method which called my current (executing) method.
String previousMethodName = new TraceHelper().getMethodName(2, false);
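For comparison, the same lookup can be done with plain JDK calls, at the cost of materializing the entire stack trace array, which is exactly what the reflective getStackTraceElement call above avoids:

// Index 0 is Thread.getStackTrace itself, index 1 is the current method,
// so index 2 is the method that called it.
StackTraceElement caller = Thread.currentThread().getStackTrace()[2];
String callerName = caller.getClassName() + "$" + caller.getMethodName();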
Categories: FLOSS Project Planets

Heshan Suriyaarachchi: Include/exclude sources when using aspect-maven-plugin

Wed, 2014-10-15 14:12

The AspectJ Maven plugin will add all .java and .aj files in the project source directories by default. By using <include/> and <exclude/> tags, you can add filtering on top of that.

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>aspectj-maven-plugin</artifactId>
    <version>1.5</version>
    <configuration>
        <complianceLevel>1.7</complianceLevel>
        <source>1.7</source>
        <target>1.7</target>
        <sources>
            <source>
                <!--<basedir>src/main/java</basedir>-->
                <!--<includes>-->
                <!--<include>**/TransationAspect.java</include>-->
                <!--</includes>-->
                <excludes>
                    <exclude>**/DcXferHandler.java</exclude>
                    <!--<exclude>**/ChunkingSqlSerActor.java</exclude>-->
                </excludes>
            </source>
        </sources>
    </configuration>
    <executions>
        <execution>
            <!--<phase>process-sources</phase>-->
            <goals>
                <goal>compile</goal>
                <goal>test-compile</goal>
            </goals>
        </execution>
    </executions>
</plugin>
Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part III: Integrating Spring 4 and Spring Boot

Wed, 2014-10-15 14:00

This article is the third in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. I used Apache Camel for several months this summer to create a number of SOAP services. These services performed various third-party data lookups for our customers. For previous articles, see Part I: The Inspiration and Part II: Creating and Testing Routes.

In late June, I sent an email to my client's engineering team. Its subject: "External Configuration and Microservices". I recommended we integrate Spring Boot into the Apache Camel project I was working on. I told them my main motivation was its external configuration feature. I also pointed out its container-less WAR feature, where Tomcat (or Jetty) is embedded in the WAR and you can start your app with "java -jar appname.war". I mentioned microservices and that Spring Boot would make it easy to split the project into a project-per-service structure if we wanted to go that route. I then asked two simple questions:

  1. Is it OK to integrate Spring Boot?
  2. Should I split the project into microservices?

Both of these suggestions were well received, so I went to work.

Spring 4

Before I integrated Spring Boot, I knew I had to upgrade to Spring 4. The version of Camel I was using (2.13.1) did not support Spring 4. I found issue CAMEL-7074 (Support spring 4.x) and added a comment to see when it would be fixed. After fiddling with dependencies and trying Camel 2.14-SNAPSHOT, I was able to upgrade to CXF 3.0. However, this didn't solve my problem. There were some API-incompatible changes between Spring 3.2.x and Spring 4.0.x, and the camel-test-spring module wouldn't work with both. I proposed the following:

I think the easiest way forward is to create two modules: camel-test-spring and camel-test-spring3. The former compiles against Spring 4 and the latter against Spring 3. You could switch it so camel-test-spring defaults to Spring 3, but camel-test-spring4 doesn't seem to be forward-looking, as you hopefully won't need a camel-test-spring5.

I've made this change in a fork and it works in my project. I can upgrade to Camel 2.14-SNAPSHOT and CXF 3.0 with Spring 3.2.8 (by using camel-test-spring3). I can also upgrade to Spring 4 if I use the upgraded camel-test-spring.

Here's a pull request that has this change: https://github.com/apache/camel/pull/199

The Camel team integrated my suggested change a couple weeks later. Unfortunately, a similar situation happened with Spring 4.1, so you'll have to wait for Camel 2.15 if you want to use Spring 4.1.

After making a patched 2.14-SNAPSHOT version available to my project, I was able to upgrade to Spring 4 and CXF 3 with a few minor changes to my pom.xml.

 <properties>
     <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
     <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
-    <camel.version>2.13.1</camel.version>
-    <cxf.version>2.7.11</cxf.version>
-    <spring.version>3.2.8.RELEASE</spring.version>
+    <camel.version>2.14-SNAPSHOT</camel.version>
+    <cxf.version>3.0.0</cxf.version>
+    <spring.version>4.0.5.RELEASE</spring.version>
 </properties>
 ...
+<!-- upgrade camel-spring dependencies -->
+<dependency>
+    <groupId>org.springframework</groupId>
+    <artifactId>spring-context</artifactId>
+    <version>${spring.version}</version>
+</dependency>
+<dependency>
+    <groupId>org.springframework</groupId>
+    <artifactId>spring-aop</artifactId>
+    <version>${spring.version}</version>
+</dependency>
+<dependency>
+    <groupId>org.springframework</groupId>
+    <artifactId>spring-tx</artifactId>
+    <version>${spring.version}</version>
+</dependency>

I also had to change some imports for CXF 3.0 since it includes a new major version of Apache WSS4J (2.0.0).

-import org.apache.ws.security.handler.WSHandlerConstants;
+import org.apache.wss4j.dom.handler.WSHandlerConstants;
...
-import org.apache.ws.security.WSPasswordCallback;
+import org.apache.wss4j.common.ext.WSPasswordCallback;

After getting everything upgraded, I continued developing services for the next couple weeks.

Spring Boot

In late July, I integrated Spring Boot. It was fairly straightforward and mostly consisted of adding/removing dependencies and removing versions already defined in Boot's starter-parent.

+<parent>
+    <groupId>org.springframework.boot</groupId>
+    <artifactId>spring-boot-starter-parent</artifactId>
+    <version>1.1.4.RELEASE</version>
+</parent>
 ...
 <cxf.version>3.0.1</cxf.version>
+<java.version>1.7</java.version>
+<servlet-api.version>3.1.0</servlet-api.version>
 <spring.version>4.0.6.RELEASE</spring.version>
 ...
-    <artifactId>maven-compiler-plugin</artifactId>
-    <version>2.5.1</version>
-    <configuration>
-        <source>1.7</source>
-        <target>1.7</target>
-    </configuration>
-</plugin>
-<plugin>
-    <groupId>org.apache.maven.plugins</groupId>
     <artifactId>maven-resources-plugin</artifactId>
 </plugin>
+<plugin>
+    <groupId>org.springframework.boot</groupId>
+    <artifactId>spring-boot-maven-plugin</artifactId>
+</plugin>
 </plugins>
 </build>
 <dependencies>
+    <!-- spring boot -->
+    <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-actuator</artifactId>
+        <exclusions>
+            <exclusion>
+                <groupId>org.springframework.boot</groupId>
+                <artifactId>spring-boot-starter-logging</artifactId>
+            </exclusion>
+        </exclusions>
+    </dependency>
+    <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-log4j</artifactId>
+    </dependency>
+    <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-tomcat</artifactId>
+        <scope>provided</scope>
+    </dependency>
+    <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-web</artifactId>
+    </dependency>
     <!-- camel -->
     ...
-    <!-- upgrade camel-spring dependencies -->
-    <dependency>
-        <groupId>org.springframework</groupId>
-        <artifactId>spring-context</artifactId>
-        <version>${spring.version}</version>
-    </dependency>
-    <dependency>
-        <groupId>org.springframework</groupId>
-        <artifactId>spring-aop</artifactId>
-        <version>${spring.version}</version>
-    </dependency>
-    <dependency>
-        <groupId>org.springframework</groupId>
-        <artifactId>spring-tx</artifactId>
-        <version>${spring.version}</version>
-    </dependency>
     ...
-    <!-- logging -->
-    <dependency>
-        <groupId>org.slf4j</groupId>
-        <artifactId>slf4j-api</artifactId>
-        <version>1.7.6</version>
-    </dependency>
-    <dependency>
-        <groupId>org.slf4j</groupId>
-        <artifactId>slf4j-log4j12</artifactId>
-        <version>1.7.6</version>
-    </dependency>
-    <dependency>
-        <groupId>log4j</groupId>
-        <artifactId>log4j</artifactId>
-        <version>1.2.17</version>
-    </dependency>
-
     <!-- utilities -->
     <dependency>
         <groupId>joda-time</groupId>
         <artifactId>joda-time</artifactId>
-        <version>2.3</version>
     </dependency>
     <dependency>
         <groupId>commons-dbcp</groupId>
         <artifactId>commons-dbcp</artifactId>
-        <version>1.4</version>
     ...
     <!-- testing -->
     <dependency>
+        <groupId>org.springframework.boot</groupId>
+        <artifactId>spring-boot-starter-test</artifactId>
+        <exclusions>
+            <exclusion>
+                <groupId>org.springframework.boot</groupId>
+                <artifactId>spring-boot-starter-logging</artifactId>
+            </exclusion>
+        </exclusions>
+    </dependency>
+    <dependency>
-    <dependency>
-        <groupId>org.springframework</groupId>
-        <artifactId>spring-test</artifactId>
-        <version>${spring.version}</version>
-        <scope>test</scope>
-    </dependency>
-    <dependency>
-        <groupId>org.mockito</groupId>
-        <artifactId>mockito-core</artifactId>
-        <version>1.9.5</version>
-        <scope>test</scope>
-    </dependency>

Next, I deleted the AppInitializer.java class I mentioned in Part II and added an Application.java class.

import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.boot.context.embedded.ErrorPage;
import org.springframework.boot.context.embedded.ServletRegistrationBean;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;

@Configuration
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class,
        DataSourceTransactionManagerAutoConfiguration.class})
@ComponentScan
public class Application extends SpringBootServletInitializer {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }

    @Bean
    public ServletRegistrationBean servletRegistrationBean() {
        CXFServlet servlet = new CXFServlet();
        return new ServletRegistrationBean(servlet, "/api/*");
    }

    @Bean
    public EmbeddedServletContainerCustomizer containerCustomizer() {
        return new EmbeddedServletContainerCustomizer() {
            @Override
            public void customize(ConfigurableEmbeddedServletContainer container) {
                ErrorPage error401Page = new ErrorPage(HttpStatus.UNAUTHORIZED, "/401.html");
                ErrorPage error404Page = new ErrorPage(HttpStatus.NOT_FOUND, "/404.html");
                ErrorPage error500Page = new ErrorPage(HttpStatus.INTERNAL_SERVER_ERROR, "/500.html");
                container.addErrorPages(error401Page, error404Page, error500Page);
            }
        };
    }
}

The error pages you see configured above were copied from Tim Sporcic's Custom Error Pages with Spring Boot.

Dynamic DataSources

I excluded the DataSource-related AutoConfiguration classes because this application had many datasources. It also had a requirement to allow datasources to be added on-the-fly by simply editing application.properties. I asked how to do this on Stack Overflow and received an excellent answer from Stéphane Nicoll.
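The gist of the approach (my own condensed sketch, not Stéphane's exact code; the ds.names property and key layout are assumptions for illustration) is to enumerate the configured names and build a pool for each:

import java.util.HashMap;
import java.util.Map;
import javax.sql.DataSource;
import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

@Configuration
public class DataSourcesConfig {

    @Autowired
    private Environment env;

    // ds.names=drugs,orders (hypothetical property listing every datasource);
    // each name then has ds.<name>.driver/url/username/password entries.
    @Bean
    public Map<String, DataSource> dataSources() {
        Map<String, DataSource> dataSources = new HashMap<String, DataSource>();
        for (String name : env.getProperty("ds.names", "").split(",")) {
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName(env.getProperty("ds." + name + ".driver"));
            ds.setUrl(env.getProperty("ds." + name + ".url"));
            ds.setUsername(env.getProperty("ds." + name + ".username"));
            ds.setPassword(env.getProperty("ds." + name + ".password"));
            dataSources.put(name, ds);
        }
        return dataSources;
    }
}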

Spring Boot Issues

I did encounter a couple of issues after integrating Spring Boot. The first was that Spring Boot removed the content-* headers from CXF responses. This only happened when running the WAR in Tomcat, and I was able to figure out a workaround with a custom ResponseWrapper and Filter. This issue was fixed in Spring Boot 1.1.6.

The other issue was that the property override feature didn't seem to work when setting environment variables. The workaround was to create a setenv.sh script in $CATALINA_HOME/bin and add the environment variables there. See section 3.4 of Tomcat 7's RUNNING.txt for more information.

SOAP Faults

After upgrading to Spring 4 and integrating Spring Boot, I continued migrating IBM Message Broker flows. My goal was to make all new services backward compatible, but I ran into an issue. With the new services, SOAP Faults were sent back to the client instead of error messages in a SOAP Message. I suggested we fix it in one of two ways:

  1. Modify the client so it looks for SOAP Faults and handles them appropriately.
  2. Modify the new services so messages are returned instead of faults.

For #2, I learned how to convert faults to messages on the Camel user mailing list. However, the team opted to improve the client, and we added fault handling there instead.
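Had we gone with option #2, the technique amounts to handling the exception inside the route so CXF marshals a normal response instead of a fault. A minimal sketch, assuming a hypothetical error field on GpiResponse:

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class FaultTolerantRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // handled(true) stops the exception here, so the body we set below is
        // marshalled as a regular SOAP message rather than a SOAP Fault.
        onException(Exception.class)
            .handled(true)
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    Exception e = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                    GpiResponse response = new GpiResponse();  // hypothetical error payload
                    response.setErrorMessage(e.getMessage());  // assumes such a setter exists
                    exchange.getOut().setBody(response);
                }
            });

        // ... route definitions as before ...
    }
}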

Microservice Deployment

When I first integrated Spring Boot, I was planning on splitting our project into a project-per-service. This would allow each service to evolve on its own, instead of having a monolithic war that contains all the services. In team discussions, there was some concern about the memory overhead of running multiple instances instead of one.

I pointed out an interesting thread on the Camel mailing list about deploying routes with a route-per-jvm or all in the same JVM. The recommendation from that thread was to bundle similar routes together if you were to split them.

In the end, we decided to let our Operations team decide how they wanted to manage/deploy everything. I mentioned that Spring Boot can work with Tomcat, Jetty, JBoss and even cloud providers like Heroku and Cloud Foundry. I estimated that splitting the project apart would take less than a day, as would making it back into a monolithic WAR.

Summary

This article explains how we upgraded our Apache Camel application to Spring 4 and integrated Spring Boot. There was a bit of pain getting things to work, but nothing a few pull requests and workarounds couldn't fix. We discovered some issues with setting environment variables for Tomcat and opted not to split our project into small microservices. Hopefully this article will help people trying to Camelize a Spring Boot application.

In the next article, I'll talk about load testing with Gatling, logging with Log4j2 and monitoring with hawtio and New Relic.

Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part II: Creating and Testing Routes

Wed, 2014-10-15 14:00

This article is the second in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. The first article, Developing Services with Apache Camel - Part I: The Inspiration, describes why I chose Camel for this project.

To make sure these new services correctly replaced existing services, a 3-step approach was used:

  1. Write an integration test pointing to the old service.
  2. Write the implementation and a unit test to prove it works.
  3. Write an integration test pointing to the new service.

I chose to start by replacing the simplest service first. It was a SOAP service that talked to a database to retrieve a value based on an input parameter. To learn more about Camel and how it works, I started by looking at the CXF Tomcat Example. I learned that Camel is used to provide routing of requests. Using its CXF component, it can easily produce SOAP web service endpoints. An endpoint is simply an interface, and Camel takes care of producing the implementation.

Legacy Integration Test

I started by writing a LegacyDrugServiceTests integration test for the old drug service. I tried two different ways of testing, using WSDL-generated Java classes, as well as using JAX-WS's SOAP API. Finding the WSDL for the legacy service was difficult because IBM Message Broker doesn't expose it when adding "?wsdl" to the service's URL. Instead, I had to dig through the project files until I found it. Then I used the cxf-codegen-plugin to generate the web service client. Below is what one of the tests looked like that uses the JAX-WS API.

@Test
public void sendGPIRequestUsingSoapApi() throws Exception {
    SOAPElement bodyChildOne = getBody(message).addChildElement("gpiRequest", "m");
    SOAPElement bodyChildTwo = bodyChildOne.addChildElement("args0", "m");
    bodyChildTwo.addChildElement("NDC", "ax22").addTextNode("54561237201");

    SOAPMessage reply = connection.call(message, getUrlWithTimeout(SERVICE_NAME));

    if (reply != null) {
        Iterator itr = reply.getSOAPBody().getChildElements();
        Map resultMap = TestUtils.getResults(itr);
        assertEquals("66100525123130", resultMap.get("GPI"));
    }
}

Implementing the Drug Service

In the last article, I mentioned I wanted no XML in the project. To facilitate this, I used Camel's Java DSL to define routes and Spring's JavaConfig to configure dependencies.

The first route I wrote was one that looked up a GPI (Generic Product Identifier) by NDC (National Drug Code).

@WebService
public interface DrugService {

    @WebMethod(operationName = "gpiRequest")
    GpiResponse findGpiByNdc(GpiRequest request);
}

To expose this as a web service endpoint with CXF, I needed to do two things:

  1. Tell Spring how to configure CXF by importing "classpath:META-INF/cxf/cxf.xml" into a @Configuration class.
  2. Configure CXF's Servlet so endpoints can be served up at a particular URL.

To satisfy item #1, I created a CamelConfig class that extends CamelConfiguration. This class allows Camel to be configured by Spring's JavaConfig. In it, I imported the CXF configuration, allowed tracing to be configured dynamically, and exposed my application.properties to Camel. I also set it up (with @ComponentScan) to look for Camel routes annotated with @Component.

@Configuration
@ImportResource("classpath:META-INF/cxf/cxf.xml")
@ComponentScan("com.raibledesigns.camel")
public class CamelConfig extends CamelConfiguration {

    @Value("${logging.trace.enabled}")
    private Boolean tracingEnabled;

    @Override
    protected void setupCamelContext(CamelContext camelContext) throws Exception {
        PropertiesComponent pc = new PropertiesComponent();
        pc.setLocation("classpath:application.properties");
        camelContext.addComponent("properties", pc);

        // see if trace logging is turned on
        if (tracingEnabled) {
            camelContext.setTracing(true);
        }

        super.setupCamelContext(camelContext);
    }

    @Bean
    public Tracer camelTracer() {
        Tracer tracer = new Tracer();
        tracer.setTraceExceptions(false);
        tracer.setTraceInterceptors(true);
        tracer.setLogName("com.raibledesigns.camel.trace");
        return tracer;
    }
}

CXF has a servlet that's responsible for serving up its services at a common path. To map CXF's servlet, I leveraged Spring's WebApplicationInitializer in an AppInitializer class. I decided to serve up everything from a /api/* base URL.

package com.raibledesigns.camel.config;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;

public class AppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        servletContext.addListener(new ContextLoaderListener(getContext()));

        ServletRegistration.Dynamic servlet = servletContext.addServlet("CXFServlet", new CXFServlet());
        servlet.setLoadOnStartup(1);
        servlet.setAsyncSupported(true);
        servlet.addMapping("/api/*");
    }

    private AnnotationConfigWebApplicationContext getContext() {
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.setConfigLocation("com.raibledesigns.camel.config");
        return context;
    }
}

To implement this web service with Camel, I created a DrugRoute class that extends Camel's RouteBuilder.

@Component
public class DrugRoute extends RouteBuilder {

    private String uri = "cxf:/drugs?serviceClass=" + DrugService.class.getName();

    @Override
    public void configure() throws Exception {
        from(uri)
            .recipientList(simple("direct:${header.operationName}"));

        from("direct:gpiRequest").routeId("gpiRequest")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // get the ndc from the input
                    String ndc = exchange.getIn().getBody(GpiRequest.class).getNDC();
                    exchange.getOut().setBody(ndc);
                }
            })
            .to("sql:{{sql.selectGpi}}")
            .to("log:output")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // get the gpi from the input
                    List<HashMap> data = (ArrayList<HashMap>) exchange.getIn().getBody();
                    DrugInfo drug = new DrugInfo();
                    if (data.size() > 0) {
                        drug = new DrugInfo(String.valueOf(data.get(0).get("GPI")));
                    }
                    GpiResponse response = new GpiResponse(drug);
                    exchange.getOut().setBody(response);
                }
            });
    }
}

The sql.selectGpi property is read from src/main/resources/application.properties and looks as follows:

sql.selectGpi=select GPI from drugs where ndc = #?dataSource=ds.drugs

The "ds.drugs" reference is to a datasource that's created by Spring. From my AppConfig class:

@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig {

    @Value("${ds.driver.db2}") private String jdbcDriverDb2;
    @Value("${ds.password}") private String jdbcPassword;
    @Value("${ds.url}") private String jdbcUrl;
    @Value("${ds.username}") private String jdbcUsername;

    @Bean(name = "ds.drugs")
    public DataSource drugsDataSource() {
        return createDataSource(jdbcDriverDb2, jdbcUsername, jdbcPassword, jdbcUrl);
    }

    private BasicDataSource createDataSource(String driver, String username, String password, String url) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName(driver);
        ds.setUsername(username);
        ds.setPassword(password);
        ds.setUrl(url);
        ds.setMaxActive(100);
        ds.setMaxWait(1000);
        ds.setPoolPreparedStatements(true);
        return ds;
    }
}

Unit Testing

The hardest part about unit testing this route was figuring out how to use Camel's testing support. I posted a question to the Camel users mailing list in early June. Based on advice received, I bought Camel in Action, read chapter 6 on testing and went to work. I wanted to eliminate the dependency on a datasource, so I used Camel's AdviceWith feature to modify my route and intercept the SQL call. This allowed me to return pre-defined results and verify everything worked.

@RunWith(CamelSpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = CamelSpringDelegatingTestContextLoader.class, classes = CamelConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@UseAdviceWith
public class DrugRouteTests {

    @Autowired
    CamelContext camelContext;

    @Produce
    ProducerTemplate template;

    @EndpointInject(uri = "mock:result")
    MockEndpoint result;

    static List<Map> results = new ArrayList<Map>() {{
        add(new HashMap<String, String>() {{
            put("GPI", "123456789");
        }});
    }};

    @Before
    public void before() throws Exception {
        camelContext.setTracing(true);

        ModelCamelContext context = (ModelCamelContext) camelContext;
        RouteDefinition route = context.getRouteDefinition("gpiRequest");
        route.adviceWith(context, new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                interceptSendToEndpoint("sql:*").skipSendToOriginalEndpoint().process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        exchange.getOut().setBody(results);
                    }
                });
            }
        });
        route.to(result);

        camelContext.start();
    }

    @Test
    public void testMockSQLEndpoint() throws Exception {
        result.expectedMessageCount(1);

        GpiResponse expectedResult = new GpiResponse(new DrugInfo("123456789"));
        result.allMessages().body().contains(expectedResult);

        GpiRequest request = new GpiRequest();
        request.setNDC("123");
        template.sendBody("direct:gpiRequest", request);

        MockEndpoint.assertIsSatisfied(camelContext);
    }
}

I found AdviceWith to be extremely useful as I developed more routes and tests in this project. I used its weaveById feature to intercept calls to stored procedures, replace steps in my routes and remove steps I didn't want to test. For example, in one route, there was a complicated workflow to interact with a customer's data.

  1. Call a stored procedure in a remote database, which then inserts a record into a temp table.
  2. Lookup that data using the value returned from the stored procedure.
  3. Delete the record from the temp table.
  4. Parse the data (as CSV) since the returned value is ~ delimited.
  5. Convert the parsed data into objects, then do database inserts in a local database (if data doesn't exist).

To make matters worse, remote database access was restricted by IP address. This meant that, while developing, I couldn't even test manually from my local machine. To solve this, I used the following (sketched in code after the list):

  • interceptSendToEndpoint("bean:*") to intercept the call to my stored procedure bean.
  • weaveById("myJdbcProcessor").before() to replace the temp table lookup with a CSV file.
  • Mockito to mock a JdbcTemplate that does the inserts.
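Putting those three together, the advice looked roughly like this (a sketch only: the route id, endpoint id and canned values here are hypothetical stand-ins for the customer-specific ones):

ModelCamelContext context = (ModelCamelContext) camelContext;
RouteDefinition route = context.getRouteDefinition("customerSync"); // hypothetical route id
route.adviceWith(context, new AdviceWithRouteBuilder() {
    @Override
    public void configure() throws Exception {
        // never hit the remote database: stub out the stored procedure bean
        interceptSendToEndpoint("bean:*")
            .skipSendToOriginalEndpoint()
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    exchange.getOut().setBody("42"); // pretend key returned by the proc
                }
            });
        // feed canned ~-delimited data in place of the temp table lookup
        weaveById("myJdbcProcessor").before()
            .setBody(constant("ACME~54561237201~66100525123130"));
    }
});

Note that weaveById() requires extending AdviceWithRouteBuilder rather than the plain RouteBuilder used in the earlier test.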

To figure out how to configure and execute stored procedures in a route, I used the camel-store-procedure project on GitHub. Mockito's ArgumentCaptor also became very useful when developing a route that called a 3rd-party web service. James Carr has more information on how you might use this to verify values on an argument.
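As a hypothetical example of the ArgumentCaptor approach (the mocked JdbcTemplate and the SQL check here are illustrative, not the real route's):

JdbcTemplate mockJdbcTemplate = mock(JdbcTemplate.class); // injected into the route in place of the real bean

// ... run the route so it performs its inserts ...

ArgumentCaptor<String> sqlCaptor = ArgumentCaptor.forClass(String.class);
verify(mockJdbcTemplate, atLeastOnce()).update(sqlCaptor.capture());
assertTrue(sqlCaptor.getValue().toLowerCase().startsWith("insert"));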

To see if my tests were hitting all aspects of the code, I integrated the cobertura-maven-plugin for code coverage reports (generated by running mvn site).

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <configuration>
                <instrumentation>
                    <excludes>
                        <exclude>**/model/*.class</exclude>
                        <exclude>**/AppInitializer.class</exclude>
                        <exclude>**/StoredProcedureBean.class</exclude>
                        <exclude>**/SoapActionInterceptor.class</exclude>
                    </excludes>
                </instrumentation>
                <check/>
            </configuration>
            <version>2.6</version>
        </plugin>
        ...

<reporting>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>cobertura-maven-plugin</artifactId>
            <version>2.6</version>
        </plugin>

Integration Testing

Writing an integration test was fairly straightforward. I created a DrugRouteITest class, built a client using CXF's JaxWsProxyFactoryBean, and called the method on the service.

public class DrugRouteITest {

    private static final String URL = "http://localhost:8080/api/drugs";

    protected static DrugService createCXFClient() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setBindingId("http://schemas.xmlsoap.org/wsdl/soap12/");
        factory.setServiceClass(DrugService.class);
        factory.setAddress(getTestUrl(URL));
        return (DrugService) factory.create();
    }

    @Test
    public void findGpiByNdc() throws Exception {
        // create input parameter
        GpiRequest input = new GpiRequest();
        input.setNDC("54561237201");

        // create the webservice client and send the request
        DrugService client = createCXFClient();
        GpiResponse response = client.findGpiByNdc(input);

        assertEquals("66100525123130", response.getDrugInfo().getGPI());
    }
}

This integration test is only run after Tomcat has started and deployed the app. Unit tests are run by Maven's surefire-plugin, while integration tests are run by the failsafe-plugin. An available Tomcat port is determined by the build-helper-maven-plugin. This port is set as a system property and read by the getTestUrl() method call above.

public static String getTestUrl(String url) {
    if (System.getProperty("tomcat.http.port") != null) {
        url = url.replace("8080", System.getProperty("tomcat.http.port"));
    }
    return url;
}
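The port reservation itself happens in pom.xml; a sketch of the build-helper-maven-plugin setup follows (the version and phase shown are assumptions, not necessarily what the project used):

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>build-helper-maven-plugin</artifactId>
    <version>1.8</version>
    <executions>
        <execution>
            <id>reserve-tomcat-port</id>
            <phase>process-resources</phase>
            <goals>
                <goal>reserve-network-port</goal>
            </goals>
            <configuration>
                <portNames>
                    <portName>tomcat.http.port</portName>
                </portNames>
            </configuration>
        </execution>
    </executions>
</plugin>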

Below are the relevant bits from pom.xml that determine when to start/stop Tomcat, as well as which tests to run.

<plugin>
    <groupId>org.apache.tomcat.maven</groupId>
    <artifactId>tomcat7-maven-plugin</artifactId>
    <version>2.2</version>
    <configuration>
        <path>/</path>
    </configuration>
    <executions>
        <execution>
            <id>start-tomcat</id>
            <phase>pre-integration-test</phase>
            <goals>
                <goal>run</goal>
            </goals>
            <configuration>
                <fork>true</fork>
                <port>${tomcat.http.port}</port>
            </configuration>
        </execution>
        <execution>
            <id>stop-tomcat</id>
            <phase>post-integration-test</phase>
            <goals>
                <goal>shutdown</goal>
            </goals>
        </execution>
    </executions>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-surefire-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <excludes>
            <exclude>**/*IT*.java</exclude>
            <exclude>**/Legacy**.java</exclude>
        </excludes>
        <includes>
            <include>**/*Tests.java</include>
            <include>**/*Test.java</include>
        </includes>
    </configuration>
</plugin>
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-failsafe-plugin</artifactId>
    <version>2.17</version>
    <configuration>
        <includes>
            <include>**/*IT*.java</include>
        </includes>
        <systemProperties>
            <tomcat.http.port>${tomcat.http.port}</tomcat.http.port>
        </systemProperties>
    </configuration>
    <executions>
        <execution>
            <goals>
                <goal>integration-test</goal>
                <goal>verify</goal>
            </goals>
        </execution>
    </executions>
</plugin>

The most useful part of integration testing came when I copied one of my legacy tests into the project and started verifying backwards compatibility. Since we wanted to replace the existing services without requiring any client changes, I had to make the XML request and response match. The Charles web debugging proxy was very useful for this exercise, letting me inspect the request/response and tweak things to match. The following JAX-WS annotations allowed me to change the XML element names and achieve backward compatibility (a sketch of how they fit together follows the list).

  • @BindingType(SOAPBinding.SOAP12HTTP_BINDING)
  • @WebResult(name = "return", targetNamespace = "...")
  • @ResponseWrapper(localName = "gpiResponse")
  • @WebParam(name = "args0", targetNamespace = "...")
  • @XmlElement(name = "...")
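Roughly, these end up on the service interface like this (a hedged sketch: the namespace URI is a placeholder for the real legacy namespace, and @XmlElement belongs on the JAXB request/response beans rather than here):

@WebService(targetNamespace = "http://legacy.example.com/drugs") // placeholder namespace
@BindingType(SOAPBinding.SOAP12HTTP_BINDING)
public interface DrugService {

    @WebMethod(operationName = "gpiRequest")
    @WebResult(name = "return", targetNamespace = "http://legacy.example.com/drugs")
    @ResponseWrapper(localName = "gpiResponse")
    GpiResponse findGpiByNdc(
            @WebParam(name = "args0", targetNamespace = "http://legacy.example.com/drugs") GpiRequest request);
}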
Continuous Integration and Deployment

My next item of business was configuring a job in Jenkins to continually test and deploy. Getting all the tests to pass was easy, and deploying to Tomcat was simple enough thanks to the Deploy Plugin and this article. However, after a few deploys, Tomcat would throw OutOfMemory exceptions. Therefore, I ended up creating a second "deploy" job that stops Tomcat, copies the successfully-built WAR to $CATALINA_HOME/webapps, removes $CATALINA_HOME/webapps/ROOT and restarts Tomcat. I used Jenkins' "Execute shell" feature to configure these steps. I was pleased to find my /etc/init.d/tomcat script still worked for starting Tomcat at boot time and providing convenient start/stop commands.

Summary

This article shows you how I implemented and tested a simple Apache Camel route. The route described does only a simple database lookup, but you can see how Camel's testing support allows you to mock results and concentrate on developing your route logic. I found its testing framework very useful but not well documented, so hopefully this article helps to fix that. In the next article, I'll talk about upgrading to Spring 4, integrating Spring Boot and our team's microservice deployment discussions.

Categories: FLOSS Project Planets

Sergey Beryozkin: CXF becomes friends with Tika and Lucene

Wed, 2014-10-15 04:59
You may have been thinking for a while: wouldn't it actually be cool to get some experience with Apache Lucene and Apache Tika and enhance the JAX-RS services you work on along the way? Lucene and Tika are those cool projects people keep talking about, but as it happens there has never been an opportunity to use them in your project...

Apache Lucene is a well-known project whose community keeps innovating, improving and optimizing the capabilities of its various text analyzers. Apache Tika is a cool project which can be used to get the metadata and content out of binary resources in formats such as PDF, ODT, etc, with lots of other formats also supported. As a side note, Apache Tika is not only a cool project, it is also a very democratic one where everyone is welcome from the get-go - the perfect project with which to start your Apache career if you are thinking of getting involved in one of the Apache projects.

Now, a number of the services you have written may support uploads of binary resources; for example, you may have a JAX-RS server accepting multipart/form-data uploads.

As it happens, Lucene plus Tika is exactly what one needs to analyze that binary content easily and effectively. Tika gives you the metadata and the content; Lucene tokenizes it and helps you search over it. As such, you can let your users search for and download only those PDF or other binary resources which match the search query - something your users will appreciate.
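To make that concrete, here is a minimal hand-rolled sketch of the idea using the plain Tika and Lucene APIs (assuming Lucene 4.x and Tika on the classpath; uploadedPdfStream stands in for the multipart attachment's InputStream):

Tika tika = new Tika();
String content = tika.parseToString(uploadedPdfStream); // Tika detects the format and extracts the text

Directory dir = new RAMDirectory(); // in-memory index, just for illustration
Analyzer analyzer = new StandardAnalyzer(Version.LUCENE_4_10_0);
IndexWriter writer = new IndexWriter(dir, new IndexWriterConfig(Version.LUCENE_4_10_0, analyzer));

Document doc = new Document();
doc.add(new StringField("filename", "report.pdf", Field.Store.YES)); // hypothetical file name
doc.add(new TextField("contents", content, Field.Store.NO)); // tokenized and searchable
writer.addDocument(doc);
writer.close();

The CXF 3.1.0 utilities described below wrap this kind of plumbing up for you.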

CXF 3.1.0, which is under active development, offers utility support for working with Tika and Lucene. Andriy Redko worked on improving the integration with Lucene and introducing content extraction support with the help of Tika. It is all shown in a nice jax_rs/search demo which offers a Bootstrap UI for uploading, searching and downloading PDF and ODT files. The demo will be shipped in the CXF distribution.

Please start experimenting with the demo today (download the CXF 3.1.0-SNAPSHOT distribution), let us know what you think, and take your JAX-RS project to the next level.

You are also encouraged to experiment with Apache Solr, which offers an advanced search engine on top of Lucene and also makes use of Tika.

Enjoy!      

Categories: FLOSS Project Planets

Heshan Suriyaarachchi: Stackmap frame errors when building the aspectj project with Java 1.7

Wed, 2014-10-15 01:15

I had a project that used aspectj, and it was building fine with Java 1.6. When I updated it to Java 1.7, I saw the following error.

[INFO] Molva the Destroyer Aspects ....................... FAILURE [2.324s]
[INFO] Molva The Destroyer Client ........................ SKIPPED
[INFO] Molva The Destroyer Parent ........................ SKIPPED
[INFO] Molva The Destroyer Distribution .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.424s
[INFO] Finished at: Tue Oct 14 11:16:19 PDT 2014
[INFO] Final Memory: 12M/310M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.1:java (default) on project molva-the-destroyer-aspects: An exception occured while executing the Java class. Expecting a stackmap frame at branch target 30
[ERROR] Exception Details:
[ERROR] Location:
[ERROR] com/concur/puma/molva/aspects/TestTarget.main([Ljava/lang/String;)V @12: invokestatic
[ERROR] Reason:
[ERROR] Expected stackmap frame at this location.
[ERROR] Bytecode:
[ERROR] 0000000: 2a4d b200 5e01 012c b800 644e b800 c62d
[ERROR] 0000010: b600 ca2c 2db8 00bb 2db8 00bf 57b1 3a04
[ERROR] 0000020: b800 c62d 1904 b600 ce19 04bf
[ERROR] Exception Handler Table:
[ERROR] bci [12, 30] => handler: 30
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
My Maven configuration looked like this:
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjrt</artifactId>
        <version>1.6.5</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.17</version>
    </dependency>
    <dependency>
        <groupId>org.perf4j</groupId>
        <artifactId>perf4j</artifactId>
        <version>0.9.16</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.2</version>
            <configuration>
                <source>1.7</source>
                <target>1.7</target>
            </configuration>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>exec-maven-plugin</artifactId>
            <version>1.1</version>
            <executions>
                <execution>
                    <phase>package</phase>
                    <goals>
                        <goal>java</goal>
                    </goals>
                </execution>
            </executions>
            <configuration>
                <mainClass>com.concur.puma.molva.aspects.TestTarget</mainClass>
            </configuration>
        </plugin>
    </plugins>
</build>

Fix

The default compliance level for aspectj-maven-plugin is 1.4 according to http://mojo.codehaus.org/aspectj-maven-plugin/compile-mojo.html#complianceLevel. Since I did not have the complianceLevel tag specified, the build was using the default value. Once I inserted the tag into the configuration, the build was successful.
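Concretely, the missing piece was the complianceLevel tag in the aspectj-maven-plugin configuration above, set to match the Java version:

<configuration>
    <complianceLevel>1.7</complianceLevel>
    <source>1.7</source>
    <target>1.7</target>
</configuration>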
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-14

Tue, 2014-10-14 18:58
  • Dublin’s Best-Kept Secret: Blas Cafe

    looks great, around the corner from Cineworld on King’s Inn St, D1

    (tags: dublin cafes food blas-cafe eating northside)

  • “Meta-Perceptual Helmets For The Dead Zoo”

    with Neil McKenzie, Nov 9-16 2014, in the Natural History Museum in Dublin: ‘These six helmets/viewing devices start off by exploring physical conditions of viewing: if we have two eyes, then why is our vision so limited? Why do we have so little perception of depth? Why don’t our two eyes offer us two different, complementary views of the world around us? Why can’t they extend from our body so we can see over or around things? Why don’t they allow us to look behind and in front at the same time, or sideways in both directions? Why can’t our two eyes simultaneously focus on two different tasks? Looking through Michael Land’s defining work Animal Eyes, we see that nature has indeed explored all of these possibilities: a Hammerhead Shark has hyper-stereo vision; a horse sees 350° around itself; a chameleon has separately rotatable eyes… The series of Meta-Perceptual Helmets do indeed explore these zoological typologies: proposing to humans the hyper-stereo vision of the hammerhead shark; or the wide peripheral vision of the horse; or the backward/forward vision of the chameleon… but they also take us into the unnatural world of mythology and literature: the Cheshire Cat Helmet is so called because of the strange lingering effect of dominating visual information such as a smile or the eyes; the Cyclops allows one large central eye to take in the world around while a second tiny hidden eye focuses on a close up task (why has the creature never evolved that can focus on denitting without constantly having to glance around?).’ (via Emma)

    (tags: perception helmets dublin ireland museums dead-zoo sharks eyes vision art)

  • Grade inflation figures from Irish universities

    The figures show that, between 2004 and 2013, an average of 71.7 per cent of students at TCD graduated with either a 1st or a 2.1. DCU and UCC had the next highest rate of such awards (64.3 per cent and 64.2 per cent respectively), followed by UCD (55.8 per cent), NUI Galway (54.7 per cent), Maynooth University (53.7 per cent) and University of Limerick (50.2 per cent).

    (tags: tcd grades grade-inflation dcu ucc ucd ireland studies academia third-level)

  • webrtcH4cKS: ~ coTURN: the open-source multi-tenant TURN/STUN server you were looking for

    Last year we interviewed Oleg Moskalenko and presented the rfc5766-turn-server project, which is a free open source and extremely popular implementation of TURN and STURN server. A few months later we even discovered Amazon is using this project to power its Mayday service. Since then, a number of features beyond the original RFC 5766 have been defined at the IETF and a new open-source project was born: the coTURN project.

    (tags: webrtc turn sturn rfc-5766 push nat stun firewalls voip servers internet)

  • Google Online Security Blog: This POODLE bites: exploiting the SSL 3.0 fallback

    Today we are publishing details of a vulnerability in the design of SSL version 3.0. This vulnerability allows the plaintext of secure connections to be calculated by a network attacker. ouch.

    (tags: ssl3 ssl tls security exploits google crypto)

Categories: FLOSS Project Planets

Heshan Suriyaarachchi: Compile aspectj project containing Java 1.7 Source

Tue, 2014-10-14 16:47

The following Maven configuration lets you compile a project with Java 1.7 source.
<dependencies>
    <dependency>
        <groupId>org.aspectj</groupId>
        <artifactId>aspectjrt</artifactId>
        <!--<version>1.6.5</version>-->
        <version>1.8.2</version>
    </dependency>
</dependencies>

<build>
    <plugins>
        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.7</version>
            <configuration>
                <complianceLevel>1.7</complianceLevel>
                <source>1.7</source>
                <target>1.7</target>
            </configuration>
            <executions>
                <execution>
                    <!--<phase>process-sources</phase>-->
                    <goals>
                        <goal>compile</goal>
                        <goal>test-compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>
Categories: FLOSS Project Planets

Chris Hostetter: Stump the Chump is Coming to D.C.!

Tue, 2014-10-14 16:38

In just under a month, Lucene/Solr Revolution will be coming to Washington D.C. — and once again, I’ll be in the hot seat for Stump The Chump.

If you are not familiar with “Stump the Chump” it’s a Q&A style session where “The Chump” (That’s Me!) is put on the spot with tough, challenging, unusual questions about Lucene & Solr — live, on stage, in front of hundreds of rambunctious convention goers, with judges who have all seen and thought about the questions in advance and get to mock The Chump (still me) and award prizes to people whose questions do the best job of “Stumping The Chump”.

People frequently tell me it’s the most fun they’ve ever had at a Tech Conference — You can judge for yourself by checking out the videos from last year’s events: Lucene/Solr Revolution 2013 in Dublin, and Lucene/Solr Revolution 2013 in San Diego.

I’ll be posting more details in the weeks ahead, but until then you can subscribe to this blog (or just the “Chump” tag) to stay informed.

And if you haven’t registered for Lucene/Solr Revolution yet, what are you waiting for?!?!

The post Stump the Chump is Coming to D.C.! appeared first on Lucidworks.

Categories: FLOSS Project Planets