Planet Apache


Matt Raible: Developing Services with Apache Camel - Part III: Integrating Spring 4 and Spring Boot

Wed, 2014-10-15 14:00

This article is the third in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. I used Apache Camel for several months this summer to create a number of SOAP services. These services performed various third-party data lookups for our customers. For previous articles, see Part I: The Inspiration and Part II: Creating and Testing Routes.

In late June, I sent an email to my client's engineering team. Its subject: "External Configuration and Microservices". I recommended we integrate Spring Boot into the Apache Camel project I was working on. I told them my main motivation was its external configuration feature. I also pointed out its container-less WAR feature, where Tomcat (or Jetty) is embedded in the WAR and you can start your app with "java -jar appname.war". I mentioned microservices and that Spring Boot would make it easy to split the project into a project-per-service structure if we wanted to go that route. I then asked two simple questions:

  1. Is it OK to integrate Spring Boot?
  2. Should I split the project into microservices?

Both of these suggestions were well received, so I went to work.

Spring 4

Before I integrated Spring Boot, I knew I had to upgrade to Spring 4. The version of Camel I was using (2.13.1) did not support Spring 4. I found issue CAMEL-7074 (Support spring 4.x) and added a comment to see when it would be fixed. After fiddling with dependencies and trying Camel 2.14-SNAPSHOT, I was able to upgrade to CXF 3.0. However, this didn't solve my problem. There were some incompatible API changes between Spring 3.2.x and Spring 4.0.x, and the camel-test-spring module wouldn't work with both. I proposed the following:

I think the easiest way forward is to create two modules: camel-test-spring and camel-test-spring3. The former compiles against Spring 4 and the latter against Spring 3. You could switch it so camel-test-spring defaults to Spring 3, but camel-test-spring4 doesn't seem to be forward-looking, as you hopefully won't need a camel-test-spring5.

I've made this change in a fork and it works in my project. I can upgrade to Camel 2.14-SNAPSHOT and CXF 3.0 with Spring 3.2.8 (by using camel-test-spring3). I can also upgrade to Spring 4 if I use the upgraded camel-test-spring.

Here's a pull request that has this change: https://github.com/apache/camel/pull/199

The Camel team integrated my suggested change a couple weeks later. Unfortunately, a similar situation happened with Spring 4.1, so you'll have to wait for Camel 2.15 if you want to use Spring 4.1.

After making a patched 2.14-SNAPSHOT version available to my project, I was able to upgrade to Spring 4 and CXF 3 with a few minor changes to my pom.xml.

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
-   <camel.version>2.13.1</camel.version>
-   <cxf.version>2.7.11</cxf.version>
-   <spring.version>3.2.8.RELEASE</spring.version>
+   <camel.version>2.14-SNAPSHOT</camel.version>
+   <cxf.version>3.0.0</cxf.version>
+   <spring.version>4.0.5.RELEASE</spring.version>
  </properties>
  ...
+ <!-- upgrade camel-spring dependencies -->
+ <dependency>
+   <groupId>org.springframework</groupId>
+   <artifactId>spring-context</artifactId>
+   <version>${spring.version}</version>
+ </dependency>
+ <dependency>
+   <groupId>org.springframework</groupId>
+   <artifactId>spring-aop</artifactId>
+   <version>${spring.version}</version>
+ </dependency>
+ <dependency>
+   <groupId>org.springframework</groupId>
+   <artifactId>spring-tx</artifactId>
+   <version>${spring.version}</version>
+ </dependency>

I also had to change some imports for CXF 3.0 since it includes a new major version of Apache WSS4J (2.0.0).

-import org.apache.ws.security.handler.WSHandlerConstants;
+import org.apache.wss4j.dom.handler.WSHandlerConstants;
...
-import org.apache.ws.security.WSPasswordCallback;
+import org.apache.wss4j.common.ext.WSPasswordCallback;

After getting everything upgraded, I continued developing services for the next couple weeks.

Spring Boot

In late July, I integrated Spring Boot. It was fairly straightforward and mostly consisted of adding/removing dependencies and removing versions already defined in Boot's starter-parent.

+ <parent>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-parent</artifactId>
+   <version>1.1.4.RELEASE</version>
+ </parent>
  ...
  <cxf.version>3.0.1</cxf.version>
+ <java.version>1.7</java.version>
+ <servlet-api.version>3.1.0</servlet-api.version>
  <spring.version>4.0.6.RELEASE</spring.version>
  ...
-     <artifactId>maven-compiler-plugin</artifactId>
-     <version>2.5.1</version>
-     <configuration>
-       <source>1.7</source>
-       <target>1.7</target>
-     </configuration>
-   </plugin>
-   <plugin>
-     <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-resources-plugin</artifactId>
    </plugin>
+   <plugin>
+     <groupId>org.springframework.boot</groupId>
+     <artifactId>spring-boot-maven-plugin</artifactId>
+   </plugin>
  </plugins>
</build>
<dependencies>
+ <!-- spring boot -->
+ <dependency>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-actuator</artifactId>
+   <exclusions>
+     <exclusion>
+       <groupId>org.springframework.boot</groupId>
+       <artifactId>spring-boot-starter-logging</artifactId>
+     </exclusion>
+   </exclusions>
+ </dependency>
+ <dependency>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-log4j</artifactId>
+ </dependency>
+ <dependency>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-tomcat</artifactId>
+   <scope>provided</scope>
+ </dependency>
+ <dependency>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-web</artifactId>
+ </dependency>
  <!-- camel -->
  ...
- <!-- upgrade camel-spring dependencies -->
- <dependency>
-   <groupId>org.springframework</groupId>
-   <artifactId>spring-context</artifactId>
-   <version>${spring.version}</version>
- </dependency>
- <dependency>
-   <groupId>org.springframework</groupId>
-   <artifactId>spring-aop</artifactId>
-   <version>${spring.version}</version>
- </dependency>
- <dependency>
-   <groupId>org.springframework</groupId>
-   <artifactId>spring-tx</artifactId>
-   <version>${spring.version}</version>
- </dependency>
  ...
- <!-- logging -->
- <dependency>
-   <groupId>org.slf4j</groupId>
-   <artifactId>slf4j-api</artifactId>
-   <version>1.7.6</version>
- </dependency>
- <dependency>
-   <groupId>org.slf4j</groupId>
-   <artifactId>slf4j-log4j12</artifactId>
-   <version>1.7.6</version>
- </dependency>
- <dependency>
-   <groupId>log4j</groupId>
-   <artifactId>log4j</artifactId>
-   <version>1.2.17</version>
- </dependency>
- <!-- utilities -->
  <dependency>
    <groupId>joda-time</groupId>
    <artifactId>joda-time</artifactId>
-   <version>2.3</version>
  </dependency>
  <dependency>
    <groupId>commons-dbcp</groupId>
    <artifactId>commons-dbcp</artifactId>
-   <version>1.4</version>
  ...
  <!-- testing -->
  <dependency>
+   <groupId>org.springframework.boot</groupId>
+   <artifactId>spring-boot-starter-test</artifactId>
+   <exclusions>
+     <exclusion>
+       <groupId>org.springframework.boot</groupId>
+       <artifactId>spring-boot-starter-logging</artifactId>
+     </exclusion>
+   </exclusions>
+ </dependency>
+ <dependency>
- <dependency>
-   <groupId>org.springframework</groupId>
-   <artifactId>spring-test</artifactId>
-   <version>${spring.version}</version>
-   <scope>test</scope>
- </dependency>
- <dependency>
-   <groupId>org.mockito</groupId>
-   <artifactId>mockito-core</artifactId>
-   <version>1.9.5</version>
-   <scope>test</scope>
- </dependency>

Next, I deleted the AppInitializer.java class I mentioned in Part II and added an Application.java class.

import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration;
import org.springframework.boot.autoconfigure.jdbc.DataSourceTransactionManagerAutoConfiguration;
import org.springframework.boot.builder.SpringApplicationBuilder;
import org.springframework.boot.context.embedded.ConfigurableEmbeddedServletContainer;
import org.springframework.boot.context.embedded.EmbeddedServletContainerCustomizer;
import org.springframework.boot.context.embedded.ErrorPage;
import org.springframework.boot.context.embedded.ServletRegistrationBean;
import org.springframework.boot.context.web.SpringBootServletInitializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.ComponentScan;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpStatus;

@Configuration
@EnableAutoConfiguration(exclude = {DataSourceAutoConfiguration.class, DataSourceTransactionManagerAutoConfiguration.class})
@ComponentScan
public class Application extends SpringBootServletInitializer {

    public static void main(String[] args) {
        SpringApplication.run(Application.class, args);
    }

    @Override
    protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
        return application.sources(Application.class);
    }

    @Bean
    public ServletRegistrationBean servletRegistrationBean() {
        CXFServlet servlet = new CXFServlet();
        return new ServletRegistrationBean(servlet, "/api/*");
    }

    @Bean
    public EmbeddedServletContainerCustomizer containerCustomizer() {
        return new EmbeddedServletContainerCustomizer() {
            @Override
            public void customize(ConfigurableEmbeddedServletContainer container) {
                ErrorPage error401Page = new ErrorPage(HttpStatus.UNAUTHORIZED, "/401.html");
                ErrorPage error404Page = new ErrorPage(HttpStatus.NOT_FOUND, "/404.html");
                ErrorPage error500Page = new ErrorPage(HttpStatus.INTERNAL_SERVER_ERROR, "/500.html");
                container.addErrorPages(error401Page, error404Page, error500Page);
            }
        };
    }
}

The error pages configured above were copied from Tim Sporcic's Custom Error Pages with Spring Boot.

Dynamic DataSources

I excluded the DataSource-related AutoConfiguration classes because this application had many datasources. It also had a requirement to allow datasources to be added on-the-fly by simply editing application.properties. I asked how to do this on Stack Overflow and received an excellent answer from Stéphane Nicoll.
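
To give a flavor of the approach, below is a minimal sketch of building DataSources from application.properties entries at startup. The property layout (a "ds.names" list plus "ds.<name>.url" and friends), the class name and the registration strategy are illustrative assumptions; this is not the project's actual code, nor Stéphane's exact answer.

import javax.annotation.PostConstruct;

import org.apache.commons.dbcp.BasicDataSource;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ConfigurableApplicationContext;
import org.springframework.context.annotation.Configuration;
import org.springframework.core.env.Environment;

// Hypothetical sketch: create one DataSource bean per entry listed in application.properties, e.g.
//   ds.names=drugs,members
//   ds.drugs.driver=..., ds.drugs.url=..., ds.drugs.username=..., ds.drugs.password=...
@Configuration
public class DynamicDataSourceConfig {

    @Autowired
    private Environment env;

    @Autowired
    private ConfigurableApplicationContext context;

    @PostConstruct
    public void registerDataSources() {
        for (String name : env.getProperty("ds.names", "").split(",")) {
            if (name.trim().isEmpty()) {
                continue;
            }
            BasicDataSource ds = new BasicDataSource();
            ds.setDriverClassName(env.getProperty("ds." + name + ".driver"));
            ds.setUrl(env.getProperty("ds." + name + ".url"));
            ds.setUsername(env.getProperty("ds." + name + ".username"));
            ds.setPassword(env.getProperty("ds." + name + ".password"));
            // register under "ds.<name>" so Camel's SQL component can look it up by bean name
            context.getBeanFactory().registerSingleton("ds." + name, ds);
        }
    }
}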

Spring Boot Issues

I did encounter a couple of issues after integrating Spring Boot. The first was that Spring Boot was removing the Content-* headers from CXF responses. This only happened when running the WAR in Tomcat, and I was able to figure out a workaround with a custom ResponseWrapper and Filter. This issue was fixed in Spring Boot 1.1.6.
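
For reference, a workaround along those lines might look like the sketch below: a Filter that remembers the Content-Type set further up the chain (e.g. by CXF) and re-applies it before the response is committed. The class names are invented for illustration; this is not the exact workaround used in the project.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpServletResponseWrapper;

// Hypothetical sketch of a Filter + ResponseWrapper pair that preserves the Content-Type header.
public class ContentTypePreservingFilter implements Filter {

    static class ContentTypeRememberingWrapper extends HttpServletResponseWrapper {
        private String contentType;

        ContentTypeRememberingWrapper(HttpServletResponse response) {
            super(response);
        }

        @Override
        public void setContentType(String type) {
            this.contentType = type;
            super.setContentType(type);
        }

        String getRememberedContentType() {
            return contentType;
        }
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        ContentTypeRememberingWrapper wrapper = new ContentTypeRememberingWrapper((HttpServletResponse) response);
        chain.doFilter(request, wrapper);
        // re-apply the Content-Type in case it was stripped after CXF set it
        if (wrapper.getRememberedContentType() != null && !response.isCommitted()) {
            response.setContentType(wrapper.getRememberedContentType());
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}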

The other issue was that the property override feature didn't seem to work when setting environment variables. The workaround was to create a setenv.sh script in $CATALINA_HOME/bin and add the environment variables there. See section 3.4 of Tomcat 7's RUNNING.txt for more information.

SOAP Faults

After upgrading to Spring 4 and integrating Spring Boot, I continued migrating IBM Message Broker flows. My goal was to make all new services backward compatible, but I ran into an issue. With the new services, SOAP Faults were sent back to the client instead of error messages in a SOAP Message. I suggested we fix it in one of two ways:

  1. Modify the client so it looks for SOAP Faults and handles them appropriately.
  2. Modify the new services so messages are returned instead of faults.

For #2, I learned how to convert faults to messages on the Camel user mailing list. However, the team opted to improve the client and we added fault handling there instead.
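
For the curious, option #2 can be sketched in the Camel route itself by marking exceptions as handled and returning a regular message body instead of a fault. The GpiResponse type is the one from Part II, but the no-argument constructor and setErrorMessage method are hypothetical; treat this as an illustration of the technique, not the project's code.

import org.apache.camel.Exchange;
import org.apache.camel.Processor;
import org.apache.camel.builder.RouteBuilder;

public class FaultToMessageRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // catch any exception, mark it handled and return a normal SOAP message instead of a SOAP Fault
        onException(Exception.class)
            .handled(true)
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    Exception cause = exchange.getProperty(Exchange.EXCEPTION_CAUGHT, Exception.class);
                    GpiResponse response = new GpiResponse();         // hypothetical no-arg constructor
                    response.setErrorMessage(cause.getMessage());     // hypothetical error field
                    exchange.getOut().setBody(response);
                }
            });

        // ... the existing from(...) routes would follow here
    }
}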

Microservice Deployment

When I first integrated Spring Boot, I was planning on splitting our project into a project-per-service. This would allow each service to evolve on its own, instead of having a monolithic war that contains all the services. In team discussions, there was some concern about the memory overhead of running multiple instances instead of one.

I pointed out an interesting thread on the Camel mailing list about deploying routes with a route-per-jvm or all in the same JVM. The recommendation from that thread was to bundle similar routes together if you were to split them.

In the end, we decided to let our Operations team decide how they wanted to manage/deploy everything. I mentioned that Spring Boot can work with Tomcat, Jetty, JBoss and even cloud providers like Heroku and Cloud Foundry. I estimated that splitting the project apart would take less than a day, as would making it back into a monolithic WAR.

Summary

This article explains how we upgraded our Apache Camel application to Spring 4 and integrated Spring Boot. There was a bit of pain getting things to work, but nothing a few pull requests and workarounds couldn't fix. We discovered some issues with setting environment variables for Tomcat and opted not to split our project into small microservices. Hopefully this article will help people trying to Camelize a Spring Boot application.

In the next article, I'll talk about load testing with Gatling, logging with Log4j2 and monitoring with hawtio and New Relic.

Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part II: Creating and Testing Routes

Wed, 2014-10-15 14:00

This article is the second in a series on Apache Camel and how I used it to replace IBM Message Broker for a client. The first article, Developing Services with Apache Camel - Part I: The Inspiration, describes why I chose Camel for this project.

To make sure these new services correctly replaced existing services, a 3-step approach was used:

  1. Write an integration test pointing to the old service.
  2. Write the implementation and a unit test to prove it works.
  3. Write an integration test pointing to the new service.

I chose to start by replacing the simplest service first. It was a SOAP service that talked to a database to retrieve a value based on an input parameter. To learn more about Camel and how it works, I started by looking at the CXF Tomcat Example. I learned that Camel is used to provide routing of requests. Using its CXF component, it can easily produce SOAP web service endpoints. An endpoint is simply an interface, and Camel takes care of producing the implementation.

Legacy Integration Test

I started by writing a LegacyDrugServiceTests integration test for the old drug service. I tried two different ways of testing: using WSDL-generated Java classes and using JAX-WS's SOAP API. Finding the WSDL for the legacy service was difficult because IBM Message Broker doesn't expose it when adding "?wsdl" to the service's URL. Instead, I had to dig through the project files until I found it. Then I used the cxf-codegen-plugin to generate the web service client. Below is one of the tests that uses the JAX-WS API.

@Test
public void sendGPIRequestUsingSoapApi() throws Exception {
    SOAPElement bodyChildOne = getBody(message).addChildElement("gpiRequest", "m");
    SOAPElement bodyChildTwo = bodyChildOne.addChildElement("args0", "m");
    bodyChildTwo.addChildElement("NDC", "ax22").addTextNode("54561237201");
    SOAPMessage reply = connection.call(message, getUrlWithTimeout(SERVICE_NAME));
    if (reply != null) {
        Iterator itr = reply.getSOAPBody().getChildElements();
        Map resultMap = TestUtils.getResults(itr);
        assertEquals("66100525123130", resultMap.get("GPI"));
    }
}

Implementing the Drug Service

In the last article, I mentioned I wanted no XML in the project. To facilitate this, I used Camel's Java DSL to define routes and Spring's JavaConfig to configure dependencies.

The first route I wrote was one that looked up a GPI (Generic Product Identifier) by NDC (National Drug Code).

@WebService
public interface DrugService {

    @WebMethod(operationName = "gpiRequest")
    GpiResponse findGpiByNdc(GpiRequest request);
}

To expose this as a web service endpoint with CXF, I needed to do two things:

  1. Tell Spring how to configure CXF by importing "classpath:META-INF/cxf/cxf.xml" into a @Configuration class.
  2. Configure CXF's Servlet so endpoints can be served up at a particular URL.

To satisfy item #1, I created a CamelConfig class that extends CamelConfiguration. This class allows Camel to be configured by Spring's JavaConfig. In it, I imported the CXF configuration, allowed tracing to be configured dynamically, and exposed my application.properties to Camel. I also set it up (with @ComponentScan) to look for Camel routes annotated with @Component.

@Configuration
@ImportResource("classpath:META-INF/cxf/cxf.xml")
@ComponentScan("com.raibledesigns.camel")
public class CamelConfig extends CamelConfiguration {

    @Value("${logging.trace.enabled}")
    private Boolean tracingEnabled;

    @Override
    protected void setupCamelContext(CamelContext camelContext) throws Exception {
        PropertiesComponent pc = new PropertiesComponent();
        pc.setLocation("classpath:application.properties");
        camelContext.addComponent("properties", pc);
        // see if trace logging is turned on
        if (tracingEnabled) {
            camelContext.setTracing(true);
        }
        super.setupCamelContext(camelContext);
    }

    @Bean
    public Tracer camelTracer() {
        Tracer tracer = new Tracer();
        tracer.setTraceExceptions(false);
        tracer.setTraceInterceptors(true);
        tracer.setLogName("com.raibledesigns.camel.trace");
        return tracer;
    }
}

CXF has a servlet that's responsible for serving up its services at a common path. To map CXF's servlet, I leveraged Spring's WebApplicationInitializer in an AppInitializer class. I decided to serve up everything from a /api/* base URL.

package com.raibledesigns.camel.config;

import org.apache.cxf.transport.servlet.CXFServlet;
import org.springframework.web.WebApplicationInitializer;
import org.springframework.web.context.ContextLoaderListener;
import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;

import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.ServletRegistration;

public class AppInitializer implements WebApplicationInitializer {

    @Override
    public void onStartup(ServletContext servletContext) throws ServletException {
        servletContext.addListener(new ContextLoaderListener(getContext()));
        ServletRegistration.Dynamic servlet = servletContext.addServlet("CXFServlet", new CXFServlet());
        servlet.setLoadOnStartup(1);
        servlet.setAsyncSupported(true);
        servlet.addMapping("/api/*");
    }

    private AnnotationConfigWebApplicationContext getContext() {
        AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
        context.setConfigLocation("com.raibledesigns.camel.config");
        return context;
    }
}

To implement this web service with Camel, I created a DrugRoute class that extends Camel's RouteBuilder.

@Component
public class DrugRoute extends RouteBuilder {

    private String uri = "cxf:/drugs?serviceClass=" + DrugService.class.getName();

    @Override
    public void configure() throws Exception {
        from(uri)
            .recipientList(simple("direct:${header.operationName}"));

        from("direct:gpiRequest").routeId("gpiRequest")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // get the ndc from the input
                    String ndc = exchange.getIn().getBody(GpiRequest.class).getNDC();
                    exchange.getOut().setBody(ndc);
                }
            })
            .to("sql:{{sql.selectGpi}}")
            .to("log:output")
            .process(new Processor() {
                public void process(Exchange exchange) throws Exception {
                    // get the gpi from the input
                    List<HashMap> data = (ArrayList<HashMap>) exchange.getIn().getBody();
                    DrugInfo drug = new DrugInfo();
                    if (data.size() > 0) {
                        drug = new DrugInfo(String.valueOf(data.get(0).get("GPI")));
                    }
                    GpiResponse response = new GpiResponse(drug);
                    exchange.getOut().setBody(response);
                }
            });
    }
}

The sql.selectGpi property is read from src/main/resources/application.properties and looks as follows:

sql.selectGpi=select GPI from drugs where ndc = #?dataSource=ds.drugs

The "ds.drugs" reference is to a datasource that's created by Spring. From my AppConfig class:

@Configuration
@PropertySource("classpath:application.properties")
public class AppConfig {

    @Value("${ds.driver.db2}")
    private String jdbcDriverDb2;

    @Value("${ds.password}")
    private String jdbcPassword;

    @Value("${ds.url}")
    private String jdbcUrl;

    @Value("${ds.username}")
    private String jdbcUsername;

    @Bean(name = "ds.drugs")
    public DataSource drugsDataSource() {
        return createDataSource(jdbcDriverDb2, jdbcUsername, jdbcPassword, jdbcUrl);
    }

    private BasicDataSource createDataSource(String driver, String username, String password, String url) {
        BasicDataSource ds = new BasicDataSource();
        ds.setDriverClassName(driver);
        ds.setUsername(username);
        ds.setPassword(password);
        ds.setUrl(url);
        ds.setMaxActive(100);
        ds.setMaxWait(1000);
        ds.setPoolPreparedStatements(true);
        return ds;
    }
}

Unit Testing

The hardest part about unit testing this route was figuring out how to use Camel's testing support. I posted a question to the Camel users mailing list in early June. Based on advice received, I bought Camel in Action, read chapter 6 on testing and went to work. I wanted to eliminate the dependency on a datasource, so I used Camel's AdviceWith feature to modify my route and intercept the SQL call. This allowed me to return pre-defined results and verify everything worked.

@RunWith(CamelSpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = CamelSpringDelegatingTestContextLoader.class, classes = CamelConfig.class)
@DirtiesContext(classMode = DirtiesContext.ClassMode.AFTER_EACH_TEST_METHOD)
@UseAdviceWith
public class DrugRouteTests {

    @Autowired
    CamelContext camelContext;

    @Produce
    ProducerTemplate template;

    @EndpointInject(uri = "mock:result")
    MockEndpoint result;

    static List<Map> results = new ArrayList<Map>() {{
        add(new HashMap<String, String>() {{
            put("GPI", "123456789");
        }});
    }};

    @Before
    public void before() throws Exception {
        camelContext.setTracing(true);

        ModelCamelContext context = (ModelCamelContext) camelContext;
        RouteDefinition route = context.getRouteDefinition("gpiRequest");
        route.adviceWith(context, new RouteBuilder() {
            @Override
            public void configure() throws Exception {
                interceptSendToEndpoint("sql:*").skipSendToOriginalEndpoint().process(new Processor() {
                    @Override
                    public void process(Exchange exchange) throws Exception {
                        exchange.getOut().setBody(results);
                    }
                });
            }
        });
        route.to(result);

        camelContext.start();
    }

    @Test
    public void testMockSQLEndpoint() throws Exception {
        result.expectedMessageCount(1);

        GpiResponse expectedResult = new GpiResponse(new DrugInfo("123456789"));
        result.allMessages().body().contains(expectedResult);

        GpiRequest request = new GpiRequest();
        request.setNDC("123");
        template.sendBody("direct:gpiRequest", request);

        MockEndpoint.assertIsSatisfied(camelContext);
    }
}

I found AdviceWith to be extremely useful as I developed more routes and tests in this project. I used its weaveById feature to intercept calls to stored procedures, replace steps in my routes and remove steps I didn't want to test. For example, in one route, there was a complicated workflow to interact with a customer's data.

  1. Call a stored procedure in a remote database, which then inserts a record into a temp table.
  2. Lookup that data using the value returned from the stored procedure.
  3. Delete the record from the temp table.
  4. Parse the data (as CSV) since the returned value is ~ delimited.
  5. Convert the parsed data into objects, then do database inserts in a local database (if data doesn't exist).

To make matters worse, remote database access was restricted by IP address. This meant that, while developing, I couldn't even manually test from my local machine. To solve this, I used the following (a rough sketch follows the list):

  • interceptSendToEndpoint("bean:*") to intercept the call to my stored procedure bean.
  • weaveById("myJdbcProcessor").before() to replace the temp table lookup with a CSV file.
  • Mockito to mock a JdbcTemplate that does the inserts.
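
Pieced together, a test setup using those three techniques looks roughly like the sketch below. It assumes the same test scaffolding (@UseAdviceWith, an injected CamelContext) as the DrugRouteTests class shown earlier; the route id, the "myJdbcProcessor" id and the canned data are placeholders, and the sketch uses weaveById(...).replace() to stay self-contained, whereas the actual tests used .before() to feed a CSV file.

@Before
public void adviseRoute() throws Exception {
    ModelCamelContext context = (ModelCamelContext) camelContext;
    RouteDefinition route = context.getRouteDefinition("customerData");
    route.adviceWith(context, new AdviceWithRouteBuilder() {
        @Override
        public void configure() throws Exception {
            // 1. don't call the real stored-procedure bean
            interceptSendToEndpoint("bean:*")
                .skipSendToOriginalEndpoint()
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        exchange.getOut().setBody("42"); // pretend the procedure returned a lookup key
                    }
                });
            // 2. swap the temp-table lookup for canned, ~ delimited test data
            weaveById("myJdbcProcessor")
                .replace()
                .process(new Processor() {
                    public void process(Exchange exchange) throws Exception {
                        exchange.getIn().setBody("ACME~123 Main St~Denver~CO");
                    }
                });
            // 3. the JdbcTemplate doing the local inserts was mocked with Mockito (not shown)
        }
    });
    camelContext.start();
}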

To figure out how to configure and execute stored procedures in a route, I used the camel-store-procedure project on GitHub. Mockito's ArgumentCaptor also became very useful when developing a route that called a third-party web service. James Carr has more information on how you might use this to verify values on an argument.
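
For what it's worth, the ArgumentCaptor pattern looks roughly like this; the ThirdPartyClient and LookupRequest types are invented purely for the example.

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.verify;

import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.Mockito;

public class ThirdPartyCallTest {

    // Hypothetical collaborator and request type, standing in for the real web service client.
    interface ThirdPartyClient {
        void send(LookupRequest request);
    }

    static class LookupRequest {
        private final String ndc;
        LookupRequest(String ndc) { this.ndc = ndc; }
        String getNdc() { return ndc; }
    }

    @Test
    public void capturesRequestSentToThirdParty() {
        ThirdPartyClient client = Mockito.mock(ThirdPartyClient.class);

        // ... run the route or code under test, which ends up calling client.send(...)
        client.send(new LookupRequest("54561237201"));

        // capture the argument the mock received and assert on its contents
        ArgumentCaptor<LookupRequest> captor = ArgumentCaptor.forClass(LookupRequest.class);
        verify(client).send(captor.capture());
        assertEquals("54561237201", captor.getValue().getNdc());
    }
}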

To see if my tests were hitting all aspects of the code, I integrated the cobertura-maven-plugin for code coverage reports (generated by running mvn site).

<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <configuration>
        <instrumentation>
          <excludes>
            <exclude>**/model/*.class</exclude>
            <exclude>**/AppInitializer.class</exclude>
            <exclude>**/StoredProcedureBean.class</exclude>
            <exclude>**/SoapActionInterceptor.class</exclude>
          </excludes>
        </instrumentation>
        <check/>
      </configuration>
      <version>2.6</version>
    </plugin>
    ...
<reporting>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>cobertura-maven-plugin</artifactId>
      <version>2.6</version>
    </plugin>

Integration Testing

Writing an integration test was fairly straightforward. I created a DrugRouteITest class, a client using CXF's JaxWsProxyFactoryBean and called the method on the service.

public class DrugRouteITest {

    private static final String URL = "http://localhost:8080/api/drugs";

    protected static DrugService createCXFClient() {
        JaxWsProxyFactoryBean factory = new JaxWsProxyFactoryBean();
        factory.setBindingId("http://schemas.xmlsoap.org/wsdl/soap12/");
        factory.setServiceClass(DrugService.class);
        factory.setAddress(getTestUrl(URL));
        return (DrugService) factory.create();
    }

    @Test
    public void findGpiByNdc() throws Exception {
        // create input parameter
        GpiRequest input = new GpiRequest();
        input.setNDC("54561237201");

        // create the webservice client and send the request
        DrugService client = createCXFClient();
        GpiResponse response = client.findGpiByNdc(input);

        assertEquals("66100525123130", response.getDrugInfo().getGPI());
    }
}

This integration test is only run after Tomcat has started and deployed the app. Unit tests are run by Maven's surefire-plugin, while integration tests are run by the failsafe-plugin. An available Tomcat port is determined by the build-helper-maven-plugin. This port is set as a system property and read by the getTestUrl() method call above.

public static String getTestUrl(String url) {
    if (System.getProperty("tomcat.http.port") != null) {
        url = url.replace("8080", System.getProperty("tomcat.http.port"));
    }
    return url;
}

Below are the relevant bits from pom.xml that determine when to start/stop Tomcat, as well as which tests to run.

<plugin>
  <groupId>org.apache.tomcat.maven</groupId>
  <artifactId>tomcat7-maven-plugin</artifactId>
  <version>2.2</version>
  <configuration>
    <path>/</path>
  </configuration>
  <executions>
    <execution>
      <id>start-tomcat</id>
      <phase>pre-integration-test</phase>
      <goals>
        <goal>run</goal>
      </goals>
      <configuration>
        <fork>true</fork>
        <port>${tomcat.http.port}</port>
      </configuration>
    </execution>
    <execution>
      <id>stop-tomcat</id>
      <phase>post-integration-test</phase>
      <goals>
        <goal>shutdown</goal>
      </goals>
    </execution>
  </executions>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.17</version>
  <configuration>
    <excludes>
      <exclude>**/*IT*.java</exclude>
      <exclude>**/Legacy**.java</exclude>
    </excludes>
    <includes>
      <include>**/*Tests.java</include>
      <include>**/*Test.java</include>
    </includes>
  </configuration>
</plugin>
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-failsafe-plugin</artifactId>
  <version>2.17</version>
  <configuration>
    <includes>
      <include>**/*IT*.java</include>
    </includes>
    <systemProperties>
      <tomcat.http.port>${tomcat.http.port}</tomcat.http.port>
    </systemProperties>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>integration-test</goal>
        <goal>verify</goal>
      </goals>
    </execution>
  </executions>
</plugin>

The most useful part of integration testing came when I copied one of my legacy tests into it and started verifying backwards compatibility. Since we wanted to replace existing services without requiring client changes, I had to make the XML request and response match. Charles (an HTTP debugging proxy) was very useful for this exercise, letting me inspect the request/response and tweak things to match. The following JAX-WS annotations allowed me to change the XML element names and achieve backward compatibility (a sketch follows the list).

  • @BindingType(SOAPBinding.SOAP12HTTP_BINDING)
  • @WebResult(name = "return", targetNamespace = "...")
  • @ResponseWrapper(localName = "gpiResponse")
  • @WebParam(name = "args0", targetNamespace = "...")
  • @XmlElement(name = "...")
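
Applied to the DrugService interface from earlier, those annotations end up looking something like the sketch below. The namespace URIs are placeholders (the real ones came from the legacy WSDL), @BindingType belongs on the endpoint implementation rather than the interface, and @XmlElement goes on the generated request/response JAXB classes.

import javax.jws.WebMethod;
import javax.jws.WebParam;
import javax.jws.WebResult;
import javax.jws.WebService;
import javax.xml.ws.ResponseWrapper;

// Hypothetical sketch; the namespaces below are placeholders, not the client's real values.
@WebService
public interface DrugService {

    @WebMethod(operationName = "gpiRequest")
    @WebResult(name = "return", targetNamespace = "http://legacy.example.com/drugs")
    @ResponseWrapper(localName = "gpiResponse")
    GpiResponse findGpiByNdc(
        @WebParam(name = "args0", targetNamespace = "http://legacy.example.com/drugs") GpiRequest request);
}
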
Continuous Integration and Deployment

My next order of business was configuring a job in Jenkins to continually test and deploy. Getting all the tests to pass was easy, and deploying to Tomcat was simple enough thanks to the Deploy Plugin and this article. However, after a few deploys, Tomcat would throw OutOfMemory exceptions. Therefore, I ended up creating a second "deploy" job that stops Tomcat, copies the successfully-built WAR to $CATALINA_HOME/webapps, removes $CATALINA_HOME/webapps/ROOT and restarts Tomcat. I used Jenkins' "Execute shell" feature to configure these three steps. I was pleased to find my /etc/init.d/tomcat script still worked for starting Tomcat at boot time and providing convenient start/stop commands.

Summary

This article shows you how I implemented and tested a simple Apache Camel route. The route described only does a simple database lookup, but you can see how Camel's testing support allows you to mock results and concentrate on developing your route logic. I found its testing framework very useful but not well documented, so hopefully this article helps to fix that. In the next article, I'll talk about upgrading to Spring 4, integrating Spring Boot and our team's microservice deployment discussions.

Categories: FLOSS Project Planets

Sergey Beryozkin: CXF becomes friends with Tika and Lucene

Wed, 2014-10-15 04:59
You may have been thinking for a while: would it actually be cool to get some experience with Apache Lucene and Apache Tika and enhance the JAX-RS services you work on along the way? Lucene and Tika are those cool projects people are talking about, but as it happens there has never been an opportunity to use them in your project...

Apache Lucene is a well-known project whose community keeps innovating, improving and optimizing the capabilities of various text analyzers. Apache Tika is a cool project which can be used to get the metadata and content out of binary resources in formats such as PDF, ODT, etc., with lots of other formats being supported. As a side note, Apache Tika is not only a cool project, it is also a very democratic project where everyone is welcomed from the get-go - the perfect project to start your Apache career if you are thinking of getting involved in one of the Apache projects.

Now, a number of services you have written may support uploads of binary resources; for example, you may have a JAX-RS server accepting multipart/form-data uploads.

As it happens, Lucene plus Tika is what one needs to be able to analyze the binary content easily and effectively. Tika gives you the metadata and the content; Lucene tokenizes it and helps you search over it. As such you can let your users search for and download only those PDF or other binary resources which match the search query. It is something your users will appreciate.
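
If you have never wired the two together, the core of it really is only a few lines. Here is a minimal, standalone sketch (not the CXF demo code) that extracts a file's content with Tika and indexes it with Lucene; it assumes Lucene 4.x and Tika 1.x style APIs.

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;

import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.RAMDirectory;
import org.apache.lucene.util.Version;
import org.apache.tika.metadata.Metadata;
import org.apache.tika.parser.AutoDetectParser;
import org.apache.tika.parser.ParseContext;
import org.apache.tika.sax.BodyContentHandler;

// Minimal sketch: pull content and metadata out of an uploaded PDF/ODT with Tika,
// then index the text with Lucene so users can search it later.
public class TikaLuceneSketch {

    public static void main(String[] args) throws Exception {
        // 1. Tika: detect the format and extract content + metadata
        AutoDetectParser parser = new AutoDetectParser();
        BodyContentHandler handler = new BodyContentHandler(-1); // -1 = no write limit
        Metadata metadata = new Metadata();                      // will hold Content-Type, author, etc.
        try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
            parser.parse(in, handler, metadata, new ParseContext());
        }

        // 2. Lucene: tokenize and index the extracted text
        Directory index = new RAMDirectory();
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_4_9, new StandardAnalyzer(Version.LUCENE_4_9));
        try (IndexWriter writer = new IndexWriter(index, config)) {
            Document doc = new Document();
            doc.add(new StringField("filename", args[0], Field.Store.YES));
            doc.add(new TextField("contents", handler.toString(), Field.Store.NO));
            writer.addDocument(doc);
        }
    }
}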

CXF 3.1.0, which is under active development, offers utility support for working with Tika and Lucene. Andriy Redko worked on improving the integration with Lucene and introducing content extraction support with the help of Tika. It is all shown in a nice jax_rs/search demo which offers a Bootstrap UI for uploading, searching and downloading of PDF and ODT files. The demo will be shipped in the CXF distribution.

Please start experimenting today with the demo (download the CXF 3.1.0-SNAPSHOT distribution), let us know what you think, and get your JAX-RS project to the next level.

You are also encouraged to experiment with Apache Solr, which offers an advanced search engine on top of Lucene and also utilizes Tika.

Enjoy!

Categories: FLOSS Project Planets

Heshan Suriyaarachchi: Stackmap frame errors when building the aspectj project with Java 1.7

Wed, 2014-10-15 01:15

I had a project which used aspectj and it was building fine with Java 1.6. When I updated it to Java 1.7 I saw the following error.

[INFO] Molva the Destroyer Aspects ....................... FAILURE [2.324s]
[INFO] Molva The Destroyer Client ........................ SKIPPED
[INFO] Molva The Destroyer Parent ........................ SKIPPED
[INFO] Molva The Destroyer Distribution .................. SKIPPED
[INFO] ------------------------------------------------------------------------
[INFO] BUILD FAILURE
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 2.424s
[INFO] Finished at: Tue Oct 14 11:16:19 PDT 2014
[INFO] Final Memory: 12M/310M
[INFO] ------------------------------------------------------------------------
[ERROR] Failed to execute goal org.codehaus.mojo:exec-maven-plugin:1.1:java (default) on project molva-the-destroyer-aspects: An exception occured while executing the Java class. Expecting a stackmap frame at branch target 30
[ERROR] Exception Details:
[ERROR] Location:
[ERROR] com/concur/puma/molva/aspects/TestTarget.main([Ljava/lang/String;)V @12: invokestatic
[ERROR] Reason:
[ERROR] Expected stackmap frame at this location.
[ERROR] Bytecode:
[ERROR] 0000000: 2a4d b200 5e01 012c b800 644e b800 c62d
[ERROR] 0000010: b600 ca2c 2db8 00bb 2db8 00bf 57b1 3a04
[ERROR] 0000020: b800 c62d 1904 b600 ce19 04bf
[ERROR] Exception Handler Table:
[ERROR] bci [12, 30] => handler: 30
[ERROR] -> [Help 1]
[ERROR]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR]
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
My Maven configuration looked like the following.
<dependencies>
<dependency>
<groupId>junit</groupId>
<artifactId>junit</artifactId>
<version>3.8.1</version>
<scope>test</scope>
</dependency>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<version>1.6.5</version>
</dependency>
<dependency>
<groupId>log4j</groupId>
<artifactId>log4j</artifactId>
<version>1.2.17</version>
</dependency>
<dependency>
<groupId>org.perf4j</groupId>
<artifactId>perf4j</artifactId>
<version>0.9.16</version>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.2</version>
<configuration>
<source>1.7</source>
<target>1.7</target>
</configuration>
<executions>
<execution>
<goals>
<goal>compile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>exec-maven-plugin</artifactId>
<version>1.1</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>java</goal>
</goals>
</execution>
</executions>
<configuration>
<mainClass>com.concur.puma.molva.aspects.TestTarget</mainClass>
</configuration>
</plugin>
</plugins>
</build>

Fix

The default compliance level for the aspectj-maven-plugin is 1.4, according to http://mojo.codehaus.org/aspectj-maven-plugin/compile-mojo.html#complianceLevel. Since I did not have the <complianceLevel> tag specified, the build was using the default value. Once I inserted the tag into the configuration, the build was successful.
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-14

Tue, 2014-10-14 18:58
  • Dublin’s Best-Kept Secret: Blas Cafe

    looks great, around the corner from Cineworld on King’s Inn St, D1

    (tags: dublin cafes food blas-cafe eating northside)

  • “Meta-Perceptual Helmets For The Dead Zoo”

    with Neil McKenzie, Nov 9-16 2014, in the Natural History Museum in Dublin: ‘These six helmets/viewing devices start off by exploring physical conditions of viewing: if we have two eyes, then why is our vision so limited? Why do we have so little perception of depth? Why don’t our two eyes offer us two different, complementary views of the world around us? Why can’t they extend from our body so we can see over or around things? Why don’t they allow us to look behind and in front at the same time, or sideways in both directions? Why can’t our two eyes simultaneously focus on two different tasks? Looking through Michael Land’s defining work Animal Eyes, we see that nature has indeed explored all of these possibilities: a Hammerhead Shark has hyper-stereo vision; a horse sees 350° around itself; a chameleon has separately rotatable eyes… The series of Meta-Perceptual Helmets do indeed explore these zoological typologies: proposing to humans the hyper-stereo vision of the hammerhead shark; or the wide peripheral vision of the horse; or the backward/forward vision of the chameleon… but they also take us into the unnatural world of mythology and literature: the Cheshire Cat Helmet is so called because of the strange lingering effect of dominating visual information such as a smile or the eyes; the Cyclops allows one large central eye to take in the world around while a second tiny hidden eye focuses on a close up task (why has the creature never evolved that can focus on denitting without constantly having to glance around?).’ (via Emma)

    (tags: perception helmets dublin ireland museums dead-zoo sharks eyes vision art)

  • Grade inflation figures from Irish universities

    The figures show that, between 2004 and 2013, an average of 71.7 per cent of students at TCD graduated with either a 1st or a 2.1. DCU and UCC had the next highest rate of such awards (64.3 per cent and 64.2 per cent respectively), followed by UCD (55.8 per cent), NUI Galway (54.7 per cent), Maynooth University (53.7 per cent) and University of Limerick (50.2 per cent).

    (tags: tcd grades grade-inflation dcu ucc ucd ireland studies academia third-level)

  • webrtcH4cKS: ~ coTURN: the open-source multi-tenant TURN/STUN server you were looking for

    Last year we interviewed Oleg Moskalenko and presented the rfc5766-turn-server project, which is a free open source and extremely popular implementation of a TURN and STUN server. A few months later we even discovered Amazon is using this project to power its Mayday service. Since then, a number of features beyond the original RFC 5766 have been defined at the IETF and a new open-source project was born: the coTURN project.

    (tags: webrtc turn sturn rfc-5766 push nat stun firewalls voip servers internet)

  • Google Online Security Blog: This POODLE bites: exploiting the SSL 3.0 fallback

    Today we are publishing details of a vulnerability in the design of SSL version 3.0. This vulnerability allows the plaintext of secure connections to be calculated by a network attacker. ouch.

    (tags: ssl3 ssl tls security exploits google crypto)

Categories: FLOSS Project Planets

Heshan Suriyaarachchi: Compile aspectj project containing Java 1.7 Source

Tue, 2014-10-14 16:47

The following Maven configuration lets you compile a project with Java 1.7 source.
<dependencies>
<dependency>
<groupId>org.aspectj</groupId>
<artifactId>aspectjrt</artifactId>
<!--<version>1.6.5</version>-->
<version>1.8.2</version>
</dependency>
</dependencies>

<build>
<plugins>
<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>aspectj-maven-plugin</artifactId>
<version>1.7</version>
<configuration>
<complianceLevel>1.7</complianceLevel>
<source>1.7</source>
<target>1.7</target>
</configuration>
<executions>
<execution>
<!--<phase>process-sources</phase>-->
<goals>
<goal>compile</goal>
<goal>test-compile</goal>
</goals>
</execution>
</executions>
</plugin>
</plugins>
</build>
Categories: FLOSS Project Planets

Chris Hostetter: Stump the Chump is Coming to D.C.!

Tue, 2014-10-14 16:38

In just under a month, Lucene/Solr Revolution will be coming to Washington D.C. — and once again, I’ll be in the hot seat for Stump The Chump.

If you are not familiar with “Stump the Chump” it’s a Q&A style session where “The Chump” (That’s Me!) is put on the spot with tough, challenging, unusual questions about Lucene & Solr — live, on stage, in front of hundreds of rambunctious convention goers, with judges who have all seen and thought about the questions in advance and get to mock The Chump (still me) and award prizes to people whose questions do the best job of “Stumping The Chump”.

People frequently tell me it’s the most fun they’ve ever had at a Tech Conference — You can judge for yourself by checking out the videos from last year’s events: Lucene/Solr Revolution 2013 in Dublin, and Lucene/Solr Revolution 2013 in San Diego.

I’ll be posting more details in the weeks ahead, but until then you can subscribe to this blog (or just the “Chump” tag) to stay informed.

And if you haven’t registered for Lucene/Solr Revolution yet, what are you waiting for?!?!

The post Stump the Chump is Coming to D.C.! appeared first on Lucidworks.

Categories: FLOSS Project Planets

Justin Mason: Elsewhere….

Tue, 2014-10-14 11:14

It’s been a while since I wrote a long-form blog post here, but this post on the Swrve Engineering blog is worth a read; it describes how we use SSD caching on our EC2 instances to greatly improve EBS throughput.

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Using JAAS with Apache CXF

Tue, 2014-10-14 08:14
Apache CXF supports a wide range of tokens for authentication (SAML, UsernameTokens, Kerberos, etc.), and also offers different ways of authenticating these tokens. A standard way of authenticating a received token is to use a JAAS LoginModule. This article will cover some of the different ways you can configure JAAS in CXF, and some of the JAAS LoginModules that are available.

1) Configuring JAAS in Apache CXF

There are a number of different ways to configure your CXF web service to authenticate tokens via JAAS. For all approaches, you must define the System property "java.security.auth.login.config" to point towards your JAAS configuration file.

1.1) JAASLoginInterceptor

CXF provides an interceptor called the JAASLoginInterceptor that can be added either to the "inInterceptor" chain of an endpoint (JAX-WS or JAX-RS) or a CXF bus (so that it applies to all endpoints). The JAASLoginInterceptor typically authenticates a Username/Password credential (such as a WS-Security UsernameToken or HTTP/BA) via JAAS. Note that for WS-Security, you must tell WSS4J not to authenticate the UsernameToken itself, but just to process it and store it for later authentication via the JAASLoginInterceptor. This is done by setting the JAX-WS property "ws-security.validate.token" to "false".

At a minimum it is necessary to set the "contextName" attribute of the JAASLoginInterceptor, which references the JAAS Context Name to use. It is also possible to define how to retrieve roles as part of the authentication process; by default, CXF interprets javax.security.acl.Group Objects as "role" Principals. See the CXF wiki for more information on how to configure the JAASLoginInterceptor. After successful authentication, a CXF SecurityContext Object is created with the name and roles of the authenticated principal.
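
For a programmatic JAX-WS endpoint, the wiring might look like the sketch below; the "myRealm" context name is illustrative, and the same thing can of course be done in Spring or Blueprint XML.

import java.util.Collections;

import org.apache.cxf.interceptor.security.JAASLoginInterceptor;
import org.apache.cxf.jaxws.EndpointImpl;

// Sketch: attach a JAASLoginInterceptor to a published CXF endpoint.
// "myRealm" must match an entry in the file referenced by -Djava.security.auth.login.config.
public class JaasEndpointConfig {

    public static void configure(EndpointImpl endpoint) {
        JAASLoginInterceptor jaasInterceptor = new JAASLoginInterceptor();
        jaasInterceptor.setContextName("myRealm");
        endpoint.getInInterceptors().add(jaasInterceptor);

        // for WS-Security UsernameTokens, let the interceptor (not WSS4J) do the authentication
        endpoint.setProperties(Collections.<String, Object>singletonMap("ws-security.validate.token", "false"));
    }
}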

1.2) JAASAuthenticationFeature

Newer versions of CXF also have a CXF Feature called the JAASAuthenticationFeature. This simply wraps the JAASLoginInterceptor with default configuration for Apache Karaf. If you are deploying a CXF endpoint in Karaf, you can just add this Feature to your endpoint or Bus without any additional information, and CXF will authenticate the received credential to whatever Login Modules have been configured for the "karaf" realm in Apache Karaf.

1.3) JAASUsernameTokenValidator

As stated above, it is possible to validate a WS-Security UsernameToken in CXF via the JAASLoginInterceptor or the JAASAuthenticationFeature by first setting the JAX-WS property "ws-security.validate.token" to "false". This tells WSS4J to avoid validating UsernameTokens. However it is possible to also validate UsernameTokens using JAAS directly in WSS4J via the JAASUsernameTokenValidator. You can configure this validator when using WS-SecurityPolicy via the JAX-WS property "ws-security.ut.validator".
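
A rough sketch of plugging that validator in programmatically follows; the "myRealm" context name is illustrative, and the package name assumes WSS4J 2.x (as used by CXF 3.x), where the class lives under org.apache.wss4j.dom.validate.

import java.util.HashMap;
import java.util.Map;

import org.apache.cxf.jaxws.EndpointImpl;
import org.apache.wss4j.dom.validate.JAASUsernameTokenValidator;

// Sketch: let WSS4J validate UsernameTokens directly against a JAAS context.
public class UsernameTokenJaasConfig {

    public static void configure(EndpointImpl endpoint) {
        JAASUsernameTokenValidator validator = new JAASUsernameTokenValidator();
        validator.setContextName("myRealm");

        Map<String, Object> props = new HashMap<String, Object>();
        props.put("ws-security.ut.validator", validator);
        endpoint.setProperties(props);
    }
}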

2) Using JAAS LoginModules in Apache CXF

Once you have decided how you are going to configure JAAS in Apache CXF, it is time to pick a JAAS LoginModule that is appropriate for your authentication requirements. Here are some examples of LoginModules you can use.

2.1) Validating a Username + Password to LDAP / Active Directory

For validating a Username + Password to an LDAP / Active Directory backend, use one of the following login modules:
  • com.sun.security.auth.module.LdapLoginModule: Example here (context name "sun").
  • org.eclipse.jetty.plus.jaas.spi.LdapLoginModule: Example here (context name "jetty"). Available via the org.eclipse.jetty/jetty-plus dependency. This login module is useful as it's easy to retrieve roles associated with the authenticated user.
2.2) Validating a Kerberos token

Kerberos tokens can be validated via:
  • com.sun.security.auth.module.Krb5LoginModule: Example here.
2.3) Apache Karaf specific LoginModules

Apache Karaf contains some LoginModules that can be used when deploying your application in Karaf:
  • org.apache.karaf.jaas.modules.properties.PropertiesLoginModule: Authenticates Username + Passwords and retrieves roles via "etc/users.properties".
  • org.apache.karaf.jaas.modules.properties.PublickeyLoginModule: Authenticates SSH keys and retrieves roles via "etc/keys.properties".
  • org.apache.karaf.jaas.modules.properties.OsgiConfigLoginModule: Authenticates Username + Passwords and retrieves roles via the OSGi Config Admin service.
  • org.apache.karaf.jaas.modules.properties.LDAPLoginModule: Authenticates Username + Passwords and retrieves roles from an LDAP backend.
  • org.apache.karaf.jaas.modules.properties.JDBCLoginModule:  Authenticates Username + Passwords and retrieves roles from a database.
  • org.apache.karaf.jaas.modules.properties.SyncopeLoginModule: Authenticates Username + Passwords and retrieves roles via the Apache Syncope IdM.
See Jean-Baptiste Onofré's excellent blog for a description of how to set up and test the SyncopeLoginModule. Note that it is also possible to use this LoginModule in other containers, see here for an example.
Categories: FLOSS Project Planets

Carlos Sanchez: Continuous Discussion panel about Agile, Continuous Delivery, DevOps

Tue, 2014-10-14 03:48

Last week I participated as a panelist in the Continuous Discussions talk hosted by Electric Cloud, and the recording is now available. A bit long but there are some good points in there.

Some excerpts from twitter

@csanchez: “How fast can your tests absorb your debs agility” < and your Ops, and your Infra?

@cobiacomm: @orfjackal says ‘hard to do agile when the customer plan is to release once per year’

@sunandaj17: It’s not just about the tools: it’s a matter of team policies & conventions & it relies on more than 1 kind of tool

@eriksencosta: “You can’t outsource Agile”.

@cobiacomm: biggest agile obstacles -> long regression testing cycles, unclear dependencies, and rebuilding the wheel

The panelists:

Andrew Rivers – blog.andrewrivers.co.uk
Carlos Sanchez – @csanchez   |  http://csanchez.org
Chris Haddad – @cobiacomm
Dave Josephsen – @djosephsen
Eriksen Costa – @eriksencosta  |  blog.eriksen.com.br
Esko Luontola – @orfjackal  |  www.orfjackal.net
John Ryding – @strife25  |  blog.johnryding.com
Norm MacLennan – @nromdotcom  |  blog.normmaclennan.com
J. Randall Hunt – @jrhunt  |  blog.ranman.org
Sriram Narayan – @sriramnarayan  |  www.sriramnarayan.com
Sunanda Jayanth – @sunandaj17  |  http://blog.qruizelabs.com/

Hosts: Sam Fell (@samueldfell) and Anders Wallgren (@anders_wallgren) from Electric Cloud.

http://electric-cloud.com/blog/2014/10/c9d9-continuous-discussions-episode-1-recap/


Categories: FLOSS Project Planets

Mark Miller: No public posts available.

Mon, 2014-10-13 23:05
Mark Miller does not share any public posts. Visit the Google+-Page
Categories: FLOSS Project Planets

Matt Raible: The 21-Day Sugar Detox

Mon, 2014-10-13 22:15

For the past 21 days, I've been on a sugar detox. Becky Reece, a long-time friend of Trish's, inspired us to do it. Becky is a nutritionist and we've always admired how fit she is. Becky challenged a bunch of her friends to do it, and Trish signed up. I told Trish I'd do it with her to make things easier from a cooking perspective.

To be honest, we really didn't know what we were getting into when we started it. Trish ordered the book the week before we started and it arrived a couple days before things kicked off. Trish started reading the book the night before we started. That's when we realized we should've prepared more. The book had all kinds of things you were supposed to do the week before you started the detox. Most things involved shopping and cooking, so you were prepared with pre-made snacks and weren't too stressed out.

We started the detox on Monday, September 22, 2014. That's when we first realized there was no alcohol (we both love craft beer). Trish shopped and cooked like a madwoman that first week. I think we spent somewhere around $600 on groceries. Trish wrote about our first week on her blog.

We are on Sunday Day-7 and made it through the first week with two birthday parties and a nice dinner out eating well and staying on track. I'm not weighing myself until the end, but my face looks a little slimmer, my skin feels smoother and my wedding ring is not as tight as it used to be. I feel great and have started to believe this is the last detox, diet or cleanse I will ever need. Cleansing my life of sugar could be a life changer especially when an Avo-Coconana Smoothie with Almond Butter Pad Thai becomes my new favorite meal.

What she didn't mention is what we discovered shortly after. She'd printed out a list of “Yes” and “No” foods from the book. But, she printed out the hardest level! We'd been doing level 3 for a week! There are three different levels suggested in the book:

Level 1: ½ cup whole grains, full fat dairy and all the meat and vegetables you want
Level 2: No grains, full fat dairy and all the meat and vegetables you want
Level 3: No grains, no dairy, just meat, nuts and veggies

All levels allow fruit: unlimited limes and lemons, but only one green apple, green-tipped banana or grapefruit per day.

Figuring “we made it this far”, we decided to continue with the hardest level. Unlike Trish, I did weigh myself and was pumped to find I'd lost 5 lbs (2.3 kg) in the first week. I was trying to exercise everyday as well, by riding my bike, running, hiking and walking. Nothing strenuous, just something to get the blood pumping.

I did notice during the second week that I'd get really tired when exercising. I'd hit a wall after about 30-40 minutes where I felt like I'd lost all my energy. I'd felt this before when I was out of shape, but I didn't think I was that out of shape when we'd started.

The second week, our kids were at their Mom's house, so we ate out a bit more and socialized with friends. Getting through happy hours wasn't too hard, as long as we mentioned we were on a sugar detox up front. I don't have much of a sweet tooth, so I never had any chocolate cravings. I also rarely drink soda (except with Whiskey), so I didn't really miss much from the sweet side.

What I did miss was sugar in my coffee. Black coffee still makes my face wrinkle after three weeks and I finally switched to Green Tea during the last week.

During the second week, I noticed my weight loss plateaued. I think this is why they don't want you to weigh yourself - so you don't get discouraged. Even though the pounds stopped dropping, I did notice my pants were a lot looser around the waist.

We watched the movie Fed Up for extra motivation at the end of the second week. I thought it was enlightening to learn that when they take out fat from food, they often add sugar to give it back flavor. The sugar content difference between a diet and regular Coke? None - they both have 3.5 teaspoons.

They mentioned that most kids have their recommended daily allowance of sugar before they leave the house in the morning (in their bowl of cereal and milk). Apparently, adding sugar all started in the late 70s / early 80s when dieting became a fad. The USDA (or someone similar) recommended they warn Americans about sugar, but they chose to strike that from the record and warn them about fat instead. In the last couple years, they've discovered that fat isn't that bad and sugar is likely the cause of our country's obesity problem. They also mentioned that there's a lot of folks that are skinny, but fat on the inside.

The third week was the hardest one. This was mainly because Trish traveled out of town for work and I became a single parent with a mean cooking habit. I was amazed at the amount of dishes I went through, and I was only cooking breakfast and dinner for three. I must've spent almost two hours in the kitchen each day.

This final week is when I realized the sugar detox wasn't a sustainable long term practice. On Wednesday of that week, I rode from my kids school to my office in LoDo. It's a 15-mile ride and took me just over an hour. I felt fine while riding, but once I arrived, I felt sick. I was able to work, drink water and take deep breaths for a couple hours to feel better. However, I ended up taking a nap before noon to shake the weakness I felt. That night, it took me a bit longer to ride back, because it was uphill and I stopped a few times to rest. I loaded up on a green-tipped banana before I left, as well as a couple handfuls of almonds. The same sickness hit me after riding and I almost threw up an hour after the ride. After that experience, I decided not to push myself when exercising. Not until I was off the sugar detox anyway.

The next day I encountered a phenomenon I haven't had in years. I had to roll my pant legs up because my pants kept falling down and dragging on the ground. I also experienced a few headaches this week, something I rarely get.

In fact, as I'm writing this (on day 22), I still have a headache I woke up this morning with. This could be caused by other stressors in my life, for example, looking for a new gig and realizing how much I've spent on my VW Bus project.

As far as feeling good and skin complexion - two things the book said you'd notice - I haven't noticed them that much. I didn't feel terrible before, but I was annoyed by how tight my pants were. More energy? Nope, I feel like I had more before. During the detox, I didn't get the 2:30 doldrums, but I don't recall feeling them much before either. I did notice that I was never famished when on the detox. If my stomach growled, it was merely an indicator that I should eat in the next hour or two. But I didn't feel like I needed to eat right away. There were some evenings where I was very hungry, but we often had snacks (nuts and jerky) to make the hunger go away.

I weighed myself on the morning of day 21 and I was down 12 pounds (5.4 kg). I weighed myself this morning (day 22) and I'm down 15 pounds (6.8 kg)! Who knew you could lose 3 pounds in 24 hours?!

I realize the biggest problem with these crash diets is keeping the weight off. However, the detox is also designed to change one's taste buds and I think it succeeded there. I'm much more aware of sugars in our food supply and I plan to eat a lot less of it. I hope to keep losing weight, but I'm only aiming for a pound a week for the next couple months.

My plan is to add back in full fat dairy, keep exercising and eat sugar-free meals more often than not. I'm also going to steer clear of any low-fat foods, as they often add sugar to make up for the fat flavor loss. With any luck, I'll be in great shape for this year's ski season!

Many thanks to Becky for inspiring Trish and to Trish for asking me to do it with her. I needed it a lot more than she did.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-13

Mon, 2014-10-13 18:58
Categories: FLOSS Project Planets

Nick Kew: To phish, or not to phish?

Mon, 2014-10-13 03:20

I recently had email telling me my password for $company VPN is due to expire, and directing me to a URL to update it.

Legitimate or phishing?  Let’s examine it.

It follows the exact form of similar legitimate emails I’ve had before.  Password expires in 14 days.  Daily updates decrementing the day count until I change it.  So far so good.

However, it’s directing me to an unfamiliar URL: https://$company.okta.com/.   Big red flag!  But $company outsources a range of admin functions in this manner, so it’s entirely plausible.

It appears to come from a legitimate source.  But since all $company email is outsourced to gmail, the information I can glean from the headers is limited.  How much trust can I place in gmail’s SPF telling me the sender is valid?
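
(Aside: one way to judge how far that trust should extend is to look at what the receiving hop actually recorded. Gmail stamps its SPF/DKIM/DMARC verdicts into an Authentication-Results header, and some hops add a Received-SPF header as well. Below is a rough Python sketch for inspecting a message saved locally - the filename suspect.eml is purely hypothetical - and note it only reports what that hop claims; it performs no verification of its own:)

    import email
    from email import policy

    # Load the suspect message from a local file (suspect.eml is a hypothetical name).
    with open("suspect.eml", "rb") as fh:
        msg = email.message_from_binary_file(fh, policy=policy.default)

    # Print whatever SPF verdicts the receiving hops recorded. This only reads
    # headers already present in the message; it does not do an SPF lookup itself.
    for name in ("Authentication-Results", "Received-SPF"):
        for value in msg.get_all(name, []):
            text = str(value)
            print(f"{name}: {text}")
            if "spf=pass" in text.lower():
                print("  -> this hop recorded an SPF pass")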

A look on $company’s intranet fails to find anything relevant (though in the absence of a search function I probably wouldn’t find it anyway without a truly gruelling trawl).  OK, let’s google for evidence of a legitimate connection between $company and okta.com.  I’ve resolved similar problems to my own satisfaction that way before both for $company and other such situations (e.g. here or here), but the hurdle for a $company-VPN password – even one I’m about to change – has to be high.

Googling finds me only inconclusive evidence.  There’s a linkedin page for $company’s sysop, only it turns out he’s moved on and the linkedin page is just listing both $company and okta skills in his CV.  There’s a PDF at $company’s website with instructions for setting up some okta product (though it’s one of those that insults you with big cuddly pictures of selecting a series of menu options without actually saying anything non-obvious).

Hmmm …

OK, maybe I can get okta.com to prove itself, with the kind of security question your bank asks when you ‘phone it.  Let’s use okta’s “Password Reset”.  I expect it’ll send a one-off token I can use to set a new password.  If legit, that’ll work; if not then the newly-minted password is worthless and I just abandon it.  But no such thing: instead of sending me such a token, it tells (emails) me:

Your Okta account is configured to use the same password you currently use for logging in to your organization’s Windows network. Use your Windows account password to sign in to Okta. Please use the password reset function in Windows to reset your password.

Well, b***er that.  Windows account password?  Windows network?  I have no such thing, and neither does $company expect me to.  I expect $company may have a few windows boxes, but they’re certainly not the norm.  No doubt it just means the LDAP password I’m supposed to be changing, but if I know that then why should I be asking it for password reset?  Bah, Humbug!

One more thing to try before a humiliating request for help over something I should be able to deal with myself.  Somewhere in my gmail I can dig up previous password reset reminders, with a URL somewhere on $company’s own intranet.  Try that URL.  Yes, it still works, and I can reset my VPN password there.  All that investigation for … what?

Well, there’s a value to it.  Namely the acid test: does the daily password reminder stop after I’ve reset the password?  If it’s genuine then it shares information with $intranet and knows I’ve reset my password.  If it’s a phish then it knows nothing.  So now I’m getting some real evidence: if the password reminders stop then it’s genuine.

They do stop.  So I conclude it is indeed genuine.

Unless it’s so ultra-sophisticated that it’s been warned off by my having visited the site and used password reset, albeit unsuccessfully.  Waiting to try again in a few months?  Hmmm ….

Well, if $company hasn’t outsourced it then the intranet-based password reset will continue to work next time.  If it doesn’t work next time then there’s one more piece of evidence it’s genuine.


Categories: FLOSS Project Planets

Bryan Pendleton: Link Clearance

Sun, 2014-10-12 22:53

Man it was hot today. Doesn't fall mean that it starts to cool down?

  • The Physics of Doing an Ollie on a Skateboard, or, the Science of Why I Can’t Skate: So here’s a thought – maybe I can use physics to learn how to do an ollie. Here’s the plan. I’m going to open up the above video of skateboarder Adam Shomsky doing an ollie, filmed in glorious 1000 frames-per-second slow motion, and analyze it in the open source physics video analysis tool Tracker.
  • 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI '14): As part of our commitment to open access, the proceedings from the Symposium are now free and openly accessible via the technical sessions Web page.
  • The Horror of a 'Secure Golden Key': A “golden key” is just another, more pleasant, word for a backdoor—something that allows people access to your data without going through you directly. This backdoor would, by design, allow Apple and Google to view your password-protected files if they received a subpoena or some other government directive. You'd pick your own password for when you needed your data, but the companies would also get one, of their choosing. With it, they could open any of your docs: your photos, your messages, your diary, whatever.
  • Malware needs to know if it's in the Matrix: A presentation from UCSB's professor Giovanni Vigna (who runs the Center for CyberSecurity and Seclab); he's seeing more and more malware that keeps its head down on new infection sites, cautiously probing the operating system to try and determine if it's running on a real computer or if it's a head in a jar, deploying all kinds of tricks to get there.
  • 44 engineering management lessons: “30. Most conflict happens because people don’t feel heard. Sit down with each person and ask them how they feel. Listen carefully. Then ask again. And again. Then summarize what they said back to them. Most of the time that will solve the problem.”
  • Unlocked 10Gbps TX wirespeed smallest packet single core: The single core 14.8Mpps performance number is an artificial benchmark performed with pktgen, which besides spinning the same packet (skb), now also notifies the NIC hardware after populating its TX ring buffer with a "burst" of packets.
  • Redis cluster, no longer vaporware. The consistency model is the famous “eventual consistency” model. Basically if nodes get desynchronized because of partitions, it is guaranteed that when the partition heals, all the nodes serving a given key will agree about its value.

    However the merge strategy is “last failover wins”, so writes received during network partitions can be lost. A common example is what happens if a master is partitioned into a minority partition with clients trying to write to it. If when the partition heals, in the majority side of the partition a slave was promoted to replace this master, the writes received by the old master are lost.

  • Using Git Hunks: Many of the git subcommands can be passed --patch or -p for short. When used with git add, we can compose a commit with exactly the changes we want, instead of just adding whole files. Once you hit enter, you get an interactive prompt where you're presented with a diff and a set of options.
  • Slasher Ghost, and Other Developments in Proof of Stake: The fundamental problem that consensus protocols try to solve is that of creating a mechanism for growing a blockchain over time in a decentralized way that cannot easily be subverted by attackers. If a blockchain does not use a consensus protocol to regulate block creation, and simply allows anyone to add a block at any time, then an attacker or botnet with very many IP addresses could flood the network with blocks, and particularly they can use their power to perform double-spend attacks – sending a payment for a product, waiting for the payment to be confirmed in the blockchain, and then starting their own “fork” of the blockchain, substituting the payment that they made earlier with a payment to a different account controlled by themselves, and growing it longer than the original so everyone accepts this new blockchain without the payment as truth.
  • Economies of Scale in Peer-to-Peer Networks: I've been working on P2P technology for more than 16 years, and although I believe it can be very useful in some specific cases, I'm far less enthusiastic about its potential to take over the Internet.

    Below the fold I look at some of the fundamental problems standing in the way of a P2P revolution, and in particular at the issue of economies of scale.

  • A Scalability Roadmap: You might be surprised that old blocks aren’t needed to validate new transactions. Pieter Wuille re-architected Bitcoin Core a few releases ago so that all of the data needed to validate transactions is kept in a “UTXO” (unspent transaction output) database. The amount of historical data needed that absolutely must be stored depends on the plausible depth of a blockchain reorganization. The longest reorganization ever experienced on the main network was 24 blocks during the infamous March 11, 2013 chain fork.
  • Why the Trolls Will Always Win: But here’s the key: it turned out he wasn’t outraged about my work. His rage was because, in his mind, my work didn’t deserve the attention. Spoiler alert: “deserve” and “attention” are at the heart.
Categories: FLOSS Project Planets

Bryan Pendleton: Academic research on VCS approaches

Sun, 2014-10-12 10:56

I've been spending my time recently reading some interesting academic research papers regarding the different workflows and behaviors that arise in DVCS systems vs CVCS systems, and thought I'd share some links.

I'm not tremendously impressed with the level of sophistication of academic research into VCS functionality, but it does seem to be slowly improving and these recent papers have some interesting observations.

The best of the bunch, I think, are the papers from Christian Bird of Microsoft, which is perhaps no surprise because the industrial side of Microsoft has been doing some of the best commercial work in VCS systems recently, and Microsoft certainly has experience dealing with the issues that matter to software developers.

  • Work Practices and Challenges in Pull-Based Development: The Integrator’s Perspective: In the pull-based development model, the integrator has the crucial role of managing and integrating contributions. This work focuses on the role of the integrator and investigates working habits and challenges alike. We set up an exploratory qualitative study involving a large-scale survey of 749 integrators, to which we add quantitative data from the integrator’s project. Our results provide insights into the factors they consider in their decision making process to accept or reject a contribution.
  • Will My Patch Make It? And How Fast? Case Study on the Linux Kernel: The Linux kernel follows an extremely distributed reviewing and integration process supported by 130 developer mailing lists and a hierarchy of dozens of Git repositories for version control. Since not every patch can make it, and of those that do, some patches require a lot more reviewing and integration effort than others, developers, reviewers and integrators need support for estimating which patches are worthwhile to spend effort on and which ones do not stand a chance.
  • Social Coding in GitHub: Transparency and Collaboration in an Open Software Repository: Based on a series of in-depth interviews with central and peripheral GitHub users, we examined the value of transparency for large-scale distributed collaborations and communities of practice. We find that people make a surprisingly rich set of social inferences from the networked activity information in GitHub, such as inferring someone else’s technical goals and vision when they edit code, or guessing which of several similar projects has the best chance of thriving in the long term.
  • How Do Centralized and Distributed Version Control Systems Impact Software Changes? In this paper we present the first in-depth, large scale empirical study that looks at the influence of DVCS on the practice of splitting, grouping, and committing changes. We recruited 820 participants for a survey that sheds light on the practice of using DVCS.
  • Cohesive and Isolated Development with Branches: The adoption of distributed version control (DVC), such as Git and Mercurial, in open-source software (OSS) projects has been explosive. Why is this and how are projects using DVC? This new generation of version control supports two important new features: distributed repositories and histories that preserve branches and merges. Through interviews with lead developers in OSS projects and a quantitative analysis of mined data from the histories of sixty projects, we find that the vast majority of the projects now using DVC continue to use a centralized model of code sharing, while using branching much more extensively than before their transition to DVC.
  • Expectations, Outcomes, and Challenges of Modern Code Review: We empirically explore the motivations, challenges, and outcomes of tool-based code reviews. We observed, interviewed, and surveyed developers and managers and manually classified hundreds of review comments across diverse teams at Microsoft. Our study reveals that while finding defects remains the main motivation for review, reviews are less about defects than expected and instead provide additional benefits such as knowledge transfer, increased team awareness, and creation of alternative solutions to problems.
  • Collaboration in Software Engineering: A Roadmap: Software engineering projects are inherently cooperative, requiring many software engineers to coordinate their efforts to produce a large software system. Integral to this effort is developing shared understanding surrounding multiple artifacts, each artifact embodying its own model, over the entire development process.
  • Is It Dangerous to Use Version Control Histories to Study Source Code Evolution? This allows us to answer: How much code evolution data is not stored in VCS? How much do developers intersperse refactorings and edits in the same commit? How frequently do developers fix failing tests by changing the test itself? How many changes are committed to VCS without being tested? What is the temporal and spatial locality of changes?
  • The Secret Life of Patches: A Firefox Case Study: In this paper, we study the patch lifecycle of the Mozilla Firefox project. The model of a patch lifecycle was extracted from both the qualitative evidence of the individual processes (interviews and discussions with developers), and the quantitative assessment of the Mozilla process and practice. We contrast the lifecycle of a patch in pre- and post-rapid release development.
  • Towards a taxonomy of software change: Previous taxonomies of software change have focused on the purpose of the change (i.e. the why) rather than the underlying mechanisms. This paper proposes a taxonomy of software change based on characterizing the mechanisms of change and the factors that influence these mechanisms.
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-11

Sat, 2014-10-11 18:58
Categories: FLOSS Project Planets

Bryan Pendleton: The Age of Miracles: a very short review

Sat, 2014-10-11 10:34

A traveling friend, passing through, gave us her copy of Karen Thompson Walker's The Age of Miracles: A Novel when she left.

The book is narrated by Julia, an 11-year-old girl in sixth grade.

That's a rough time for a child: social issues; puberty and adolescence; the realization that you're not a child anymore.

When you are 11 years old, and in sixth grade, it seems like everything is changing; it seems like the world as you know it is ending.

But what if, in fact, everything is changing?

And the world as you know it is ending?

I enjoyed Walker's book and zipped right through it, and am now giving it to someone else to enjoy.

Categories: FLOSS Project Planets