Planet Apache


Luciano Resende: Disable "Remove trailing whitespace" defaults in Atom editor

Thu, 2015-03-12 10:51
If you are using the Atom text editor, you might have noticed that it automatically removes all trailing whitespace from your code. This can be very annoying when you are working on changes, and the formatting changes get mixed in with the logic changes. To disable this, update the config.cson file in the .atom directory in your home directory:
cd ~/.atom
vi config.cson

'*':
  whitespace:
    'removeTrailingWhitespace': false


After restarting your editor app, you should be all set.
Categories: FLOSS Project Planets

Justin Mason: Links for 2015-03-11

Wed, 2015-03-11 18:58

Adrian Sutton: Patterns are for People

Wed, 2015-03-11 17:01
Avdi Grimm in Patterns are for People: “Patterns aren’t tools for programming computers; they are tools for programming people. As such, to say that ‘patterns are a language smell’ makes very little sense. Patterns are a tool for augmenting language: our language, the language we use to talk to each other about a problem and its solutions; the language we use to daydream new machines in our minds before committing them to code. Patterns aren’t a language smell; rather, patterns are, by definition, [optional] language features.” There’s so much we can learn if only we stop making everything into a battle between “new” tools or approaches and “old” ones. Patterns are relevant and useful in all forms of programming, and we shouldn’t discard them just because they’re not “cool” anymore. Sadly, we seem to throw out nearly all our experience and learning each time a new approach comes along, rather than learning from and taking advantage of both.

Sergey Beryozkin: Camel CXFRS Improvements

Wed, 2015-03-11 12:51
Camel CXFRS is one of the oldest Camel components. It was created by Willem Jiang, my former colleague from my IONA Technologies days, and has been maintained by him since its early days.

Camel is known to be a very democratic project with respect to supporting all sorts of components, and it has many components that can deal with HTTP invocations. CXFRS is just one of them, but as you can guess from its name, it is dedicated to supporting HTTP endpoints and clients written on top of the Apache CXF JAX-RS implementation.

I think that over the years CXFRS has had a somewhat mixed reception in the community, maybe because it was not deemed ideal for some styles of routing at which other, lighter HTTP-aware Camel components are good.

However, CXFRS has been used by some developers, and it has been significantly improved recently with respect to its usability. I'd like, though, to touch on the very latest updates, which may be of interest.

The main CXFRS feature which appears quite confusing initially is that a CXFRS endpoint (Camel consumer) does not actually invoke the provided JAX-RS implementation. This seems rather strange, but it is what actually helps to integrate CXF JAX-RS into Camel: the JAX-RS runtime is only used to prepare all the data according to the JAX-RS service method signatures, not to invoke the actual service. All the prepared data is made available to custom Camel processors, which extract it from Camel exchanges and make the next routing decisions.

The side effect is that in some cases one cannot simply take an existing JAX-RS service implementation and plug it into a Camel route, unless one uses the CXFRS Bean component, which can route from a Jetty endpoint to a CXF JAX-RS service implementation. That approach works, but it requires another (Jetty-only) Camel component with an absolute HTTP address and has a few limitations of its own.

So the first improvement is that, starting from Camel 2.15.0, one can configure a CXFRS consumer with a 'performInvocation=true' option: the consumer will actually invoke the service implementation, set the JAX-RS response on the Camel exchange, and route to the next custom processor as usual. The custom processor still has all the input parameters as before, but now also has a ready response, which it can customize or otherwise act upon. This also makes it much simpler to convert existing CXF Spring/Blueprint JAX-RS declarations with service implementations into Camel CXFRS endpoints if needed.
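To illustrate, here is a minimal Spring XML sketch of such a consumer. The address, resource class, and processor bean name are made-up examples, not taken from the post:

```xml
<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <!-- performInvocation=true: the consumer invokes the JAX-RS service itself -->
    <from uri="cxfrs://http://localhost:9000/api?resourceClasses=com.example.OrderServiceImpl&amp;performInvocation=true"/>
    <!-- this processor sees both the input parameters and the prepared JAX-RS response -->
    <process ref="responseCustomizer"/>
  </route>
</camelContext>
```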

Note that in the default case one typically provides a no-op CXFRS service implementation (recall, CXFRS does not invoke the service by default; it only needs the method signatures/JAX-RS metadata). Providing interfaces only is more logical given that the invocation is not done by default (a URI-only CXFRS consumer style is also possible, but it is rather limited in what it can do). So the other minor improvement is that, starting from Camel 2.15.0, one can prepare just a JAX-RS interface and use it with a CXFRS consumer, unless the new 'performInvocation' option is set, in which case a complete implementation is needed.

The next one is the new "propagateContexts" configuration option. It allows CXFRS developers to write their custom processors against the JAX-RS Context API: they can extract one of the JAX-RS contexts, such as UriInfo, SecurityContext, or HttpHeaders, as a typed Camel exchange property and work with these contexts to figure out what needs to be done next. This should be a useful option, as the JAX-RS Context API is very useful indeed.

Finally, the CXF No Annotations feature is now supported too: CXFRS users can link to a CXF model document and use it to JAX-RS-enable a given Java interface without JAX-RS annotations. In fact, starting from Camel 2.15.0 it is sufficient to have a model-only CXFRS consumer without a specific JAX-RS service interface or implementation; in this case custom processors get the same request data as usual, with the model serving as the source binding the request URI to a set of request parameters.

We hope to build upon this latest feature going forward, supporting other description formats, to make a model-only CXFRS consumer more capable.

Enjoy!


Claus Ibsen: In the year 2015 - Apache Camel 2.15.0 was released

Wed, 2015-03-11 02:17
So it will be known to mankind that in the first quarter of the year 2015, the awesome Apache Camel 2.15.0 was released.

Apache Camel 2.15.0 was released in 2015

Top 10 of great new features

I will try to do a top 10 of the new functionality this release brings to the table.

#1 - Self documented

This release brings together the steps we have taken over the last couple of releases in terms of being self documented. I have previously blogged about the work we have done. So what does this mean? It means that the same documentation Camel provides for all its:
  • Java and XML DSLs
  • Enterprise Integration Patterns
  • Camel Components
  • Camel Data formats
  • Camel languages
is now included at both design time and runtime. So, for example, if you design Camel routes in either Java or XML, then the documentation is at your fingertips. For Java it's documented as javadoc, and for XML that same documentation is now injected into the Camel XSD schemas, which allows IDE tooling to show you that documentation.
Here is a screenshot of IDEA editing a Camel route in XML and showing the documentation for the splitter EIP.
Camel XML now includes EIP documentation out of the box

At runtime your Camel applications and routes can present the documentation as well, but with the extra twist that the documentation can be overlaid with the runtime configuration in use. In other words, we can now show what all those EIP options, component options, endpoint options, and whatnot mean, what their current values are, what their default values are, whether they are required or optional, which type they are, and, if they are enum-based, what values they can take, and much more.
All that comes together nicely if using the latest hawtio web console, which can present this nicely as the next screenshot shows
hawtio showing EIP endpoints at runtime with current values and documentation

To access the documentation at runtime, Camel offers Java, JMX, and Camel commands as the APIs. hawtio uses the JMX API.
#2 - Categorized components

All the Camel components have now been categorized, which means we can group the components, for example to know which Camel components there are for SQL, social, messaging, file, and so on. That is not all: we have categorized all the EIPs and data formats as well.
And that information is available at both design time and runtime, just as with the documentation. This information can be reached using Java, JMX, and Camel commands. For example, the Camel commands now have a set of Camel catalog commands that can be used to find out which components we have for social, database, etc. The following screenshot shows the Camel commands in action using the new Camel Catalog component to do just that.
Camel commands to list Camel components by group
#3 - Camel Catalog for Tooling

There is a new standalone camel-catalog JAR which is intended for third-party tool developers (and also Camel itself). The JAR contains a list of all Camel EIPs, components, data formats, languages, Camel XSD schemas, and the Maven archetypes, along with the full documentation for all of those. In other words, the camel-catalog JAR is an offline JAR that has all the information we have talked about in #1 and #2. The JAR has Java and JMX APIs, as well as .properties, .xsd, and .json schema files for all of that. So it's easy for tools to tap into that information.
Apache Camel uses the catalog itself, in the Camel commands as shown in the screenshot above.
As mentioned before, there is a set of new Camel commands being developed for JBoss Forge, which is intended to make development with Apache Camel easier. You can say that this is an example of a third party developing Camel tooling. There are commands to make it easier to add new Camel components, configure endpoints, and whatnot. What makes JBoss Forge awesome is that it allows you to run the same binary commands (Forge addons) in any of its supported environments, which are:
  • command line (e.g. CLI)
  • Eclipse
  • Netbeans
  • IDEA
  • web
So for example we have a Camel command to add a new component that allows filtering by group. The following screenshot shows which components we have that are about files.

Camel Forge commands running in Eclipse, as a wizard to add a Camel component to your project

And the same command used from the web (work in progress):

Camel Forge command running in a web browser, as a wizard to add a Camel component to your project
#4 - Reusable Camel commands

In previous releases of Apache Camel, the Camel commands were only for Karaf users. But in this release we have made these commands reusable, in the camel-commands-core module, and there are out-of-the-box implementations for Karaf and Jolokia.

And for third parties there are the JBoss Forge Camel commands, as part of the fabric8 project. By using these commands you can manage Camel applications at runtime, for example to check the state of your routes, and so on.
The screenshot below shows how to use the camel-routes command from JBoss Forge. The same command can be run from Apache Karaf as well.
Camel commands connecting to a remote JVM using Jolokia to check the route status

The Karaf commands only work in the same JVM as Karaf itself, but the Jolokia commands allow managing remote JVMs as well. In the screenshot above I have a locally running JVM with Apache Tomcat, with the standard Camel Tomcat Servlet example deployed. From a JBoss Forge JVM I use the commands to connect to the Tomcat JVM and list all its running Camel routes. If you are familiar with hawtio, then it's the same principle, as hawtio is able to do the same as well.

#5 - Camel Boot / Spring Boot

For the microservices trend we have Camel Boot support, most notably via the camel-spring-boot module. Henryk Konsek did a great job implementing and documenting this module, so there is plenty of material there to run Camel and Spring happily together.

For Camel Boot we are improving on alternatives with CDI, Guice, and plain Java in the upcoming release. For CDI in particular, work is in progress on a much improved camel-cdi, which is expected to be included in Camel 2.16.

#6 - Improved Swagger

The camel-swagger module now supports any kind of runtime (before, it was tied to Spring). In the next release we will add support for hosting multiple Camel applications from the same camel-swagger module, so you can deploy it once and use it for all your Camel apps in the same JVM.

We are also hoping the Swagger team finishes their highly anticipated pure Java implementation, so we can drop Scala and make camel-swagger friendlier to use and much lighter without the Scala libraries.

#7 - Improved Rest DSL

We have improved and fixed some issues with the new Rest DSL that was introduced in Camel 2.14, based on community feedback. For example, it's now easier to configure custom Jackson modules for JSON. We also made it easier to return custom error messages when routing has failed, etc.

We will continue to do so, as we still have some outstanding work.

#8 - Routing Engine optimizations

We also optimized Camel itself, in particular in two areas:
  • Optimized the case-insensitive headers by dropping the need for an internal shadow map.
  • Optimized the type converter to avoid conversion attempts that were not needed, which in some cases sped up routing by a factor of 2-4x.

#9 - More detailed information about inflight messages

Camel now tracks inflight messages in more detail, which allows pinpointing exactly where in the routes the inflight messages currently are being processed. This information can be accessed from Java and JMX, and there is also a Camel command to list inflight messages. hawtio also has a web page that lists all the inflight messages with all that information.
hawtio showing inflight exchanges for the selected route. Notice the message is currently at "delay1" in the route
And the route JMX MBean now also includes a JMX attribute that contains the oldest inflight duration, which makes it easier for monitoring tooling to quickly identify which routes may have messages that "take too long".

Also, during shutdown of Camel itself, in case the graceful shutdown was not able to complete all inflight exchanges, Camel now logs which exchanges are still inflight and where in the routes they currently are.
#10 - Configuring endpoints in XML with long attributes can now break lines

When using XML to configure Camel routes, you may use endpoints that have many options to configure, which can make the endpoint URI long and less human-readable. What you can do now is use line breaks in the attribute, as shown below.

Configuring endpoints in XML can now use line breaks (you can have multiple options on the same line)

An alternative is that the less known <endpoint> element can be used to configure endpoints, which now supports a property style as well, as shown:

Configuring endpoints using property style
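As a rough sketch of the two styles (the FTP endpoint and its option values here are made up for illustration, not taken from the post):

```xml
<!-- style 1: the uri attribute itself may now contain line breaks -->
<to uri="ftp://user@myserver?password=secret&amp;
         recursive=true&amp;
         ftpClient.dataTimeout=30000"/>

<!-- style 2: an <endpoint> definition using property elements -->
<endpoint id="myFtp" uri="ftp://user@myserver">
  <property key="password" value="secret"/>
  <property key="recursive" value="true"/>
  <property key="ftpClient.dataTimeout" value="30000"/>
</endpoint>
```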

#11 - And as always, a bunch of new components

And as usual we have a bunch of new components; this time there are 16 new ones, for example for the talk of the town, Docker. And for Spring 4.0.x users there is a working camel-spring40-test module for testing, since camel-spring-test requires Spring 4.1.x.

You can find more details in the Apache Camel 2.15.0 release notes. And make sure to read the bottom of the release notes, which details what to consider when upgrading.
The release can be downloaded from Maven Central and the Apache Camel website.

Ruwan Linton: API Director - Enterprise API Management Solution

Tue, 2015-03-10 23:05
We at AdroitLogic have been working on an API management solution for nearly 2 years now, and we have finally announced the GA release of the API Director.

The main difference between the API Director and existing API management solutions is that it is an enterprise API management solution: we focused very little on the social aspects of the product and concentrated on the enterprise requirements.

An organization that wants its internal services to be exposed as APIs does not really look for social features; it wants centralized governance of the APIs, the entry points to its SOA infrastructure. The API management solution should be capable of providing the common infrastructure functions to the APIs, letting the service author concentrate only on the actual business logic.

The AdroitLogic API Director provides a multi-organization structure; however, it allows only a single organization to publish APIs, whereas all the other organizations are treated as consumer organizations, which can consume the APIs exposed by the publisher organization's users. This is exactly what we found to be the use case of most enterprise customers.

The AdroitLogic API Director includes a graphical policy editor with which the publisher user can configure the policies for the request and response flows of API calls. This policy framework facilitates most of the key functionality, such as SOAP-to-REST (text or JSON) and most other transformations, security, routing, caching, throttling, etc. It is also written in an easily extensible manner, using an annotation-driven user interface generation unit, making it very easy to plug new functionality into the graphical policy editor.

The publisher defines a set of Tiers that can be associated with published APIs, and the consumer uses the associated tiers to subscribe to an API via an entity called an Application, which is created by the consumer.

API visibility for consumers is driven by something called a Circle, which is similar to Google Circles but in the domain of APIs.

The API publisher can select a set of circles to which an API is published at API creation time. All consumers belong to a set of circles, and the APIs published to a given circle can only be seen by the consumers of that circle, in the consumer-view API Store.

We have taken extra effort to make sure the product by default provides sensible statistics about the APIs to the publisher and about usage to the consumers. The statistics are driven by ElasticSearch, and the graphs are rendered using Google Graphs.
API Director is a commercial product, unlike UltraESB, which is open source; however, the API Director's message processing engine is driven by the UltraESB, providing the performance for HTTP/S transport traffic. You may evaluate the consumer view via the hosted API Director Preview, or contact AdroitLogic for more information and an evaluation of the product.

Justin Mason: Links for 2015-03-10

Tue, 2015-03-10 18:58
  • Epsilon Interactive breach the Fukushima of the Email Industry (CAUCE)

Upon gaining access to an ESP, the criminals then steal subscriber data (PII such as names, addresses, telephone numbers and email addresses, and in one case, Vehicle Identification Numbers). They then use ESPs’ mailing facility to send spam; to monetize their illicit acquisition, the criminals have spammed ads for fake Adobe Acrobat and Skype software. On March 30, the Epsilon Interactive division of Alliance Data Marketing (ADS on NASDAQ) suffered a massive breach that upped the ante, substantially. Email lists of at least eight financial institutions were stolen. Thus far, puzzlingly, Epsilon has refused to release the names of compromised clients. [...] The obvious issue at hand is the ability of the thieves to now undertake targeted spear-phishing, a problem as critically serious as it could possibly be.

    (tags: cauce epsilon-interactive esp email pii data-protection spear-phishing phishing identity-theft security ads)

  • In Ukraine, Tomorrow’s Drone War Is Alive Today

    Drones, hackerspaces and crowdfunding:

The most sophisticated UAV that has come out of the Ukrainian side since the start of the conflict is called the PD-1 from developer Igor Korolenko. It has a wingspan of nearly 10 feet, a five-hour flight time, carries electro-optical and infrared sensors as well as a video camera that broadcasts on a 128 bit encrypted channel. Its most important feature is the autopilot software that allows the drone to return home in the event that the global positioning system link is jammed or lost. Drone-based intelligence gathering is often depicted as risk-free compared to manned aircraft or human intelligence gathering, but, says Korolenko, if the drone isn’t secure or the signature is too obvious, the human costs can be very, very high. “Russian military sometimes track locations of ground control stations,” he wrote Defense One in an email. “Therefore UAV squads have to follow certain security measures – to relocate frequently, to move out antennas and work from shelter, etc. As far as I know, two members of UAV squads were killed from mortar attacks after [their] positions were tracked by Russian electronic warfare equipment.” (via bldgblog)

    (tags: via:bldgblog war drones uav future ukraine russia tech aircraft pd-1 crowdfunding)

  • Javascript Acid Machine

    a 303 and an 808 in your browser. this is deadly

    (tags: acid 303 music javascript hacks via:hn techno)


Justin Mason: Links for 2015-03-09

Mon, 2015-03-09 18:58
  • Ubuntu To Officially Switch To systemd Next Monday – Slashdot

    Jesus. This is going to be the biggest shitfest in the history of Linux…

    (tags: linux slashdot ubuntu systemd init unix ops)

  • uselessd

    A project to reduce systemd to a base initd, process supervisor and transactional dependency system, while minimizing intrusiveness and isolationism. Basically, it’s systemd with the superfluous stuff cut out, a (relatively) coherent idea of what it wants to be, support for non-glibc platforms and an approach that aims to minimize complicated design. uselessd is still in its early stages and it is not recommended for regular use or system integration. This may be the best option to evade the horrors of systemd.

    (tags: init linux systemd unix ops uselessd)

  • Japan’s Robot Dogs Get Funerals as Sony Looks Away

    in July 2014, [Sony's] repairs [of Aibo robot dogs] stopped and owners were left to look elsewhere for help. The Sony stiff has led not only to the formation of support groups–where Aibo enthusiasts can share tips and help each other with repairs–but has fed the bionic pet vet industry. “The people who have them feel their presence and personality,” Nobuyuki Narimatsu, director of A-Fun, a repair company for robot dogs, told AFP. “So we think that somehow, they really have souls.” While concerted repair efforts have kept many an Aibo alive, a shortage of spare parts means that some of their lives have come to an end.

    (tags: sony aibo robots japan dogs pets weird future badiotday iot gadgets)

  • “Cuckoo Filter: Practically Better Than Bloom”

    ‘We propose a new data structure called the cuckoo filter that can replace Bloom filters for approximate set membership tests. Cuckoo filters support adding and removing items dynamically while achieving even higher performance than Bloom filters. For applications that store many items and target moderately low false positive rates, cuckoo filters have lower space overhead than space-optimized Bloom filters. Our experimental results also show that cuckoo filters outperform previous data structures that extend Bloom filters to support deletions substantially in both time and space.’

    (tags: algorithms paper bloom-filters cuckoo-filters cuckoo-hashing data-structures false-positives big-data probabilistic hashing set-membership approximation)

  • Amazing cutting from Vanity Fair, 1896, for International Women’s Day

    “The sisters make a pretty picture on the platform ; but it is not women of their type who need to assert themselves over Man. However, it amuses them–and others ; and I doubt if the tyrant has much to fear from their little arrows.” Constance Markievicz was one of those sisters, and the other was Eva Gore-Booth.

    (tags: markievicz history ireland sligo vanity-fair 19th-century dismissal sexism iwd women)

  • Anatomy of a Hack

Authy doesn’t come off well here: ‘Authy should have been harder to break. It’s an app, like Authenticator, and it never left Davis’ phone. But Eve simply reset the app on her phone using an address and a new confirmation code, again sent by a voice call. A few minutes after 3AM, the Authy account moved under Eve’s control.’

    (tags: authy security hacking mfa authentication google apps exploits)

  • Ask the Decoder: Did I sign up for a global sleep study?

    How meaningful is this corporate data science, anyway? Given the tech-savvy people in the Bay Area, Jawbone likely had a very dense sample of Jawbone wearers to draw from for its Napa earthquake analysis. That allowed it to look at proximity to the epicenter of the earthquake from location information. Jawbone boasts its sample population of roughly “1 million Up wearers who track their sleep using Up by Jawbone.” But when looking into patterns county by county in the U.S., Jawbone states, it takes certain statistical liberties to show granularity while accounting for places where there may not be many Jawbone users. So while Jawbone data can show us interesting things about sleep patterns across a very large population, we have to remember how selective that population is. Jawbone wearers are people who can afford a $129 wearable fitness gadget and the smartphone or computer to interact with the output from the device. Jawbone is sharing what it learns with the public, but think of all the public health interests or other third parties that might be interested in other research questions from a large scale data set. Yet this data is not collected with scientific processes and controls and is not treated with the rigor and scrutiny that a scientific study requires. Jawbone and other fitness trackers don’t give us the option to use their devices while opting out of contributing to the anonymous data sets they publish. Maybe that ought to change.

    (tags: jawbone privacy data-protection anonymization aggregation data medicine health earthquakes statistics iot wearables)

  • Pinterest’s highly-available configuration service

    Stored on S3, update notifications pushed to clients via Zookeeper

    (tags: s3 zookeeper ha pinterest config storage)

  • A Journey into Microservices | Hailo Tech Blog

    Excellent three-parter from Hailo, describing their RabbitMQ+Go-based microservices architecture. Very impressive!

    (tags: hailo go microservices rabbitmq amqp architecture blogs)

  • soundcloud/lhm

    The Large Hadron Migrator is a tool to perform live database migrations in a Rails app without locking.

    The basic idea is to perform the migration online while the system is live, without locking the table. In contrast to OAK and the facebook tool, we only use a copy table and triggers. The Large Hadron is a test driven Ruby solution which can easily be dropped into an ActiveRecord or DataMapper migration. It presumes a single auto incremented numerical primary key called id as per the Rails convention. Unlike the twitter solution, it does not require the presence of an indexed updated_at column.

    (tags: migrations database sql ops mysql rails ruby lhm soundcloud activerecord)

  • Biased Locking in HotSpot (David Dice’s Weblog)

    This is pretty nuts. If biased locking in the HotSpot JVM is causing performance issues, it can be turned off:

    You can avoid biased locking on a per-object basis by calling System.identityHashCode(o). If the object is already biased, assigning an identity hashCode will result in revocation, otherwise, the assignment of a hashCode() will make the object ineligible for subsequent biased locking.

    (tags: hashcode jvm java biased-locking locking mutex synchronization locks performance)


Ortwin Glück: [Code] Do not use Xerces, Xalan any more!

Mon, 2015-03-09 05:55
For years Java developers were used to requiring Apache Xerces and Xalan for their XML stuff. But those times are over. All this stuff is no longer necessary in modern JDKs (since at least 1.6, basically), and it does more harm than good to have these jars in your classpath or endorsed libs!

The problem is that the Internet is littered with references to Xerces, and even their website is totally misleading. If you read that stuff you would believe that this is actually useful code. But it isn't any more!

The first sentence on the website is "Welcome to the future!". But that's from at least 2010! The classes in xml-apis.jar are from 2009! All the code is built for JDK 1.3! Come on! Anybody who is still running 1.3 already has a lot of problems with the memory model / thread synchronization, let alone security issues! xml-apis contains ancient versions of JDK standard classes. For instance, its XMLStreamException doesn't even initialize the exception's cause field, getting you nothing but incomplete stack traces.

I really wonder why this project can't get around to admitting that it is effectively dead. They should really put a big fat warning on their website:


But no! All that 20-year-old documentation still tells people to put that crap into the endorsed folder of the JDK, where it will turn a perfectly good XML infrastructure into a jurassic version of itself.

These people don't realize how much worldwide developer time they are wasting by keeping this outdated website up!

Please also note that Xerces has security issues when exposed unconfigured to the Internet (web services!), whereas the JDK's built-in JAXP has sane defaults.

If your project still uses Xerces, you should migrate away from it now. Chances are that you hard-coded some stuff on top of Xerces (like validation, feature strings, parser options, etc.). Use the modern JAXP equivalents!
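As a quick sketch of the JAXP equivalent using only the JDK's built-in parser (the class name and sample XML are mine, and the hardening features shown are commonly recommended defaults rather than anything from this post):

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.XMLConstants;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class JaxpExample {
    // Parse XML with the JDK's built-in JAXP parser; no Xerces jars required.
    static String rootName(String xml) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        // Sane defaults: enable secure processing and forbid DOCTYPE declarations
        // to guard against XXE and entity-expansion attacks.
        dbf.setFeature(XMLConstants.FEATURE_SECURE_PROCESSING, true);
        dbf.setFeature("http://apache.org/xml/features/disallow-doctype-decl", true);
        Document doc = dbf.newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        return doc.getDocumentElement().getTagName();
    }

    public static void main(String[] args) throws Exception {
        // prints: config
        System.out.println(rootName("<config><item/></config>"));
    }
}
```

No endorsed folder, no extra jars: everything above ships with the JDK.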

Tom White: Tennis Ball Parabola

Sun, 2015-03-08 16:28

Here's an image of me throwing a tennis ball to Lottie:

Millie filmed the video and edited it down to a shorter segment. I turned the resulting video frames into a series of JPEGs by running:

ffmpeg -i Tennis\ Ball.mp4 tennis-%03d.jpeg

Then I composed them into a single image using ImageMagick:

convert -compose lighten tennis-014.jpeg tennis-015.jpeg \
-composite tennis-016.jpeg \
-composite tennis-017.jpeg \
...
-composite tennis-043.jpeg \
-composite result.jpeg

Millie then used Desmos (an online graphing editor) to superimpose a parabola on the image.

Update: Dima Spivak suggested I use the picture to estimate g, the acceleration due to gravity.
  • My head measures 0.22 m (chin to crown), and is 49 pixels on the picture.
  • The vertical distance, d, from the highest ball to the ball above Lottie's hands is 204 pixels, or 0.916 m.
  • The time, t, it took to travel this distance was between 12 and 13 frames (it's hard to say more precisely than this from the picture), which at 29.97 frames per second is between 0.4 and 0.434 seconds.
The acceleration is 2d/t², which works out at between 9.7 and 11.4 m/s². This range contains the accepted value of g, which is 9.8 m/s².
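The arithmetic above can be checked with a few lines of plain Java (the numbers are the ones from the bullet points):

```java
public class GravityEstimate {
    public static void main(String[] args) {
        double scale = 0.22 / 49;              // metres per pixel, from the head measurement
        double d = 204 * scale;                // vertical distance, ~0.916 m
        double fps = 29.97;                    // video frame rate
        double tFast = 12 / fps;               // ~0.400 s if the drop took 12 frames
        double tSlow = 13 / fps;               // ~0.434 s if it took 13 frames
        double gHigh = 2 * d / (tFast * tFast);
        double gLow  = 2 * d / (tSlow * tSlow);
        System.out.printf("g is between %.1f and %.1f m/s^2%n", gLow, gHigh);
        // prints: g is between 9.7 and 11.4 m/s^2
    }
}
```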


Community Over Code: Three key elements defining any open source project

Sun, 2015-03-08 13:59

Open source has come a long way in the past 30 years and is entering the consciousness of most modern cultures. When thinking of open source projects, people categorize them in several ways: governance structure, type of product platform, programming language, utility, other technical details, whether industry-sponsored or fully independent, and more.

But what truly defines any open source project, making it a unique entity different from all other open source projects? I would propose that there are three key elements of any open source project that frame, define, and differentiate that project from all others: the code, the community, and the brand.

The code

Code is king. Code is what makes a product do something, and that’s why open source projects exist in the first place: to build something useful. Technologists get excited about what the code does, and how it does what it does. Marketers get excited about how the product will solve their customers’ problems. Code is what most people are looking for when they’re searching for an open source project to use.

Sounds simple enough—so why don’t we define an open source project purely based on its code? As anyone who has worked in software development already knows, code is ever-changing and ephemeral. In the open source frontier, free from the traditional controls of corporate-led projects, “the code” can get very hard to follow: open source code is infinitely forkable. Once your code is checked in under an Open Source Initiative (OSI) license to a public repository, it is fully accessible to anyone and everyone to take—and modify—for their own purposes. Once another user forks the code from your project and makes a slight modification, it is no longer officially part of your original project.

The community

If code is the “what” of a project, then the community is the “who” of the project—the people who make it all happen. The core community of a project includes anyone actively involved in moving the project forward, such as the engineers writing the code and the end users who provide feedback or request specific modifications. The overall community also includes people who don’t check code, but provide support such as governance/process oversight, public relations/marketing, training, or financial or employment support. The social norms, the etiquette, and the ethos of the community help to differentiate a project from all others.

While participation in an open source project may be part of some individuals’ paid employment (e.g. a corporate-employed software engineer assigned to work on an open source project for a certain percentage of her or his time), most open source community members participate voluntarily with no direct connection to their paycheck. So members tend to come and go as their interest or their other commitments wax and wane, or as their employer changes strategy. Like the code, the community is ever-changing.

Unlike a corporate software development project that can plan on having employees with certain skills available to assign to specific work, participation in an open source community is unpredictable and often beyond the control of the project. Personality conflicts arise and may result in highly skilled contributors leaving the community more readily than they would leave paid employment. But the benefits of an open community can be seen in the enthusiasm and drive of many community members, in the longevity of successful project communities, and in the synchronicity and great forward progress of work on the code.

The brand

Brand is how the world outside of an open source project learns about that project. When individuals or companies are deciding which project to use or to invest in, branding helps them differentiate projects that offer similar functionality. Of course they will consider other details, but it is much easier to think, “Do I want to support Hadoop, with the yellow elephant?” rather than “Do I want to support Cloudera’s CDH or the Hortonworks Data Platform, or the newly announced ODP?”

“The brand” includes many things: the official name of the project, a logo for the project or product, and even the appearance of the project’s website and your product’s UI. Some branding components in particular are legal trademarks: these typically include the official software product name and logo, although trademarks are strongest when used consistently.

Unlike code and community, a project’s brand is not ever-changing or ephemeral. A trademark cannot be forked without legal permission, and a project brand can stay consistent even when community membership fluctuates. In many ways, the brand and trademarks are the element of a project that can be most easily controlled and maintained. However, proper trademark usage can be overlooked—or underappreciated by the community inside of the project—as an important tool for defining a project’s unique character. Given that anyone can fork the code, and that community members come and go, a project’s brand and trademarks are crucial elements for maintaining a project’s longevity and independence, and to continue to draw in new project contributors.

This post also appears in the Apache Quill column coordinated by Jason Hibbets.

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-03-07

Sat, 2015-03-07 18:58
  • A Zero-Administration Amazon Redshift Database Loader – AWS Big Data Blog


    (tags: lambda amazon aws redshift etl)

  • Archie Markup Language (ArchieML)

    ArchieML (or “AML”) was created at The New York Times to make it easier to write and edit structured text on deadline that could be rendered in web pages, or more specifically, rendered in interactive graphics. One of the main goals was to make it easy to tag text as data, without having type a lot of special characters. Another goal was to allow the document to contain lots of notes and draft text that would not be read into the data. And finally, because we make extensive use of Google Documents’s concurrent-editing features — while working on a graphic, we can have several reporters, editors and developers all pouring information into a single document — we wanted to have a format that could survive being edited by users who may never have seen ArchieML or any other markup language at all before.

    (tags: aml archie markup text nytimes archieml writing)

  • California Says Motorcycle Lane-Splitting Is Hella Safe

    A recent yearlong study by the California Office of Traffic Safety has found motorcycle lane-splitting to be a safe practice on public roads. The study looked at collisions involving 7836 motorcyclists reported by 80 police departments between August 2012 and August 2013. “What we learned is, if you lane-split in a safe or prudent manner, it is no more dangerous than motorcycling in any other circumstance,” state spokesman Chris Cochran told the Sacramento Bee. “If you are speeding or have a wide speed differential (with other traffic), that is where the fatalities came about.”

    (tags: lane-splitting cycling motorcycling bikes road-safety driving safety california)

  • Try Server

    Good terminology for this concept:

    The try server runs a similar configuration to the continuous integration server, except that it is triggered not on commits but on “try job request”, in order to test code pre-commit. See also for the Moz take on it.

    (tags: build ci integration try-server jenkins buildbot chromium development)

  • metrics-sql

    A Dropwizard Metrics extension to instrument JDBC resources and measure SQL execution times.

    (tags: metrics sql jdbc instrumentation dropwizard)

  • HP is trying to patent Continuous Delivery

    This is appalling bollocks from HP:

    On 1st March 2015 I discovered that in 2012 HP had filed a patent (WO2014027990) with the USPO for ‘Performance tests in a continuous deployment pipeline‘ (the patent was granted in 2014). [....] HP has filed several patents covering standard Continuous Delivery (CD) practices. You can help to have these patents revoked by providing ‘prior art’ examples on Stack Exchange. In fairness, though, this kind of shit happens in most big tech companies. This is what happens when you have a broken software patenting system, with big rewards for companies who obtain shitty troll patents like these, and in turn have companies who reward the engineers who sell themselves out to write up concepts which they know have prior art. Software patents are broken by design!

    (tags: cd devops hp continuous-deployment testing deployment performance patents swpats prior-art)

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-03-05

Thu, 2015-03-05 18:58
Categories: FLOSS Project Planets

Adrian Sutton: New Favourite Little Java 8 Feature

Thu, 2015-03-05 17:03

Pre-Java 8:

ThreadLocal<Foo> foo = new ThreadLocal<Foo>() {
    @Override
    protected Foo initialValue() {
        return new Foo();
    }
};

Post-Java 8:

ThreadLocal<Foo> foo = ThreadLocal.withInitial(Foo::new);
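A quick sanity check that the new form behaves as expected: the supplier runs lazily, once per thread, on the first get(). (The Foo type here is just a stand-in.)

```java
public class ThreadLocalDemo {
    static class Foo {
    }

    // Java 8 form: Foo::new runs lazily, once per thread, on first get().
    static final ThreadLocal<Foo> FOO = ThreadLocal.withInitial(Foo::new);

    public static void main(String[] args) throws InterruptedException {
        Foo mainFoo = FOO.get();
        Thread t = new Thread(() -> {
            // A different thread lazily gets its own instance.
            System.out.println(FOO.get() == mainFoo); // prints false
        });
        t.start();
        t.join();
        // The same thread keeps getting the same instance.
        System.out.println(FOO.get() == mainFoo); // prints true
    }
}
```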


Categories: FLOSS Project Planets

Shawn McKinney: The Seven Steps of Role Engineering

Thu, 2015-03-05 16:03
Defined

Role Engineering is the process by which an organization develops, defines, enforces, and maintains role-based access control. RBAC is often seen as a way to improve security controls for access and authorization, as well as to enforce access policies such as segregation of duties (SoD) to meet regulatory compliance.

Enterprise Role Definition: Best Practices and Approach – Oracle Corporation (Blogs), Nov 5, 2014

Introduction

The role engineering process takes us from a human-readable policy syntax to what is loaded into an RBAC engine and used by a given application or system.

Here we follow the ANSI RBAC (INCITS 359) spec, which prescribes methods like createSession and checkAccess.

This tutorial demonstrates role engineering techniques using standard RBAC, as applied to a sample set of security use cases.

1. Define Security Use Cases for the Application

sample use case:

First we’ll describe the policies in a way humans can understand by writing them down as use cases translated from the diagram.

Use case definitions: [ legend: red – roles | blue – objects | green – operations ]

use case #1 : User must authenticate before landing on the home page.

Every set of security use cases necessarily contains a logon step.  With RBAC, this implies a call to the createSession API, where credentials are checked and roles activated.

use case #2 : User must be a Buyer before placing bid on auction or buying an item.

A role named ‘Buyers’ is granted rights to place bids and purchase auction items.  This use case requires a call to the checkAccess API, which we’ll discuss later.

use case #3 : User must be a Seller to create an auction or ship an item purchased.

We add role ‘Sellers’, and grant the authority to create an auction and ship an item that has just been sold.

use case #4 : All Users may create an account and search items.

Another role, named ‘Users’, is granted the rights to create a new account and search a list of items up for auction.  It will be a base role that Buyers and Sellers both extend.

use case #5 : A particular user may be a Buyer, or a Seller, but never both simultaneously.

An interesting scenario to consider.  Perhaps a conflict of interest arises from how the two roles collide.  Not to worry, document and move on.

The dynamic separation of duty requirement wasn’t depicted inside our sample use case diagram.  We add it arbitrarily to illustrate how SoD may be incorporated into a role engineering process.

 2. Define the Roles in the App

The use cases must now be converted into entity lists.  Though still human readable, the new data formats are aligned with RBAC.

Create a list of Role names.  Use names pulled from use cases 2, 3, and 4.

Add Roles:
  • role name: “Users” description: “Basic rights for all buyers and sellers”
  • role name: “Buyers” description: “May bid on and purchase items”
  • role name: “Sellers” description: “May setup auctions and ship items”

Buyers and Sellers inherit from Users as described in use case #4.

Add Role Inheritance Relationships

  • child name: “Buyers” parent name: “Users”
  • child name: “Sellers” parent name: “Users”

Don’t forget the role combination rule in use case #5:

Role Activation Constraint

  • role name: “Buyers”
  • role name: “Sellers”
  • type: “DYNAMIC”
  • cardinality: “2”
 3. Define the Resources and Operations

RBAC Permissions are mappings between objects and operations.  Here we list the perms described in use cases 2, 3 and 4.

Add Permissions:
  • object name: “Item” description: “This is a product that is available for purchase”
    • operation name: “search” description: “Search through list of items”
    • operation name: “bid” description: “Place a bid on a given product”
    • operation name: “buy” description: “Purchase a given product”
    • operation name: “ship” description: “Ships a given product after sale”
  • object name: “Auction” description: “Controls a particular online auction”
    • operation name: “create” description: “May start a new auction”
  • object name: “Account” description: “Each user must have one of these”
    • operation name: “create” description: “Ability to setup a new account”
 4. Map the Roles to the Resources and Operations

Get the list of grants.  Again, these mappings naturally derive from the use cases.

Grant Permissions to Roles:
  • role: “Buyers” object name: “Item” oper: “bid”
  • role: “Buyers” object name: “Item” oper: “buy”
  • role: “Sellers” object name: “Item” oper: “ship”
  • role: “Sellers” object name: “Auction” oper: “create”
  • role: “Users” object name: “Item” oper: “search”
  • role: “Users” object name: “Account” oper: “create”
 5. Create the RBAC policy load file

Hand the entity lists created in steps 2, 3 and 4 to the analyst, trained in the particulars of the RBAC engine.  That info will be hand-keyed into a graphical interface screen, or (better yet) translated to a machine-readable syntax and loaded automatically by a recurrent batch job.

RoleEngineeringSample.xml – download from fortress git

...
<addrole>
    <role name="Users" description="Basic rights for all Buyers and Sellers"/>
    <role name="Buyers" description="May bid on and purchase products"/>
    <role name="Sellers" description="May start auctions and ship items"/>
</addrole>
<addroleinheritance>
    <relationship child="Buyers" parent="Users"/>
    <relationship child="Sellers" parent="Users"/>
</addroleinheritance>
<addsdset>
    <sdset name="BuySel" setmembers="Buyers,Sellers" cardinality="2" setType="DYNAMIC" description="activate only one role of this set"/>
</addsdset>
<addpermobj>
    <permobj objName="Item" description="product for purchase" ou="p1"/>
    <permobj objName="Auction" description="Controls online auction" ou="p1"/>
    <permobj objName="Account" description="Each user must have one" ou="p1"/>
</addpermobj>
<addpermop>
    <permop objName="Item" opName="bid" description="Bid on a given product"/>
    <permop objName="Item" opName="buy" description="Purchase a given product"/>
    <permop objName="Item" opName="ship" description="Place a product up for sale"/>
    <permop objName="Item" opName="search" description="Search through item list"/>
    <permop objName="Auction" opName="create" description="May start a new auction"/>
    <permop objName="Account" opName="create" description="Ability to add account"/>
</addpermop>
<addpermgrant>
    <permgrant roleNm="Buyers" objName="Item" opName="bid"/>
    <permgrant roleNm="Buyers" objName="Item" opName="buy"/>
    <permgrant roleNm="Sellers" objName="Item" opName="ship"/>
    <permgrant roleNm="Sellers" objName="Auction" opName="create"/>
    <permgrant roleNm="Users" objName="Item" opName="search"/>
    <permgrant roleNm="Users" objName="Account" opName="create"/>
</addpermgrant>
<addorgunit>
    <orgunit name="u1" typeName="USER" description="Test User Org"/>
    <orgunit name="p1" typeName="PERM" description="Test Perm Org"/>
</addorgunit>
...

At the completion of this step, the RBAC policy is loaded into the engine and ready for use.

 6. Add Permission Checks to Application

It’s time to get the developers involved because there will be code that looks (something) like this:

RbacSession session = getSession();
Permission permission = new Permission( "Item", "bid" );
return accessMgr.checkAccess( session, permission );

How to instrument the checks is up to you.  Obviously you won’t hard code the perms.  Observe proper isolation to decouple your app from the details of the underlying security system.  Favor declarative checks over programmatic.  The usual concerns.
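One way to honor that isolation is to hide the engine behind a small application-facing interface.  The sketch below is hypothetical: the stub types merely stand in for the real Fortress classes, and only the checkAccess call pattern comes from the snippet above.

```java
// Stand-ins for the Fortress types used above -- hypothetical signatures,
// sketched only to show the decoupling idea, not the actual API.
class Permission {
    final String objName;
    final String opName;
    Permission(String objName, String opName) {
        this.objName = objName;
        this.opName = opName;
    }
}

interface AccessMgr {
    boolean checkAccess(Object session, Permission permission) throws Exception;
}

// The application-facing facade: callers never see the RBAC engine directly.
interface AuthorizationService {
    boolean canPerform(String objectName, String operationName);
}

class RbacAuthorizationService implements AuthorizationService {
    private final AccessMgr accessMgr;
    private final Object session;

    RbacAuthorizationService(AccessMgr accessMgr, Object session) {
        this.accessMgr = accessMgr;
        this.session = session;
    }

    @Override
    public boolean canPerform(String objectName, String operationName) {
        try {
            return accessMgr.checkAccess(session, new Permission(objectName, operationName));
        } catch (Exception e) {
            return false; // fail closed if the engine errors out
        }
    }
}

public class AuthorizationDemo {
    public static void main(String[] args) {
        // Fake engine standing in for Fortress: grants only the Buyers perms.
        AccessMgr fakeEngine = (session, p) ->
                p.objName.equals("Item") && (p.opName.equals("bid") || p.opName.equals("buy"));
        AuthorizationService auth = new RbacAuthorizationService(fakeEngine, new Object());
        System.out.println(auth.canPerform("Item", "bid"));       // true
        System.out.println(auth.canPerform("Auction", "create")); // false
    }
}
```

Swapping in a different security system later then only means writing a new AuthorizationService implementation.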

A previous post offers ideas, if you’re willing to get your hands dirty :-)

Apache Fortress End-to-End Security Tutorial

 7. Assign Roles to the Users

Before users may access the app, they must be assigned one or more of the corresponding roles.  These relationships may be recorded into the RBAC datastore with a batch script (as below), but are oftentimes performed via a GUI, e.g. during application enrollment.

Sample user to role assignment policy file

...
<adduserrole>
    <userrole userId="johndoe" name="Buyers"/>
    <userrole userId="johndoe" name="Sellers"/>
    <userrole userId="ssmith" name="Buyers"/>
    <userrole userId="rtaylor" name="Sellers"/>
</adduserrole>
...

The following diagram depicts the role to role and user to role relationships defined by this policy.

role-role and user-role relationships

RBAC Extra Credit

A. What happens when johndoe signs on?

(Will both roles be activated into the RBAC session?)

No, one of the roles must be discarded due to SoD constraint.

begin johndoe logon trace using fortress console

Enter userId: johndoe
name :Sellers
msg :validate userId [johndoe] failed activation of assignedRole [Sellers] validates DSD Set Name:BuySel Cardinality:2

RBAC allows the caller to select one or more roles during createSession.  So we may call with johndoe passing either of their assigned roles in the argument list.  This process is called selective role activation.  Thus John Doe may be a Buyer during one session and Seller the next, never both at once.

more on createSession:

Session createSession(User user, boolean isTrusted) throws SecurityException

The following attributes may be set when calling this method.

B. What if we want to prevent someone from being assigned both roles?

Use Static Separation of Duties instead:

...
<addsdset>
    <sdset name="BuySel2" setmembers="Buyers,Sellers" cardinality="2" setType="STATIC" description="assign only one role of this set"/>
</addsdset>
...

What happens now when we try to assign a user both roles?

begin assignUser trace in fortress console

Enter userId: janedoe
Enter role name: sellers
ERROR: assignUser failed SSD check, rc=5088 Role [sellers], SSD Set Name [buysel2], Cardinality:2

Remember, constraints applied at role…
  • assignment time: are Static Separation of Duties (SSD)
  • activation time: are Dynamic Separation of Duties (DSD)
C. What perms will a Buyer get?

When a user activates the Buyers role what perms will they have?

begin ssmith perm trace using fortress console

**perm:1*** object name [Account] operation name [create]
**perm:2*** object name [Item] operation name [search]
**perm:3*** object name [Item] operation name [buy]
**perm:4*** object name [Item] operation name [bid]

D. What about a Seller?

When a user activates the Sellers role what perms will they have?

begin rtaylor perm trace using fortress console

**perm:1*** object name [Account] operation name [create]
**perm:2*** object name [Item] operation name [search]
**perm:3*** object name [Auction] operation name [create]
**perm:4*** object name [Item] operation name [ship]

E. Why do Buyers and Sellers have common perms?

Notice that Buyers and Sellers both have access rights to Account.create and Item.search.  The reason is both extend a common base role: Users.

F. The above use case is overly simplistic.  Can we do something more realistic?

The Apache Fortress End-to-End Security Tutorial delves into fine-grained access control of data tied to customers.  It demonstrates how to implement a complicated set of requirements, verified with automated selenium test cases.

User to Role Relations in Tutorial

user to role relations in security tutorial

Role to Permission Relations in Tutorial

role to permission relations in security tutorial

Categories: FLOSS Project Planets

Stefan Bodewig: Maven Coverall Plugin Doesn't Like Buildnumber Plugin

Thu, 2015-03-05 05:05

I wanted to track test coverage for XMLUnit for Java using Coveralls (could be better and I should do the same for XMLUnit.NET, but these are different stories).

In theory this is pretty easy, given that there already is a Travis CI build. I added the Jacoco and Coveralls plugins in a Maven profile and made Travis run the profile as an after_success script. But instead of publishing data, I was greeted by

[ERROR] Failed to execute goal org.eluder.coveralls:coveralls-maven-plugin:3.0.1:report (default-cli) on project xmlunit-parent: Unable to parse configuration of mojo org.eluder.coveralls:coveralls-maven-plugin:3.0.1:report for parameter timestamp: Cannot convert '1425416381951' to Date -> [Help 1]

Well, I never configured a timestamp for Coveralls, and I was pretty sure I wasn't setting a timestamp property, either. Searching around didn't lead to anything, and there was no environment variable configured. Strange. Finally it dawned on me that the buildnumber plugin, which I used in order to get the git hash into the jar manifests, also sets a property named timestamp as a side-effect. So far I hadn't been using it for anything but the manifest, so I had ignored said property. With that

<plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>buildnumber-maven-plugin</artifactId>
    <version>1.3</version>
    <configuration>
        ...
        <timestampFormat>{0,date,yyyy-MM-dd HH:mm:ssa}</timestampFormat>
        ...
    </configuration>
</plugin>

finally did the trick.

Categories: FLOSS Project Planets

Bryan Pendleton: Sorry, but now we know: you're a dog.

Wed, 2015-03-04 21:51

Twenty years ago, the New Yorker published Peter Steiner's wonderful send-up of the Internet, perhaps the greatest Internet cartoon ever written: On the Internet, nobody knows you're a dog

Well, twenty years have passed.

In his talk at this year's GDC, Raph Koster picks up the story, and Gamasutra's Leigh Alexander summarizes it for us

We now live in an age where the internet filters results for you based on assumptions about what you're like drawn from geographic location or other patterns. This creates a phenomenon called a "filter bubble," says Koster, where increasingly one's perception of the world is led by online targeting. Your average online user will increasingly see only those news sources and even political candidates that agree with their own views -- and anyone who's ever Facebook-blocked a relative with offensive political views has become complicit in this sort of filtering.


Without clear lines and well-bounded communities, people can become confused in a way that leads to conflict. For example, with Kickstarter and Early Access games, users become hostile and controversies arise because the distinction between a "customer" and a "funder", a creator and a user, is indistinct; confusion about different types of roles and relationships within the game industry gave rise to last year's "controversy that shall not be named."

When you look at modern communities, from Twitter and Reddit to Facebook and chan boards, all the best practices -- keeping identity persistent, having a meaningful barrier to entry, specific roles, groups well-bordered and full anonymity at least somewhat difficult -- have been thrown out the window, giving rise to toxicity online, the veterans say.

As Koster notes on his blog, the discussion continues across the gaming community.

But it's not just gaming. For that, we turn to the always-fascinating, if complex, Don Marti, with his latest note: Digital dimes in St. Louis.

Don points to Jason Kint's recent article: Unbridled Tracking and Ad Blocking: Connect the Dots.

As part of my presentation, I shared a couple of charts that caused a bit of a stir among the audience of media execs charged with leading their organizations' digital media ad sales businesses. The fact that these particular slides triggered such a reaction struck me as particularly timely because later that day the White House released its proposal for a Consumer Privacy Bill of Rights, which would require companies to clearly communicate their data practices to consumers and give them more control over what information is collected and how it is used.

Don wonders what all this tracking technology is destroying:

So how do we keep the local papers, the people who are doing the hard nation-protecting work of Journalism, going?

Where is all this rooted? Koster suggests that it's that gruesome and awful creation of technology, the filter bubble that Eli Pariser identified some five years ago:

For example, on Google, most people assume that if you search for BP, you’ll get one set of results that are the consensus set of results in Google. Actually, that isn’t true anymore. Since Dec. 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google “BP,” one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill. Presumably that was based on the kinds of searches that they had done in the past. If you have Google doing that, and you have Yahoo doing that, and you have Facebook doing that, and you have all of the top sites on the Web customizing themselves to you, then your information environment starts to look very different from anyone else’s. And that’s what I’m calling the “filter bubble”: that personal ecosystem of information that’s been catered by these algorithms to who they think you are.

It's all terribly complex; it's not precisely clear where the slippery slope began, and how we crossed the line.

But the first step to getting out of the hole is to stop digging.

And to do that, you have to have the discussion; you have to know you're in the hole.

Thank you, Messrs Koster and Marti and Kint and Pariser. It's not a happy story to shout to the world, but you must keep telling it.

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-03-04

Wed, 2015-03-04 18:58
Categories: FLOSS Project Planets

The three open source projects that transformed Hadoop

Wed, 2015-03-04 07:00

Initially, Hadoop implementation required skilled teams of engineers and data scientists, making Hadoop too costly and cumbersome for many organizations. Now, thanks to a number of open source projects, big data analytics with Hadoop has become much more affordable and mainstream. Here's a look at how three open source projects—Hive, Spark, and Presto—have transformed the Hadoop ecosystem.

Categories: FLOSS Project Planets

Justin Mason: Links for 2015-03-03

Tue, 2015-03-03 18:58
Categories: FLOSS Project Planets