Planet Apache

Sebastien Goasguen: CloudStack simulator on Docker

Thu, 2014-10-02 05:07

Docker is a lot of fun; one of its strengths is the portability of applications. This gave me the idea to package the CloudStack management server as a Docker image.

CloudStack has a simulator that can fake a data center infrastructure. It can be used to test some of the basic functionalities. We use it to run our integration tests, like the smoke tests on TravisCI. The simulator allows us to configure an advanced or basic networking zone with fake hypervisors.

So I bootstrapped the CloudStack management server, configured the MySQL database with an advanced zone, and created a Docker image with Packer. The resulting image is on Docker Hub, and I realized after the fact that four other great minds had already done something similar :)

On a machine running Docker (the # prompt below denotes commands run as root inside the container):

docker pull runseb/cloudstack
docker run -t -i -p 8080:8080 runseb/cloudstack:0.1.4 /bin/bash
# service mysql restart
# cd /opt/cloudstack
# mvn -pl client jetty:run -Dsimulator

Then point your browser at http://<IP_of_docker_host>:8080/client and enjoy!
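A quick way to verify that the management server is answering before you log in (a hypothetical check, not from the original post; substitute <IP_of_docker_host> for localhost if you are not on the Docker host itself):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/client

An HTTP 200 here means the UI is being served.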

Categories: FLOSS Project Planets

Bryan Pendleton: This is what the Internet was made for

Wed, 2014-10-01 21:06

Really, if you don't thoroughly enjoy Derek Low's wonderful photo-essay: What It's like to Fly the $23,000 Singapore Airlines Suites Class, you're probably just not Right For The Internet.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-01

Wed, 2014-10-01 18:58
Categories: FLOSS Project Planets

Yoav Shapira: #rtw2012 - Copenhagen

Tue, 2014-09-30 22:04
This is one in a series of posts about my recent round-the-world (RTW) trip, all collected under the #rtw2012 label. You may wish to read them in order for context and background. From Stockholm, I took a quick shuttle flight to Copenhagen, the capital of Denmark. It's a quick and easy flight, landing in a silent airport. What a great innovation that is. The quiet is really nice. Given that a
Categories: FLOSS Project Planets

Nick Kew: Sexist flagbearers hypocrisy revealed

Tue, 2014-09-30 17:02

This evening, the BBC broadcast the results of a short story prize.  I heard some of the stories as they broadcast them last week, and they were indeed good.  I missed the broadcast of the winning story, but I daresay it was well-deserving of its award.

Being the BBC, they didn’t just broadcast the stories and the award ceremony.  They also broadcast a lot of discussion: of the award, the shortlisted candidates, the stories, of the short story form, of what works well with the form, authors’ and critics’ anecdotes, etc.

Never once in all that discussion did anyone remark on the fact that it was an all-female shortlist.  Why should they?  There’s nothing remarkable about it: it’s entirely reasonable (and in the long term statistically inevitable) that a fair and impartial shortlist should, from time to time, be all female.

— However —

This is the same BBC who, a couple of years ago, found itself with an all-male shortlist for another award.  I don’t recollect the award itself, just the huge fuss they made of the absence of women on the shortlist: this was a huge misogynistic scandal, unacceptable sexism; how was this allowed to happen?  Do heads need to roll?  This must never be allowed to happen again!

Googling suggests the award in question was probably their “sports personality of the year” (for 2011), which would explain why I had no interest in the award itself and heard only the fuss.  The mindless, blatantly sexist fuss, that is now revealed in the full glory of its hypocrisy by the contrast with today’s very civilised short story award.


Categories: FLOSS Project Planets

Shane Curcuru: It was the strangest day this weekend

Tue, 2014-09-30 16:36

The twisted bad dreams certainly didn’t help. They weren’t the usual nightmares, with the obvious fears and the inability to run; they were really twisted combinations of fears. This made for a fine if slightly unsettled morning, meaning I was late as I was…

Driving down Rt. 2, the forest of left-merging brake lights ahead of me made me question my sanity: could it still be Friday? Traffic is only like this at rush hour, and I was pretty sure today was Saturday… Oh, I see. An accident – one that just happened, judging from how quickly the brake lights appeared, and from the wide swath of ex-car parts littering the roadway, and the dazed look as people got out of their now ex-car.

Are we predestined to wonder about fate at moments like that, thinking that if I hadn’t had bad dreams, I might have been on the highway just 30 seconds earlier? Or is it free will that makes us wonder what parts of life we choose to do versus which parts are done to us?

In any case, the day got a lot better as we took a family drive in the beautiful weather, shopping in Nashua. We browsed all the applicable models, checking out each one – it was quite the surreal showroom with all the models standing side by side, in the back room, and even in a basement room. After serious comparisons back and forth, we finally made a decision to buy right then and there.

I bought a piano!

Now while piano purchases aren’t an everyday occurrence, it may not seem that strange – but, oh, it certainly is! I have almost zero musical ability, and while part of my vague “you’ve won a billion dollars” fantasy includes a piano somewhere in the mansion, I certainly can’t play at all, and never could have imagined actually having a piano. But we have a daughter who is pretty good, and more to the point is motivated to practice with the true piano feel (it’s a Yamaha B3), an improvement over the electric keyboard she’s used for years. Of course, now the question is: where the heck will it fit in our small house (and what will the cats think of it?)

In any case, it was quite the strange lunch out afterwards, treating ourselves to a Bloomin’ Onion, with the handwritten receipt for a somewhat large purchase price for a piano – a piano! – sitting with us.

In any case, bear with me for a moment, and revel in us enjoying the odd feeling of having just bought a piano: a good day to be sure, but a little strange. We arrive home, and start planning where the piano will go, as we also enjoy our mostly clean living room rug. See, my wife treated me, as an early birthday present, to a little robotic vacuum, which we had left to run while we were out.

It will be nice to keep the cat hair down with less effort, although it has only done part of the living room so far, it seems, and…

Uh, where is the vacuum?

No, seriously: where is the robot vacuum that was RIGHT HERE WHEN WE LEFT? You did leave it on, right? Yes, it started right there, and it’s not here – not in the kitchen – door is closed, couldn’t have gone down the stairs…

“I for one, welcome our new robotic overlords.”

P.S. It had gotten stuck under the back of the couch, when an errant sheet of tissue paper got folded up over its sensors. It’s working fine now, and it’s staying right where it’s put.

I think it might be watching me.

Categories: FLOSS Project Planets

Matt Raible: Developing Services with Apache Camel - Part I: The Inspiration

Tue, 2014-09-30 11:06

In early May, my client asked me to work on a project migrating from IBM Message Broker 6.1 to an open source solution. Their reason was simple: the IBM solution was end-of-life and outdated. To prove how out of date it was, the Windows version required Windows XP to run. IBM WebSphere Message Broker has been replaced by IBM Integration Bus in recent years, but no upgrade path existed.

At first, I didn't want to do the project. I was hired as a Modern Java/UI Architect and I had enjoyed my first month upgrading libraries, making recommendations and doing a bit of UI performance work. I hadn't done much with ESBs and I enjoy front-end development a lot more than backend. It took me a couple days to realize they were willing to pay me to learn. That's when I decided to clutch up, learn how to do it all, and get the job done. This article is the first in a series on what I learned during this migration project.

My approach for figuring out how everything worked was similar to working on any new application: get the source code, install the software necessary to run it, and run it locally so I can interact with it.

Installing IBM Message Broker

The hardest part about installing IBM Message Broker 6.1 was getting the bits to do so. I installed Windows XP in a Parallels VM on my Mac, installed Java 7 and started downloading the install files from my client's server. The files consisted of the following, which I used to install the server, WebSphere Message Broker Toolkit (Eclipse-based), and some plugins we were using.

  • IBM_Broker_Install_Disks.zip
  • IBM_plugins.zip
  • IBM_Upgrade_6.1.06.zip
  • MessageBrokerInstallFiles.7z

Transferring these files to my laptop took hours over scp. Installing and getting everything to work correctly took days. Much of this time was spent setting up various ODBC data sources, SMTP servers and figuring out how to run a "message flow test" to verify things were working. Before I started working with my client's project, I read Magnus Palmér's article about testing WMB. It was very helpful and I was able to run his project and its tests against my local server.

Choosing an Open Source Solution

When searching for how one might migrate from IBM Message Broker to an open source solution, I stumbled upon this article:

Use an Open Source Broker
Refer to my report for details on Mule ESB and Fuse ESB. If I had to pick right now, I'd use Fuse, as we already use Apache ActiveMQ and Apache CXF, so why not add another Apache-based product? Fuse also has a much higher install base. Mule seems perfectly acceptable, too, though. I would use the simplified routing scheme described in the section above and deploy either Mule or Fuse onto the same server as PFM. Mule/Fuse do not have dependencies on a queue manager like IBM Message Broker does, so there's less overhead. You can also re-use the XSLs from Message Broker.

I think that if you really understood the IBM Message Broker implementation and were comfortable with Active MQ, SOAP, etc... you could have Mule or Fuse implemented in about 2 weeks.

The last sentence inspired me to take a look at Fuse. I knew James Strachan was behind the project, so I shot him an email in early May:

Hey James,

It's been a while - hope you're doing well. I recently started working for a client that has a ton of legacy frameworks and servers. They basically haven't updated anything for 10 years, since the application was originally created by an outsourced company. They're using Acegi 1.0 for crying out loud!

One of the components they have in their system is IBM Message Broker - or maybe it's called WebSphere ESB - I'm not sure. If I succeed in getting it installed, I'm hoping to somehow install the "services" they have configured, which seem to consist of .bar, .esql and .msgflow files. From what I can tell, these seem to be ESB-related, but they might be proprietary as well. I don't have any experience with ESBs, so I'm pretty much fumbling in the dark.

I found the following information online:

https://github.com/cnaphan/osler-mb/blob/master/README.md#alternative-implementations

Under Alternative Implementations:

<quote>
Use an Open Source Broker
...
I think that if you really understood the IBM Message Broker implementation and were comfortable with Active MQ, SOAP, etc... you could have Mule or Fuse implemented in about 2 weeks.
</quote>

Emphasis of the last sentence is mine. So I'm trying to make this happen and migrate from IBM Message Broker to Fuse. I've downloaded JBoss Fuse 6.1, installed it and got it running.

Do you know of any guides or articles about migrating from IBM Message Broker to Fuse?

Thanks,

Matt

James took a couple days to reply, but when he did it was packed with the advice I was hoping for.

I'll ask around to see. TBH the easiest thing really is to just start using Apache Camel for this kinda stuff; its a simple Java library that you can use in any application server. Then make sure you use the hawtio console (which lets you view/edit/trace/debug camel routes in your browser via a nice HTML5 / AngularJS application). http://hawt.io/

JBoss Fuse 6.1 is cool and all though; its based on OSGi which some folks love and some hate. The class loader / application server doesn't really matter to Camel though; use whatever you want (tomcat / jetty / wildfly / karaf / stand alone java applications etc).

Fuse 6.1 is based on a bunch of apache projects plus an upstream open source project called 'fabric8' which deals with provisioning, management, discovery and so forth (i.e. scaling 1 camel route in 1 JVM to many JVMs etc). http://fabric8.io/

The 1.1.x version of fabric8 supports a "Java Container"; i.e. a static flat classpath. For folks who've not done OSGi before (or folks who hate OSGi) I kinda recommend folks not start using OSGi yet - but just start with camel; otherwise it can seem like too much to learn and the OSGi class loading / metadata stuff can kinda get in the way and slow you down. I did a little demo & blog about using 1.1.0.Beta5 of fabric8 (1.1.x of fabric8 will make it into a future JBoss Fuse product release). http://macstrac.blogspot.co.uk/2014/05/micro-services-with-fabric8.html

TL;DR; - play with Apache Camel, it'll solve easily all your integration needs; its got a much bigger & more active community than Mule and unlike Mule - its all open source; there's no locked down, non-open source connectors/magic/tools or proprietary software included. Once you've got your build & tests working fine; ponder which kinda container you wanna use in production (an app server or micro services) - it doesn't matter too much which one. We're working hard on making all our production/testing management/monitoring/diagnostics/debugging tooling work on all containers/app servers anyway (stand alone Java processes via the Java Container, Docker, OSGI (via Karaf like Fuse 6.1), Tomcat, TomEE and Wildfly).

Though if you need 24x7 production support like now, definitely use the JBoss Fuse 6.1 distro. If you can wait until later in the year for production support, I'd stick with the fabric8 1.1.x stuff for Java Containers; its simpler & more agile for you go get things sorted.

Here's a little demo of the UI tooling btw https://vimeo.com/album/2635012/video/84674508

It's using the Fuse distro in the demo; but really the hawtio console is doing all the heavy lifting & is available in any JVM thats got a jolokia agent running (and you can use the hawtio Chrome Plugin too). It hopefully will give you a feel for the UI tooling working with camel routes; debugging/tracing them, grokking their metrics and stuff.

After reading James' email, I forwarded it to my client with my recommendation that we start with Apache Camel. He agreed it was a good approach and I went to work.

Development Strategy

On May 21st, I subscribed to the Apache Camel users mailing list and posted my first question the next day. My development strategy was the following:

  1. Write an integration test against the existing service.
  2. Implement the service with Camel, writing unit tests as I progressed.
  3. Copy the logic from step 1's integration test and use it for the new service's integration tests.

I created the project with a Camel Archetype and used Java 7 as the minimum JDK. I decided I wanted no XML in the project (aside from pom.xml) and that I'd use Camel's Java DSL to develop my routes.
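For reference, generating such a project looks roughly like this. The archetype coordinates are the standard Camel ones, but the version is unspecified and the group/artifact IDs below are placeholders rather than the actual project's:

mvn archetype:generate \
  -DarchetypeGroupId=org.apache.camel.archetypes \
  -DarchetypeArtifactId=camel-archetype-java \
  -DgroupId=com.example \
  -DartifactId=my-camel-service

The camel-archetype-java archetype produces a standalone project wired for the Java DSL, which fits the no-XML goal.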

Summary

When I first started migrating from IBM Message Broker to an open source solution, I was a bit overwhelmed with the seemingly daunting task. I quickly discovered that Apache Camel was a good replacement and started developing my first route. It took me a couple days to get things working, but I learned a lot in the process - especially around testing. In the next article, I'll talk about how I wrote tests, mocked 3rd party dependencies and configured everything to run in Jenkins. Stay tuned!

Related: I first learned about Apache Camel from Bruce Snyder in 2008 at Colorado Software Summit.

Categories: FLOSS Project Planets

Ian Boston: AppleRAID low level fix

Tue, 2014-09-30 08:29

Anyone who uses AppleRAID will know how often it declares that a perfectly healthy disk is no longer a valid member of a RAID set. What you may not have experienced is when it won't rebuild. For a striped set, the only practical solution is a backup. For a mirror there are some things you can do. Typically, when diskutil or the GUI won't repair, the low-level AppleRAID.kext won't load, or will load and fail, reporting that it can't get a controller object. In the logs you might also see that the RAID set is degraded or just offline. If it's really bad, Disk Utility and diskutil will hang somewhere in the kernel, and you won't be able to get a clean reboot.

Here is one way to fix it:

Unplug the disk subsystem causing the problem.

Reboot; you may have to pull the plug to get it to shut down.

Once up, move the AppleRAID.kext into a safe place, e.g.:

mkdir ~/kext
sudo mv /System/Library/Extensions/AppleRAID.kext ~/kext

Watch the logs to see that kextcache has rebuilt the cache of kernel extensions. You should see something like

30/09/2014 13:21:37.518 com.apple.kextcache[456]: /: helper partitions appear up to date.
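One convenient way to watch for that message (assuming the traditional syslog location on OS X of this era) is:

tail -f /var/log/system.log | grep kextcache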

When you see that, you know that if you plug in the RAID subsystem the kernel won't be able to load AppleRAID.kext, and so you will be able to manipulate the disks.

Plug in the RAID subsystem and check that it didn't load the kernel extension:

kextstat | grep AppleRAID

You will now be able to run diskutil list, and you should see your disks listed as Apple RAID disks, e.g.:

$ diskutil list
...
/dev/disk2
   #:                       TYPE NAME          SIZE       IDENTIFIER
   0:      GUID_partition_scheme              *750.2 GB   disk2
   1:                        EFI               209.7 MB   disk2s1
   2:                 Apple_RAID               749.8 GB   disk2s2
   3:                 Apple_Boot Boot OS X     134.2 MB   disk2s3
/dev/disk3
   #:                       TYPE NAME          SIZE       IDENTIFIER
   0:      GUID_partition_scheme              *750.2 GB   disk3
   1:                        EFI               209.7 MB   disk3s1
   2:                 Apple_RAID               749.8 GB   disk3s2
   3:                 Apple_Boot Boot OS X     134.2 MB   disk3s3

At this point the disks are just plain disks. The AppleRAID kernel extension isn’t managing the disks. Verify with

$ diskutil appleRAID list
No AppleRAID sets found
$

Since you can't use them as RAID any more, and so can't use the diskutil appleRAID delete command to convert the RAID set into normal disks, you have to trick OS X into mounting the disks. To do this you need to edit the partition table, without touching the data on the disk. You can do this with gpt:

$ sudo gpt show disk2
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34           6
         40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
     409640  1464471472      2  GPT part - 52414944-0000-11AA-AA11-00306543ECAC
 1464881112      262144      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
 1465143256           7
 1465143263          32         Sec GPT table
 1465143295           1         Sec GPT header
$ sudo gpt show disk3
      start        size  index  contents
          0           1         PMBR
          1           1         Pri GPT header
          2          32         Pri GPT table
         34           6
         40      409600      1  GPT part - C12A7328-F81F-11D2-BA4B-00A0C93EC93B
     409640  1464471472      2  GPT part - 52414944-0000-11AA-AA11-00306543ECAC
 1464881112      262144      3  GPT part - 426F6F74-0000-11AA-AA11-00306543ECAC
 1465143256           7
 1465143263          32         Sec GPT table
 1465143295           1         Sec GPT header
$

According to https://developer.apple.com/library/mac/technotes/tn2166/_index.html, the partition at index 2 with a partition type of 52414944-0000-11AA-AA11-00306543ECAC is an Apple_RAID partition. It's actually HFS+ with some other settings. Those settings get removed when converting it from RAID to non-RAID, but to get it mounted we can just change the partition type. First delete the entry from the partition table, then recreate it with the HFS+ type at exactly the same size.

$ sudo gpt remove -i 2 disk2
disk2s2 removed
$ sudo gpt add -b 409640 -s 1464471472 -t 48465300-0000-11AA-AA11-00306543ECAC disk2
disk2s2 added

OSX will mount the disk. It will probably tell you that it's been mounted read-only and can't be repaired. At that point you need to copy all the data off onto a clean disk, using rsync.

Once that is done you can do the same with the second disk and compare the differences between both your RAID members. When you have all the data back, you can decide whether to leave the AppleRAID.kext disabled or use it again. I know what I will be doing.

Categories: FLOSS Project Planets

Sebastien Goasguen: On Docker and Kubernetes on CloudStack

Tue, 2014-09-30 04:18

Docker has pushed containers to a new level, making it extremely easy to package and deploy applications within containers. Containers are not new, with Solaris containers and OpenVZ among several container technologies going back to 2005, but Docker has caught on quickly, as mentioned by @adrianco. The startup speed is not surprising for containers, and the portability is reminiscent of the Java goal to "write once run anywhere". What is truly interesting with Docker is the availability of Docker registries (e.g., Docker Hub) to share containers, and the potential to change application deployment workflows.

Rightly so, we should soon see IT move to Docker-based application deployment, where developers package their applications and make them available to Ops, very much like we have been using war files. Embracing a DevOps mindset/culture should be easier with Docker. Where it becomes truly interesting is when we start thinking about an infrastructure whose sole purpose is to run containers. We can envision a bare operating system whose single goal is to manage Docker-based services. This should make sysadmins' lives easier.

The role of the Cloud with Docker

While the buzz around Docker has been truly amazing and a community has grown overnight, some may think that this signals the end of the cloud. I think that is far from the truth: Docker may indeed become the killer app of the cloud.

An IaaS layer is what it is: an infrastructure orchestration layer, while Docker and its ecosystem will become the application orchestration layer.

The question then becomes: how do I run Docker in the cloud? And there is a straightforward answer: just install Docker in your cloud templates. Whether on AWS, GCE or Azure, or in your private cloud, you can prepare Linux-based templates that provide Docker support. If you are aiming for a bare operating system whose sole purpose is to run Docker, then the new CoreOS Linux distribution might be your best pick. CoreOS provides rolling upgrades of the kernel, systemd-based services, a distributed key-value store (etcd) and a distributed service scheduling system (fleet).

exoscale, an Apache CloudStack based public cloud, recently announced the availability of CoreOS templates.

Like exoscale, any cloud provider, be it public or private, can make CoreOS templates available, providing Docker within the cloud instantly.

Docker application orchestration, here comes Kubernetes

Running one container is easy, but running multiple coordinated containers across a cluster of machines is not yet a solved problem. If you think of an application as a set of containers, starting these on multiple hosts, replicating some of them, accessing distributed databases, providing proxy services and fault tolerance is the true challenge.

However, Google came to the rescue and announced Kubernetes, a system that solves these issues and makes managing scalable, fault-tolerant, container-based apps doable :)

Kubernetes is of course available on Google's public cloud, GCE, but also on Rackspace, Digital Ocean and Azure. It can also be deployed on CoreOS easily.

Kubernetes on CloudStack

Kubernetes is under heavy development; you can test it with Vagrant. Under the hood, aside from the Go code :), most of the deployment solutions use SaltStack recipes, but if you are a Chef, Puppet or Ansible user I am sure we will see recipes for those configuration management solutions soon.

But you surely got the same idea that I had :) Since Kubernetes can be deployed on CoreOS, and CoreOS is available on exoscale, we are just a breath away from running Kubernetes on CloudStack.

It took a little more than a breath, but you can clone kubernetes-exoscale and you will get running in 10 minutes, with a three-node etcd cluster and a five-node Kubernetes cluster running a Flannel overlay.

CloudStack supports EC2-like userdata, and the CoreOS templates on exoscale support cloud-init, so passing cloud-config files to the instance deployment was straightforward. I used libcloud to provision all the nodes, and created proper security groups to let the Kubernetes nodes access the etcd cluster and to let the Kubernetes nodes talk to each other, especially opening a UDP port to build a networking overlay with Flannel. Fleet is used to launch all the Kubernetes services. Try it out.
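As a rough sanity check once the machines are up (these are standard fleet commands; the exact unit names depend on the cloud-config files used), you can point fleetctl at the cluster and list what it sees:

fleetctl list-machines
fleetctl list-units

list-machines should show the nodes that joined the cluster, and list-units the Kubernetes services that fleet launched.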

Conclusions.

Docker is extremely easy to use, and by taking advantage of a cloud you can get started quickly. CoreOS will put your Docker work on steroids, with the ability to start apps as systemd services over a distributed cluster. Kubernetes will up that by giving you replication of your containers and proxy services for free (time).

We might see pure Docker-based public clouds (e.g., think of a Mesos cluster with a Kubernetes framework). These will look much more like PaaS, especially if they integrate a Docker registry and a way to automatically build Docker images (e.g., think of a continuous deployment pipeline).

But a "true" IaaS is actually very complimentary, providing multi-tenancy, higher security as well as multiple OS templates. So treating docker as a standard cloud workload is not a bad idea. With three dominant public clouds in AWS, GCE and Azure and a multitude of "regional" ones like exoscale it appears that building a virtualization based cloud is a solved problem (at least with Apache CloudStack :)).

So instead of talking about cloudifying your application, maybe you should start thinking about Dockerizing your applications and letting them loose on CloudStack.

Categories: FLOSS Project Planets

Bryan Pendleton: A fairly random collection of links on Merkle Trees

Mon, 2014-09-29 23:18

Just sitting around, hanging out, musing about Merkle Trees...

  • Merkle tree: Hash trees can be used to verify any kind of data stored, handled and transferred in and between computers. Currently the main use of hash trees is to make sure that data blocks received from other peers in a peer-to-peer network are received undamaged and unaltered, and even to check that the other peers do not lie and send fake blocks.
  • A Certified Digital Signature: The method is called tree authentication because the computation of H(1,n,Y) forms a binary tree of recursive calls. Authenticating a particular leaf Y(i) in the tree requires only those values of H() starting from the leaf and progressing to the root, i.e., from H(i,i,Y) to H(1,n,Y).
  • Recent Improvements in the Efficient Use of Merkle Trees: Additional Options for the Long Term Fractal Merkle Tree Representation and Traversal, shows how to modify Merkle’s scheduling algorithm to achieve a space-time trade-off. This paper was presented at the Cryptographer’s Track, RSA Conference 2003 (May 2003). This construction roughly speeds up the signing operation inherent in Merkle’s algorithm by an arbitrary factor of T, (less than H), at a cost of requiring more space: (2^T times the space).
  • Merkle Signature Schemes, Merkle Trees and Their Cryptanalysis: The big advantage of the Merkle Signature Scheme is that the security does not rely on the difficulty of any mathematic problem. The security of the Merkle Signature Scheme depends on the availability of a secure hash function and a secure one-time digital signature. Even if a one-time signature or a hash function becomes insecure, it can be easily exchanged. This makes it very likely that the Merkle Signature Scheme stays secure even if the conventional signature schemes become insecure.
  • Caches and Merkle Trees for Efficient Memory Authentication: Our work addresses the issues in implementing hash tree machinery in hardware and integrating this machinery with an on-chip cache to reduce the log N memory bandwidth overhead.
  • Protocol specification: Merkle trees are binary trees of hashes. Merkle trees in bitcoin use a double SHA-256, the SHA-256 hash of the SHA-256 hash of something.

    If, when forming a row in the tree (other than the root of the tree), it would have an odd number of elements, the final double-hash is duplicated to ensure that the row has an even number of hashes.

    First form the bottom row of the tree with the ordered double-SHA-256 hashes of the byte streams of the transactions in the block.

    Then the row above it consists of half that number of hashes. Each entry is the double-SHA-256 of the 64-byte concatenation of the corresponding two hashes below it in the tree.

    This procedure repeats recursively until we reach a row consisting of just a single double-hash. This is the Merkle root of the tree.
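    (A one-step sketch of this parent-hash computation appears just after this list.)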

  • Amazon's Dynamo: Merkle trees help in reducing the amount of data that needs to be transferred while checking for inconsistencies among replicas. For instance, if the hash values of the root of two trees are equal, then the values of the leaf nodes in the tree are equal and the nodes require no synchronization. If not, it implies that the values of some replicas are different. In such cases, the nodes may exchange the hash values of children and the process continues until it reaches the leaves of the trees, at which point the hosts can identify the keys that are “out of sync”. Merkle trees minimize the amount of data that needs to be transferred for synchronization and reduce the number of disk reads performed during the anti-entropy process.
  • Cassandra: Using Merkle trees to detect inconsistencies in data: A repair coordinator node requests Merkle tree from each replica for a specific token range to compare them. Each replica builds a Merkle tree by scanning the data stored locally in the requested token range. The repair coordinator node compares the Merkle trees and finds all the sub token ranges that differ between the replicas and repairs data in those ranges.
  • The Dangers of Rebasing A Branch: Personally, having studied Merkle Trees and discussed a possible use-case for using git/Merkle Trees as a caching solution, I view git as an entirely immutable structure of your code. Rebases break this immutability of commits.
  • Sigh. "grow-only", "rebase is dangerous", "detached head state is dangerous". STOP. Stop it now: git is a bag of commits organized into a tree (a tree of Merkle hash chains). Branches and tags are symbolic names for these. Think of it this way and there's no danger.

    ...

    I didn't need a local branch crutch to find my way around because I know the model: a tree of commits.

    Understanding the model is the key.

    There are other VCSes that also use Merkle hash trees. Internally they have the power that git has.

  • Google's end-to-end key distribution proposal: Smells like a mixed blockchain/git type approach - which is a good thing. The "super-compressed" version of the log tip sounds like git revision hash. The append-only, globally distributed log is pretty much like a blockchain.
  • Ticking time bomb: Given only one verified hash in such a system, no part of the data, nor its history of mutation can be forged. "History" can mean which software runs on your computer (TPM), which transactions are valid (Bitcoin), or which commits have been done in a SCM (git, mercurial).

    So git is not magical, it is just a practical implementation of something that works. Any other *general* solution will be based on similar basic principles. Mercurial does this and there is a GPG extension for it.

Wow, that's a lot.

I shall have to find more time to read...

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-09-29

Mon, 2014-09-29 18:58
  • Prototype

    Prototype is a brand new festival of play and interaction. This is your chance to experience the world from a new perspective with removable camera eyes, to jostle and joust to a Bach soundtrack whilst trying to disarm an opponent, to throw shapes as you figure out who got an invite to the silent disco, to duel with foam pool noodles, and play chase in the dark with flashlights. A unique festival that incites new types of social interaction, involving technology and the city, Prototype is a series of performances, workshops, talks, and games that spill across the city, alongside an adult playground in the heart of Temple Bar. Project Arts Centre, 17-18 October. looks nifty

    (tags: prototype festivals dublin technology make vr gaming)

  • Confessions of a former internet troll – Vox

    I want to tell you about when violent campaigns against harmless bloggers weren’t any halfway decent troll’s idea of a good time — even the then-malicious would’ve found it too easy to be fun. When the punches went up, not down. Before the best players quit or went criminal or were changed by too long a time being angry. When there was cruelty, yes, and palpable strains of sexism and racism and every kind of phobia, sure, but when these things had the character of adolescents pushing the boundaries of cheap shock, disagreeable like that but not criminal. Not because that time was defensible — it wasn’t, not really — but because it was calmer and the rage wasn’t there yet. Because trolling still meant getting a rise for a laugh, not making helpless people fear for their lives because they’re threatening some Redditor’s self-proclaimed monopoly on reason. I want to tell you about it because I want to make sense of how it is now and why it changed.

    (tags: vox trolls blogging gamergate 4chan weev history teenagers)

Categories: FLOSS Project Planets

Jim Jagielski: Shellshock: No, it IS a bash bug

Mon, 2014-09-29 11:32

Reading over http://paste.lisp.org/display/143864, I am surprised at just how wrong the entire post is.

The gist of the post is that the Shellshock bug is not bash's fault but rather, in this argument, the fault of Apache and other Internet-facing programs for not "sanitizing" the environment before it gets into bash's hands.

Sweet Sassy Molassy! What kind of horse-sh*t is that?

As "proof" of this argument, pjb uses the tired old excuse: "It's not a bug, it's a feature", noting that bash's execution of commands "hidden" in environment variables is documented; But then we get the best line of all:

The implementation detail of using an environment variable whose value starts with "() {" and which may contain further commands after the function definition is not documented, but could still be considered a feature 

As far as outlandish statements go, this one takes the cake. Somehow, Apache and other programs should sanitize magical, undocumented features, and their failure to do so is the problem; never mind that this magic is undocumented, as well as fraught with issues in and of itself.

Let's recall that if any other Bourne-type shell, or, in fact, any real POSIX-compliant shell (which bash claims to be), were being used in the exact situation that bash was being used, there would be no vulnerability. None. Nada. Zero. Replace bash with ksh, zsh, dash, ... and you'd be perfectly fine. No vulnerability, and CGI would work just fine. And let's also recall, again focusing on Apache (and, in fact, all web servers; it's not just Apache that is affected by this vulnerability but any web server, such as nginx, etc.), that the CGI specification specifically makes it clear that environment variables are exactly where the parameters of the client's request live.
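If you want to see the difference for yourself, the widely circulated one-liner demonstrates it: on an unpatched bash the command smuggled after the function definition runs, while another POSIX shell such as dash treats the variable as plain data.

env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
env x='() { :;}; echo vulnerable' dash -c 'echo this is a test'

The first prints "vulnerable" before the test message on a vulnerable bash; the second never does.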

Also, let's consider this: a shell is where the unwashed public interfaces with the OS. If there is ANY place where you don't want undocumented magic, especially magic that executes random code in an undocumented fashion, it AIN'T the shell. And finally, the default shell is also run by the start-up scripts themselves, again meaning that you want that shell to have as few undocumented bugs... *cough* *cough*, sorry, "features" as possible, and certainly not ones that could possibly run things behind your back.

Yes, this bug, this vulnerability, is certainly bash's, no doubt at all. But it also goes without saying that if bash were not the default shell (/bin/sh) on Linux and OSX, this would have been a weaker vulnerability. Maybe that was, and is, the main takeaway here. Maybe it is time for the default shell on Linux to "return" to the old Bourne shell or, at least, dash.

Categories: FLOSS Project Planets

Jim Jagielski: Shellshock

Mon, 2014-09-29 05:31

UPDATED: Sept 29, 2014 with current OSX Bash patch

First of all, when this was first found, we were looking for a cool name... It was found.

Anyway, as noted here [https://securityblog.redhat.com/2014/09/24/bash-specially-crafted-environment-variables-code-injection-attack/], the shellshock vulnerability is pretty nasty. What's interesting is that, in general, the *BSD variants aren't as vulnerable as other *NIX platforms, simply because the default shell on BSD is still the Bourne shell (the "real" sh) and not Bash itself (on Linux and OSX, for example, /bin/sh is either a copy of, or link to, /bin/bash).

Even so, BSD systems are not immune by any stretch of the imagination, since one attack vector is via web-servers and CGI, and it's likely that there are numerous CGI scripts that require/use Bash. So no matter what, patch your systems.
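A quick way to check whether a given box has this exposure through its default shell (and hence whether even non-CGI code paths are affected):

ls -l /bin/sh
/bin/sh --version

On Linux and OSX the second command will typically report a GNU bash version; most other Bourne-type shells will simply reject the flag.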


Categories: FLOSS Project Planets

Justin Mason: Links for 2014-09-27

Sat, 2014-09-27 18:58
  • La Maison des Amis

    Paul Hickey’s gite near Toulouse, available for rent! ‘a beautifully converted barn on 5 acres, wonderfully located in the French countryside. 4 Bedrooms, sleeps 2-10, Large Pool, Tennis Court, Large Trampoline, Broadband Internet, 30 Mins Toulouse/Albi, 65 Mins Carcassonne, 90 Mins Rodez’

    (tags: ex-iona gites france holidays vacation-rentals vacation hotels)

  • waxpancake on Ello

    The Ello founders are positioning it as an alternative to other social networks — they won’t sell your data or show you ads. “You are not the product.” If they were independently-funded and run as some sort of co-op, bootstrapped until profitable, maybe that’s plausible. Hard, but possible. But VCs don’t give money out of goodwill, and taking VC funding — even seed funding — creates outside pressures that shape the inevitable direction of a company.

    (tags: advertising money vc ello waxy funding series-a)

  • Inviso: Visualizing Hadoop Performance

    With the increasing size and complexity of Hadoop deployments, being able to locate and understand performance is key to running an efficient platform.  Inviso provides a convenient view of the inner workings of jobs and platform.  By simply overlaying a new view on existing infrastructure, Inviso can operate inside any Hadoop environment with a small footprint and provide easy access and insight.   This sounds pretty useful.

    (tags: inviso netflix hadoop emr performance ops tools)

  • The End of Linux

    ‘Linux is becoming the thing that we adopted Linux to get away from.’ Great post on the horrible complexity of systemd. It reminds me of nothing more than mid-90s AIX, which I had the displeasure of opsing for a while — the Linux distros have taken a very wrong turn here.

    (tags: linux unix complexity compatibility ops rant systemd bloat aix)

Categories: FLOSS Project Planets

Nick Kew: Forever war

Sat, 2014-09-27 18:17

Once again, we’re going to war against an ill-defined enemy.  But this time it’s clear: this is the enemy’s own agenda, and our Headless Chickens are merrily dancing to “Jihadi John”‘s tune.  As ever, we’ll take a bad situation and make it vastly worse.

When it’s demagogues like Galloway and Farage consistently talking the most sense on the subject of policy towards the world’s trouble spots, one can but shake the head and redouble one’s efforts to reduce complicity.

Oh, erm, and am I the only one to see the irony in all the Islamic State horror coming in this centenary year of 1914, as we look back at “Germans eat your babies”?


Categories: FLOSS Project Planets

Bruce Snyder: I Can Squeeze My Butt!

Sat, 2014-09-27 14:43
Two weeks ago I awoke to the discovery that I can squeeze my butt again! Those of you who read my last blog post know that I have paralysis across my butt and down the outsides of my hamstrings, and that in that post I said: 'Even if the movement of my feet does not return, I really wish that I could regain the feeling in my butt and the ability to squeeze the muscles so that I could build them back up again.' Well, believe it or not, I got my wish! I could hardly believe it myself! I was still lying in bed on Sunday morning when I made the discovery. I was in such disbelief that I laughed out loud and woke up Janene. As she heard me she bolted upright, bleary-eyed, and said, 'Are you OK?' Still laughing, I told her I can squeeze my butt and we both could hardly believe it. Even though it was a very small squeeze, it's a sign that the healing is starting to take place.
Technically, the muscles in the butt are the gluteal muscle group, as shown in the diagram, comprised of the gluteus maximus, gluteus medius and the gluteus minimus. The ability to squeeze these muscles is controlled by nerves that carry impulses sent from the brain, down the spinal column, to the muscle to cause a contraction. The fact that the nerves have healed enough to allow me to squeeze them is a really good sign; it means that my body is healing itself.
The squeeze was very small and quite weak, but it was a start. Because these muscles have basically been dormant for five months, they are terribly atrophied and therefore extremely weak. But even in the two weeks since this movement returned, I have been working the muscles to build them back up and the squeeze has only increased. At this point, it's not a huge increase but, as my Mom always told me growing up, 'Slow and steady wins the race.'
Who would have thought I would be so happy about such a minor thing? But when I experienced such a devastating injury that forever changed my life, I learned very quickly to be happy for what I still have, as I have mentioned before. Now it's just a matter of working these muscles regularly via rehab to bring them back to life. Speaking of rehab, I also made a big change on this front last week.

Changing My Rehab

Since being released from Craig Hospital in June, I have been going back to Craig for rehab. After all, it is a world-renowned hospital for spinal cord and brain injuries. When I was first released from the hospital, my Dad was still in town and was driving me wherever I needed to go, including to rehab at Craig Hospital. At first, I was going to rehab at Craig three times a week. It helped a lot to be in close contact with my physical therapist and to continue seeing my friends there. But it didn't take long for me to really get tired of making the 90 mile round trip and sitting in traffic for 2.5-3+ hours each time we made the trip. Remember, this was when I was still exhausted all the time, and this drive only made things worse for me. I also got wise to the fact that insurance companies only pay for a certain number of visits. So I decided to keep doing my rehab at home and only check in with my PT at Craig once a week to more or less maximize my PT visits. For a while this worked, but because I am now back to work full-time, even making the trip to Craig once a week sank a lot of time, and I didn't get a lot of benefit from a one-hour appointment once a week. So I began looking into other options, including Boulder Community Health's Mapleton Center for Outpatient Rehabilitation and also a company that specializes in spinal cord injury (SCI) rehab named Project Walk.
Project Walk was especially compelling to me because it focuses on rebuilding the muscle mass that SCI patients lose from the injury and hospitalization. The professionals at Project Walk help patients to design a workout specifically for them and their situation to focus on their own goals. My ultimate goal is to walk again without the need for braces and crutches, and although this is dependent upon my body and its ability to heal, there's a lot that can be done in the meantime to get my body ready for more movement to return. I applied to Project Walk and received a call back within a day and began talking to them. Everything sounded great and was very much in tune with the way that I have always enjoyed pushing myself in my physical fitness, but there was one catch -- they wanted me to come to their San Diego office for three weeks. The problem with this is that I am just too busy at work right now with recruitment duties for open positions and I don't feel like I can put this on hold for three weeks. Because of this, I decided to look into a more local solution in Boulder for now. 
Boulder Community Health has an outpatient rehabilitation clinic called the Mapleton Center. A dear friend of mine who experienced a spinal injury a couple of years ago went there for his rehab and told me that they really helped him. So I paid them a visit and got an evaluation from a PT who worked at the Spaulding Institute in Boston prior to coming to the Mapleton Center. Spaulding is a rehabilitation clinic out east that is well-known for its SCI program. So this week I began doing rehab at the Mapleton Center to see if this PT can help get me on the road to a more rigorous workout that will help me work toward my goals. This certainly doesn't mean that I have ruled out Project Walk; in fact, it is still very much on my mind.
In speaking to Project Walk, I have learned that this place is a premier rehab clinic for SCI patients. Based on 10 years of medical research and partnering with hospitals and universities, Project Walk is like no other rehab clinic I have discovered. And although they originally wanted me to come out there for three weeks, in speaking with them they suggested that perhaps we could condense it to a week and just work a lot more hours while I'm there. Furthermore, I also learned that they are opening a clinic in the Boulder/Denver area in March 2015. So I'm kinda thinking that I need to see how things play out at the Mapleton Center before traveling to Project Walk in San Diego. If I can attend PT in Boulder for a while and then go to Project Walk in San Diego, perhaps I can be ready to take on even more when the Project Walk clinic opens here in the Boulder/Denver area.

Dinner With Gareth and Mike

This past week I had dinner with my coworker Mike O'Donnell and his buddy Gareth, who helped me as I lay suffering in the street right after the accident. Not only was it wonderful to see Gareth again, this time with a clear head, it was also great to have dinner with my co-worker Mike, who I really like. I learned a lot about both Gareth and Mike that night and I really enjoyed our time together. Spending some time with Gareth in a social setting really clued me in to who he is, and I discovered that we have a lot in common in terms of the way we look at the world. Gareth also told me about a fascinating book that I'm just beginning to read now.

Books

Anyone who knows me knows that I'm constantly reading something. I'm always on the lookout for new books to read and, in fact, I even keep a list of books in a notebook in Evernote (which I use for everything now). The book Gareth told me about is titled Biology of Belief. This book is about how new research shows that our DNA does not control our biology; instead, our DNA is controlled by our positive and negative thoughts. Certainly this topic is of extreme interest to me right now because of my medical situation. I don't have much to say about this topic yet because I haven't read the book. But suffice it to say that I am reading and trying everything I can get my hands on at this point to help heal myself.
When I told Janene about this book, she said it sounded similar to one recommended recently by a co-worker titled You Are the Placebo. This book is about how one's brain and body are shaped by their thoughts, their emotions and their intentions, not the other way around. Again, a captivating topic for me right now so I plan to read this book next. 
Perhaps these two books will help me move from the hope of more movement to the real belief that I am going to get movement back and I am going to walk one day. After all, I did tell Project Walk that my goal is to walk, but my dream is to one day cycle and run again. 
Categories: FLOSS Project Planets

Bryan Pendleton: What I'm reading, Aurora ATC edition

Sat, 2014-09-27 12:17

Amazingly, some of our relatives in Illinois, even those living in Downers Grove, hadn't heard about the events in Aurora. I think it's one of those bizarre events that affected a handful of people very significantly, some of those people many thousands of miles away from the incident, but didn't affect others even one whit.

Anyway...

  • FAA contractor charged with fire that halted Chicago flights: "This is a nightmare scenario when we thought systems were in place to prevent it," said aviation analyst Joseph Schwieterman of DePaul University in Chicago. "Technology is advancing so fast that ... there's less of a need for air traffic control to be so geographically oriented. I think the FAA's going to find itself under a microscope."
  • Everything you need to know about the Shellshock Bash bug: The risk centres around the ability to arbitrarily define environment variables within a Bash shell which specify a function definition. The trouble begins when Bash continues to process shell commands after the function definition resulting in what we’d classify as a “code injection attack”.
  • MySQL 5.7.5: GROUP BY respects functional dependencies!: Most RDBMS-es implement the SQL92 behavior, and generate an error in case a non-aggregate expression appears in the SELECT-list but not the GROUP BY-clause. Because MySQL would not generate an error at all and instead would simply allow such queries while silently producing a non-deterministic result for such expressions, many users got bitten and frustrated.
  • Scaling NoSQL databases: 5 tips for increasing performance: Virtualization can be great. It provides the flexibility to use a single machine for multiple purposes, with reasonable security for non-critical business data. Unfortunately, it also influences memory access speed, which is critical for some NoSQL databases. Depending on the hypervisor and the underlying hardware capabilities, it can add 20-200% penalty in accessing memory for NoSQL workloads. I have witnessed this in testing, and it is also documented by a number of industry benchmarks. Newer generation hardware helps with better support for hardware assisted memory management unit (MMU), but there is still a significant impact for workloads that generate a lot of Translation Lookaside Buffer (TLB) misses (as NoSQL databases are wont to do).
  • Why Scrum Should Basically Just Die In A Fire: Of course, musing, considering, mulling things over, and coming to realizations all constitute a significant amount of the actual work in programming. It is impossible to track whether these realizations occur in the office or in the shower. Anecdotally, it's usually the shower. Story points, meanwhile, are completely made-up numbers designed to capture off-the-cuff estimates of relative difficulty. Developers are explicitly encouraged to think of story points as non-binding numbers, yet velocity turns those non-binding estimates into a number they can be held accountable for, and which managers often treat as a synonym for productivity. "Agile" software exists to track velocity, as if it were a meaningful metric, and to compare the relative velocity of different teams within the same organization.

    This is an actual thing which sober adults do, on purpose, for a living.

    "Velocity" is really too stupid to examine in much further detail, because it very obviously disregards this whole notion of "working software as a measure of progress" in favor of completely unreliable numbers based on almost nothing.

    ...

    Sacrificing "working software as a measure of progress" to meaningless numbers that your MBAs can track for no good reason is a pretty serious flaw in Scrum. It implies that Scrum's loyalty is not to the Agile Manifesto, nor to working software, nor high-quality software, nor even the success of the overall team or organization. Scrum's loyalty, at least as it pertains to this design decision, is to MBAs who want to point at numbers on a chart, whether those numbers mean anything or not.

  • Relativistic hash tables, part 1: Algorithms: One might wonder whether the resizing of hash tables is common enough to be worth optimizing. As it turns out, picking the correct size for a hash table is not easy; the kernel has many tables whose size is determined at system initialization time with a combination of heuristics and simple guesswork. But even if the initial guess is perfect, workloads can vary over time. A table that was optimally sized may, after a change, end up too small (and thus perform badly) or too big (wasting memory). Resizing the table would fix these problems, but, since that is hard to do without blocking access to the table, it tends not to happen. The longer-term performance gains are just not seen to be worth the short-term latency caused by shutting down access to the table while it is resized.
  • How to Squeeze a Huge Ship Down a Tiny River: Everything aligned on Monday, and the six captains set to work in the afternoon. Given the intense concentration needed to do the job, they worked in pairs for 90 minute shifts. One captain steered the bow, the other guided the stern. The unusual maneuvering system helps a ship this big precisely navigate tight turns and narrow squeezes, much like a tiller driver helps the driver of a hook-and-ladder firetruck navigate city streets.

    Spectators lining the river could be forgiven for thinking Quantum was headed upriver, given that it went downriver backward. Using the propellers to pull from the front offers better control than pushing from the back (the same is true for front wheel-drive cars). Tug boats, attached directly, rather than by a cable, to the bow and stern of the ship, provided extra control.

  • File systems, Data Loss and ZFS: ZFS is able to protect all of these things between the disks and system memory, with each change protected after an IO operation (e.g. a write) returns from the kernel. This protection is enabled by ZFS’ disk format, which places all data into a Merkle tree that stores 256-bit checksums and is changed atomically via a two-stage transaction commit. No other major filesystem provides such guarantees.

    However, that is not to say that ZFS cannot lose data.

  • Another Patent Troll Slain. You Are Now Free To Rotate Your Smartphone. Rotatable owned a patent that it claimed covers the screen rotation technology that comes standard in just about every smartphone. You know, when you flip your device sideways and the screen shifts orientation from portrait mode to landscape mode? Like nearly all the apps in the Apple and Android app stores, Rackspace uses standard functionality provided by Apple’s libraries and Android open source software to provide this display feature in our mobile cloud applications.
  • The Dropbox terabyte conundrum: When you move items to Transporter Library, they are moved off of your device and into the Transporter’s personal cloud. You are offloading files from your system, for hosting elsewhere. Accessing them is slow, of course, because they’re not actually on your device. But they’re also not taking up space on that tiny solid-state drive of yours.

    It’s a clever approach, and one that I hope Dropbox adopts—but I’m a little concerned that Dropbox is so committed to its metaphor that it won’t want to complicate it like this. Allowing direct disk-like access to Dropbox is very different than syncing files, so it might require major changes to Dropbox’s infrastructure. I can see how it might not be a development priority.

  • Liverpool beat Middlesbrough after 30-penalty Capital One Cup shoot-out: It was the longest penalty shoot-out in the history of the League Cup, the previous record set at 9-8 on three occasions, and more extensive than the FA Cup’s highest total when Macclesfield beat Forest Green 11-10 in 2001. Of major English competitions only the Football League Trophy can equal it, also boasting a 14-13 shoot-out.

In other news, I made Hacker News!

For an essay I wrote last January.

But it was cool nonetheless.

And thanks, Kale, for including me in your newsletter!

Categories: FLOSS Project Planets

Nick Kew: Defending against shell shock

Fri, 2014-09-26 14:42

I started writing a longer post about the so-called shell shock, with analysis of what makes a web server vulnerable or secure.  Or, strictly speaking, not a webserver, but a platform an attacker might access through a web server.  But I’m not sure when I’ll find time to do justice to that, so here’s the short announcement:

I’ve updated mod_taint to offer an ultra-simple defence against the risk of shell shock attacks coming through Apache HTTPD, versions 2.2 or later.  A new simplified configuration option is provided specifically for this problem:

LoadModule taint_module modules/mod_taint.so
Untaint shellshock

mod_taint source and documentation are at http://people.apache.org/~niq/mod_taint.c and http://people.apache.org/~niq/mod_taint.html respectively.

Here’s some detail from what I posted earlier to the Apache mailinglists:

Untaint works in a directory context, so can be selectively enabled for potentially-vulnerable apps such as those involving CGI, SSI, ExtFilter, or (other) scripts.
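For example, to enable it just for a CGI directory (the path here is hypothetical):

<Directory "/var/www/cgi-bin">
    Untaint shellshock
</Directory>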

This goes through all Request headers, any PATH_INFO and QUERY_STRING, and (just to be paranoid) any other subprocess environment variables. It untaints them against a regexp that checks for “()” at the beginning of a variable, and returns an HTTP 400 error (Bad Request) if found.

Feedback welcome, indeed solicited. I believe this is a simple but sensible approach to protecting potentially-vulnerable systems, but I’m open to contrary views. The exact details, including the shellshock regexp itself, could probably use some refinement. And of course, bug reports!


Categories: FLOSS Project Planets

Bryan Pendleton: Well, that was a bust.

Fri, 2014-09-26 12:19

Surely we weren't the only people affected by today's air traffic control snafu.

Our 6 day trip to Michigan won't happen, sadly; we declined the suggestion that we fly to "Baltimore or maybe Saint Louis, and then drive from there."

Instead, we'll plan another trip, perhaps this winter (we're already planning to go for Thanksgiving, too.)

The world can throw you a curveball; you just have to deal with it.

Categories: FLOSS Project Planets