Based on this idea, I created a Wunderbar jQuery filter to “desugar” Wunderbar calls into jQuery calls. The tests show some of the conversions. I also updated my Bootstrap modal dialog directive to make use of this: before => after.
Mark Miller: I really think SolrCloud in Solr 4.6.1 has reached a milestone in terms of stability. Hardening a distributed...
Hardening SolrCloud is the hardest thing I have ever faced, and honestly, I have spent just crazy amounts of time on it. My work on Lucene, Solr, and SolrCloud is really the largest thing of consequence I have dropped into the world and I would be crushed if each did not see some level of success in the coming years. This is where I am putting my 10,000 hours of engaged work.
There is a light at the end of the tunnel though. As Solr users have started migrating to SolrCloud en masse, feedback and bug reports have surged. Teams are banging on the system and reporting back detailed reports about where things went wrong. This is the lifeblood of a hardened system. Lots of users, lots of environments, lots of testing, lots of feedback.
SolrCloud had a large early advantage in user base because the Solr community was so strong and so large - but in the early days, of course, it mainly was used by early adopters and there are never as many of those as you would like. That was a critical phase, and many of those early adopters were invaluable. Recently, we have moved beyond that though. People picking up Solr are now starting with SolrCloud. Many older Solr users are migrating when they update to recent versions. Use is jumping drastically, and the results of this use on the code are helping me to smile again.
It's hard to be happy in the early days of a distributed system. All you see are the known flaws, the things you meant to get to, the tests you know should have been written but have not been yet. Building a distributed system is just a lot of grunt work - a marathon of pushing boulders up a hill. You spend more time bug fixing and debugging than you do coding. Now that SolrCloud is starting to transition beyond that phase, it's easier to start feeling less critical. No doubt there is a lot to do, but the underpinnings are starting to really firm up. And there is no better feeling.
If you tried SolrCloud in the early days and ran into issues, I really encourage you to take another look at 4.6.1 when it comes out in a few days. Sure, there is a lot we still have planned, some rough edges in different spots, plenty of improvements and additions coming - but what is there is starting to look and behave pretty nicely. In my opinion, the future for Solr and SolrCloud is looking bright.
I’ve been using our local Lidl recently, because their policy of regularly baking throughout the day means I can pick up fresh croissants and pains au chocolat whenever I go, whereas the local Tesco, Sainsbury’s, and Waitrose have usually run out by mid-morning. Are the so-called discount supermarkets really cheaper than the mainstream supermarkets? Here’s the result of one unscientific survey.

This morning I checked my till receipt against Tesco online. Some items cost the same regardless of supermarket (fabric softener, fresh orange juice). Some items don’t have direct equivalents across stores, so price comparisons aren’t possible. And for some items the price is not significantly different (fresh milk, toilet paper). On today’s basket of comparable items, Lidl was £10.62 cheaper (costing £18.46 instead of £29.08).

There are some real eye-openers. Eggs are 1.5x more expensive at Tesco. Fresh vegetables were often almost twice the price at Tesco. And what about my fresh croissants and pains au chocolat? £0.29 and £0.39 at Lidl, vs £0.80 each at Tesco. Over twice the price — on today’s shop, buying just these alone saved me £4.70. And they were fresh from the oven, still warm when I got them home.
this is definitely one to send a consultation document response to
Sugar blocks concrete from setting. This I did not know
Very easy and very nice.
Dry-fry the Cs (all curries are mostly made with spices that start with C … seriously), then grind them. In the meantime, or after, fry thinly sliced onions + ginger in olive oil. Once they’re slightly browned, add the spices (including garam masala) and some butter. Fry/stir for a little longer (maybe a couple of minutes). Add the diced parsnips and coat them, then add milk and chicken stock (1:2 ratio) to cover. Boil for about 30 minutes, until the parsnips are somewhat soft, then blitz. You probably need some salt, too.
That gives a basic recipe. I added sliced grilled sausages.
I think shredded roast duck legs would also go well, but I haven’t tried yet.
Chilli, either fresh (which would look nice added at the end) or ground, would be good, too, but not necessary.
Who is at fault, Google or Oracle?
For example, WFSTCompletionLookup compiles all suggestions and their weights into a compact Finite State Transducer, enabling fast prefix lookup for basic suggestions.
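To make the idea concrete, here is a minimal Python sketch of weighted prefix lookup. It is not the Lucene implementation: a sorted list stands in for the compact FST, but it shows the same contract of returning the highest-weighted completions for a typed prefix (suggestion texts and weights below are made up):

```python
import bisect

def build(suggestions):
    # (text, weight) pairs, sorted by text so that all completions of a
    # prefix form one contiguous range (loosely what the FST gives you)
    return sorted(suggestions)

def lookup(index, prefix, top_n=2):
    keys = [text for text, _ in index]
    lo = bisect.bisect_left(keys, prefix)
    hi = bisect.bisect_right(keys, prefix + "\uffff")
    hits = index[lo:hi]
    # highest-weighted completions first
    hits.sort(key=lambda pair: -pair[1])
    return [text for text, _ in hits[:top_n]]

index = build([("top gun", 30), ("top hat", 10), ("toy story", 20)])
print(lookup(index, "top"))  # ['top gun', 'top hat']
```

The real WFSTCompletionLookup gets the same top-weighted-completions behavior, but with the weights and shared prefixes encoded directly in the transducer.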
AnalyzingSuggester improves on this by using an Analyzer to normalize both the suggestions and the user's query so that trivial differences in whitespace, casing, stop-words, synonyms, as determined by the analyzer, do not prevent a suggestion from matching.
Finally, AnalyzingInfixSuggester goes further by allowing infix matches so that words inside each suggestion (not just the prefix) can trigger a match. You can see this one in action at the Lucene/Solr Jira search application (e.g., try "python") that I recently created to eat our own dog food. It is also the only suggester implementation so far that supports highlighting (this has proven challenging for the other suggesters).
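A toy Python sketch of the infix-plus-highlighting idea (again, not the Lucene implementation, which analyzes and indexes the suggestions; the sample suggestions below are invented):

```python
def infix_suggest(suggestions, query, top_n=5):
    # toy infix matcher: the query may match any token inside the
    # suggestion, not just the leading one, and matched tokens are
    # wrapped in <b> tags for highlighting
    q = query.lower()
    hits = []
    for text, weight in suggestions:
        tokens = text.split()
        if any(t.lower().startswith(q) for t in tokens):
            highlighted = " ".join(
                "<b>%s</b>" % t if t.lower().startswith(q) else t
                for t in tokens
            )
            hits.append((weight, highlighted))
    hits.sort(reverse=True)  # highest weight first
    return [h for _, h in hits[:top_n]]

suggestions = [("SOLR-1234: python client for Solr", 10),
               ("LUCENE-5214: FreeTextSuggester", 5)]
print(infix_suggest(suggestions, "python"))
```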
Yet a common limitation of all of these suggesters is that they can only suggest from a finite set of previously built suggestions. This may not be a problem if your suggestions are past user queries and you have tons and tons of them (e.g., you are Google). Alternatively, if your universe of suggestions is inherently closed, such as the movie and show titles that Netflix's search will suggest, or all product names on an e-commerce site, then a closed set of suggestions is appropriate.
N-Gram language models
For everyone else, where a high percentage of the incoming queries fall into the never-seen-before long tail, Lucene's newest suggester, FreeTextSuggester, can help! It uses the approach described in this Google blog post.
Rather than precisely matching a previous suggestion, it builds up a simple statistical n-gram language model from all suggestions and looks at the last tokens (plus the prefix of whatever final token the user is typing, if present) to predict the most likely next token.
For example, perhaps the user's query so far is: "flashforge 3d p", and because flashforge is an uncommon brand of 3D printer, this particular suggestion prefix was never added to the suggester. Yet, "3d printer" was a frequently seen phrase in other contexts (different brands). In this case, FreeTextSuggester will see "3d" and the "p" prefix for the next token and predict printer, even though "flashforge 3d printer" was never explicitly added as a suggestion.
You specify the order (N) of the model when you create the suggester: larger values of N require more data to train properly but can make more accurate predictions. All lower order models are also built, so if you specify N=3, you will get trigrams, bigrams and unigrams, all compiled into a single weighted FST for maximum sharing of the text tokens. Of course, larger N will create much larger FSTs. In practice N=3 is the highest you should go, unless you have tons of both suggestions to train and RAM to hold the resulting FST.
To handle sparse data, where a given context (the N-1 prior words) was not seen frequently enough to make accurate predictions, the suggester uses the stupid backoff language model (yes, this is really its name, and yes, it performs well!).
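The two ideas above can be sketched together in a few lines of Python. This is a toy model, not Lucene's FST-based implementation, and the corpus and brand names are made up; it builds all n-gram counts up to order N, then scores candidate next tokens with stupid backoff:

```python
from collections import Counter

def train(corpus, n=3):
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for order in range(1, n + 1):  # build all lower-order models too
            for i in range(len(tokens) - order + 1):
                counts[tuple(tokens[i:i + order])] += 1
    return counts

def score(counts, context, word, alpha=0.4):
    # stupid backoff: relative frequency if the full context was seen,
    # otherwise back off to a shorter context, discounted by alpha
    if not context:
        total = sum(c for gram, c in counts.items() if len(gram) == 1)
        return counts[(word,)] / total if total else 0.0
    if counts[context + (word,)] > 0:
        return counts[context + (word,)] / counts[context]
    return alpha * score(counts, context[1:], word, alpha)

def predict(counts, context, prefix=""):
    # rank every known token that starts with the typed prefix
    vocab = {g[0] for g in counts if len(g) == 1 and g[0].startswith(prefix)}
    return max(vocab, key=lambda w: score(counts, context, w))

counts = train(["makerbot 3d printer", "prusa 3d printer", "flashforge dreamer"])
# "flashforge 3d printer" was never seen, but "3d printer" was frequent:
print(predict(counts, ("flashforge", "3d"), prefix="p"))  # printer
```

Even though the trigram ("flashforge", "3d", "printer") never occurred, backoff to the bigram ("3d", "printer") makes "printer" the top prediction, which is exactly the flashforge scenario described above.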
I expect the best way to use this new FreeTextSuggester will be as a fallback: you would first use one of the existing exact match suggesters, but when those suggesters fail to find any suggestions for a given query, because it's "unusual" and has crossed over into the long tail, you then fall back to FreeTextSuggester.
Google seems to use such a two-mode approach to suggestions as well: if you type "flashforge 3d p" you should see something like this, where each suggestion covers your entire query so far (indeed, Google has heard of the flashforge brand of 3d printer!):
But then if you keep typing and enter "flashforge 3d printer power u", the suggestions change: instead of suggesting an entire query matching everything you have typed, Google instead suggests the last word or two:
As usual, this feature is very new and likely to contain exciting bugs! See the Jira issue, LUCENE-5214, for details. If you play with this new suggester, please start a discussion on Lucene's user list!
With my latest project I had to work with Vagrant and PostgreSQL. It was my first project using PostgreSQL so I had no tool around to work visually with it.
Here's a random collection of stuff I enjoyed...
- Some of what we did at Danger: The future that everyone forgot. The early cast at Danger was largely from Apple, WebTV, General Magic, and Be. I was one of those Apple people. Joe had invited me to lunch one day—sushi, as I recall—and when we were finished eating he casually mentioned that he and Andy had started a new company and asked if I wanted a job. I said, “yes,” without even asking what the company was going to do. I figured that it would probably be cool.
- Evgeny vs. the internet. Depending on whom you ask, Evgeny Morozov is either the most astute, feared, loathed, or useless writer about digital technology working today. Just 29 years old, from an industrial town in Belarus, he appeared as if out of nowhere in the late aughts, amid the conference-goers and problem solvers working to shape our digital futures, a hostile messenger from a faraway land brashly declaring the age of big ideas and interconnected bliss to be, well, bullshit.
- Book Review: Good Math. Software engineers need to have a good understanding of mathematics. In this newsletter, we review a book written by a geek and aimed at the geek who wants to discover interesting facts about maths.
- Using Rust for an Undergraduate OS Course. The default language choice for Operating Systems courses is C. Nearly all (at least 90% from my cursory survey, the remainder using Java; if anyone knows of any others, please post in the comments!) current and recent OS courses at major US universities use C for all programming assignments. C is an obvious choice for teaching an Operating Systems course since it is the main implementation language of the most widely used operating systems today (including Unix and all its derivatives), and is also the language used by nearly every operating systems textbook (with the exception of the Silberschatz, Galvin, and Gagne textbook, which does come in a Java version). To paraphrase one syllabus, "We use C, because C is the language for programming operating systems."
- Where will we live? A housing shortage that has been building up for the past thirty years is reaching the point of crisis. The party in power, whose late 20th-century figurehead, Margaret Thatcher, did so much to create the problem, is responding by separating off the economically least powerful and squeezing them into the smallest, meanest, most insecure possible living space. In effect, if not in explicit intention, it is a let-the-poor-be-poor crusade, a Campaign for Real Poverty. The government has stopped short of explicitly declaring war on the poor. But how different would the situation be if it had?
- Albanian Survival... it can be done! The Ottomans in their extreme arrogance will only leave a token force of between 2K and 3K to besiege your capital. This is your saving grace. The smaller army will take longer to defeat the garrison and, unless dice rolls perpetually go against you as they are wont to do in critical battles, you should be able to defeat the army with your starting 3K and reset the siege.
- How a 1,500-ton ocean liner turns into a cannibal-rat-infested ghost ship. But both De Rhoodes and the Irish coast guard already tried to find the ocean liner a slew of times last year (paywall), to no avail, as the New Scientist reports. The New Scientist explains that surveillance equipment has some pretty big deficiencies when pitted against the ocean’s vastness.
- Swell that produced huge waves in Hawaii to hit California coast. http://www.latimes.com/local/lanow/la-me-ln-high-surf-california-coast-20140123,0,2062739.story According to one buoy northwest of the island of Kauai, the surf heading toward Hawaii was at its highest level since 1986, Tom Birchard, a senior forecaster for the National Weather Service in Honolulu, told the Los Angeles Times.
- An Illustrated Guide to F1's Radical New Engine. In recent years Formula 1 has made a big push toward efficiency, and to make the technology propelling guys like Sebastian Vettel and Fernando Alonso around the track at least somewhat relevant to the cars the rest of us drive. They’ve experimented with kinetic energy recovery systems and even toyed with the idea of making the cars run only on electricity in the pits. Beginning this year, teams are downsizing from 2.4-liter V8s to 1.6-liter V6s that feature direct injection, turbocharging and a pair of energy recovery systems that pull in juice from exhaust pressure and braking.
I am a proud owner of a NinjaBlocks device that I use to control my home (blinds, hot water, heater, presence detection, …); welcome to the Internet of Things! But that’s a story for future posts. The important thing is that the device is actually a BeagleBone board running Ubuntu, connected to an Arduino board.
I thought, what could be better than managing the NinjaBlock's Ubuntu with Puppet? There are a number of files and services I added there, and it would be nice to have Puppet installed. But the official Ubuntu packages only offer Puppet 2.7.11, so I installed the PuppetLabs repository package and tried to install Puppet 3, failing miserably because there is no Facter build for the armhf platform, the same one used in Raspbian.

```shell
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
sudo apt-get update
sudo apt-get install puppet
```
and got this
```
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 puppet : Depends: puppet-common (= 3.4.2-1puppetlabs1) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
```
When I tried to install all the required packages

```shell
sudo apt-get install puppet puppet-common facter
```
then I found the actual error
```
The following packages have unmet dependencies:
 facter : Depends: dmidecode but it is not installable
E: Unable to correct problems, you have held broken packages.
```
Investigating a bit, I found that facter has a dependency on dmidecode that should be optional, as dmidecode is not available for the ARM platform and is not really needed by facter.
The solution? Rebuild the facter package without that dependency, easily done with this script. When vi opens, just delete the dmidecode dependency and you will get the fixed facter_1.6.18-1puppetlabs1_all.modified.deb.

```shell
curl -O http://apt.puppetlabs.com/pool/precise/main/f/facter/facter_1.6.18-1puppetlabs1_all.deb
curl -O https://gist.github.com/carlossg/8578202/raw/70689b1b74517cc4b0743e54d84dae8375503159/videbcontrol.sh
bash videbcontrol.sh facter_1.6.18-1puppetlabs1_all.deb
# remove the dependency on dmidecode in the editor that opens
sudo dpkg -i facter_1.6.18-1puppetlabs1_all.modified.deb
# if you want to use ruby 1.9 instead of 1.8
sudo apt-get install libaugeas-ruby1.9.1 ruby1.9.1
# puppet may not install if all the dependencies are not listed
sudo apt-get install puppet puppet-common
# mark dependencies as automatically installed so they are removed when removing puppet
sudo apt-mark auto facter libaugeas-ruby1.9.1 puppet-common
```
1) Kerberos Signature/Encryption support
Support was added in WSS4J 1.6.2 to obtain a Kerberos ticket from a KDC (Key Distribution Center) and convert it to a BinarySecurityToken to be inserted into the security header of a SOAP request. On the receiving side, support was added to validate the received Kerberos ticket accordingly. In WSS4J 1.6.3, support was added to use the secret key associated with a Kerberos Token to secure (sign and encrypt) the request. However, this functionality came with the limitation that there was no out-of-the-box way to extract the Symmetric Key on the receiving side, to decrypt the request or validate the Signature.
Instead, a KerberosTokenDecoder interface was provided, which defined methods for setting the AP-REQ token and current Subject, and a method to then get a session key. To support signature and encryption with Kerberos on an inbound request, the user had to implement this interface and set it on the KerberosTokenValidator. This process, along with a sample KerberosTokenDecoder implementation which relied on low-level Java APIs, was described in a previous blog post.
In WSS4J 2.0.0, a default implementation of the KerberosTokenDecoder interface is provided that can process Kerberos secret keys correctly, using functionality from Apache Directory. Therefore support for using Kerberos tokens to sign and encrypt SOAP messages is now provided out-of-the-box in WSS4J 2.0.0.
2) Action enhancements
When not using WSS4J with a WS-SecurityPolicy aware stack such as Apache CXF, outbound security is configured via "Actions". The list of supported actions is defined on the WSS4J configuration page, e.g. "UsernameToken", "Signature", "Encrypt", etc. There are some enhancements to "Actions" in WSS4J 2.0.0.
a) Custom Token Action
A new "CustomToken" Action is defined in WSS4J 2.0.0. If this action is defined, a token (DOM Element) will be retrieved from a CallbackHandler via WSPasswordCallback.Usage.CUSTOM_TOKEN and written out as is in the security header. This provides for an easy way to write out tokens that have been retrieved out of band.
b) Action tokens
A significant new feature for actions in WSS4J 2.0.0 is the ability to associate an Action with a corresponding token. In WSS4J 1.6.x, all actions are configured from configuration that is shared between them. So, for example, a Signature action takes its key and algorithm information from the corresponding configuration tags. However, this is problematic if you want to execute two Signature actions, as it is not possible with this approach to use (e.g.) two different Signature algorithms.
In WSS4J 2.0.0, the WSHandler class is configured with a list of HandlerAction Objects, which associate an Action with a SecurityActionToken. Default implementations of the SecurityActionToken interface are provided for signature, encryption etc., and are populated from the standard configuration tags. However, to add some custom configuration for a given Action, you can implement the SecurityActionToken interface to provide configuration specific to that Action.
3) Optional Signature and Encryption parts
In WSS4J 1.6.x, signature and encryption parts are determined respectively by the "signatureParts" and "encryptionParts" configuration tags. However, if a part is specified that does not exist in the request, then an exception is thrown. In WSS4J 2.0.0, two new configuration tags are defined called "optionalSignatureParts" and "optionalEncryptionParts". These specify the parts to sign and encrypt, but no exception is thrown if the parts cannot be located. The tags can be used in an analogous way to the WS-SecurityPolicy SignedParts and EncryptedParts policies, which only require that an Element be Signed/Encrypted if it appears in the request.
4) More sophisticated Basic Security Profile enforcement
Apache WSS4J 1.6.x provided support for enforcing the Basic Security Profile 1.1 (BSP) restrictions. However, one issue with the implementation was that it was not possible to ignore a particular set of rules defined in the specification; enforcement was "all or nothing". In WSS4J 2.0.0, it is still possible to disable all Basic Security Profile rules, as in WSS4J 1.6.x; however, it is now also possible to specify individual rules to ignore. The RequestData class has a setIgnoredBSPRules method that takes a list of BSPRule Objects as an argument. The BSPRule class contains a complete list of Basic Security Profile rules that are enforced in WSS4J.
Note that when BSP Compliance was switched off on the outbound side in WSS4J 1.6.x, it had the effect that an InclusiveNamespaces PrefixList was not generated as a CanonicalizationMethod child of a Signature Element (as required by the BSP specification). In WSS4J 2.0.0, this is now controlled by a separate configuration tag "addInclusivePrefixes", which defaults to true.
5) Configurable SAML SubjectConfirmation validation
WSS4J enforces the SubjectConfirmation requirements of an inbound SAML Token. For sender-vouches, a Signature must be present that covers both the SOAP Body and the SAML Assertion. For holder-of-key, a Signature must be present that signs some part of the SOAP request using the key information contained in the SAML Subject. In WSS4J 2.0.0, a configuration tag is defined that allows the user to switch off this validation if required. The configuration tag is defined as ConfigurationConstants.VALIDATE_SAML_SUBJECT_CONFIRMATION ("validateSamlSubjectConfirmation"), which defaults to true.
This is first expanded to use Array.prototype.splice to ensure the update happens in place:

```
a.splice(0, a.length, *a.flatten())
```

Next, the method call is rewritten to use the underscore flatten function:

```
a.splice(0, a.length, *_.flatten(a))
```

Tim Bray will be pleased to hear that Ruby2js currently maps a.sort() to:

```
_.sortBy(a, _.identity)
```
Staggeringly inept. The UK national porn filter blocks based on a regexp match of the URL against /.*sex.*/i — the good old “Scunthorpe problem”. Better still, it returns a 404 response. This is also a good demonstration of how web filtering has unintended side effects, breaking third-party software updates with its false positives.

The update to online strategy game League of Legends was disrupted by the internet filter because the software attempted to access files that accidentally include the word “sex” in the middle of their file names. The block resulted in the update failing with “file not found” errors, which are usually created by missing files or broken updates on the part of the developers.
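The false positive is trivial to reproduce. A quick Python check of that reported rule against hypothetical update-file URLs (these file names are invented, not the actual League of Legends files):

```python
import re

# the reported filter rule: any URL containing "sex", case-insensitive
FILTER = re.compile(r".*sex.*", re.IGNORECASE)

# hypothetical update-file URLs; neither is adult content
urls = [
    "http://updates.example.com/files/Sussex_region_pack.dat",
    "http://updates.example.com/files/weapon_textures_v2.dat",
]
blocked = [u for u in urls if FILTER.match(u)]
print(blocked)  # the innocent "Sussex" file matches and gets blocked
```

Any substring match on "sex" catches Sussex, Essex, Middlesex, and so on, which is precisely the Scunthorpe problem.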
This article is frequently on target; this secrecy (both around open source and publishing papers) was one of the reasons I left Amazon.

Of the sources with whom we spoke, many indicated that Amazon’s lack of participation was a key reason for why people left the company – or never joined at all. This is why Amazon’s strategy of maintaining secrecy may derail the e-retailer’s future if it struggles to hire the best talent. [...] “In many cases in the big companies and all the small startups, your Github profile is your resume,” explained another former Amazonian. “When I look at developers that’s what I’m looking for, [but] they go to Amazon and that resume stops … It absolutely affects the quality of their hires.” “You had no portfolio you could share with the world,” said another insider on life after working at Amazon. “The argument this was necessary to attract talent and to retain talent completely fell on deaf ears.”
‘That address — which is home to some 2,000 companies on paper — was the subject of a lengthy 2011 Reuters investigation that found that among the entities registered to the address were a shell company controlled by a jailed former Ukraine prime minister; the owner of a company charged with helping online poker operators evade an Internet gambling ban; and one entity that was banned from government contracts after selling counterfeit truck parts to the Pentagon.’
A couple of weeks ago, I agreed to give a talk at Infrastructure.Next about the Ceilometer component of OpenStack. Immediately afterwards, I regretted this, simply because I'm not exactly an expert on Ceilometer. But I've often said that the best way to learn something is to teach it, and what better way to learn about Ceilometer than prepare a presentation about it.
I also find that the folks with the in-depth technical knowledge of a subject might not be the right ones to give intro talks, because they tend to get into the weeds before their audience can get a big picture.
And so I started on a quest to understand Ceilometer, get some basic reporting working, and put together a howto style presentation on reporting with Ceilometer.
It turns out that several things worked very strongly in my favor:
* Ceilometer is installed and enabled by default when you install RDO. So there was no difficulty in getting it installed and configured.
* The documentation has lots of examples in it, and the API works exactly as documented.
* My presentation is only a half hour, rather than the hour that I initially thought it was, so I ended up having to trim the content, rather than come up with additional examples.
Along the way, I got tired of trying to issue HTTP API requests from the command line and parse the responses. Being a Perl guy, I started to write some Perl code around this, and before I knew it, I had a full module to do all of the stuff that I wanted for my presentation.
It's up on Github at https://github.com/rbowen/NetOpenStackCeilometer and I expect I'll put it on CPAN eventually, once it stabilizes a little. In particular, the statistics() method lacks a lot of the capabilities of Ceilometer's statistics functionality, and does only what I needed for my talk. Also the interface is kind of icky.
I should note that there are some other OpenStack modules already on CPAN, and this one takes a very different approach. This is the main reason I haven't put this on CPAN yet. The other modules, by Naveed Massjouni, use Moose, and I have not yet used Moose for anything. I'm reluctant to put my stuff on CPAN while it uses such a different approach.
Patches welcome. I'd love to hear if you find this at all useful.
Come see me at Infrastructure.Next.
‘Lightweight performance tools’. Likwid stands for ‘Like I knew what I am doing’. This project contributes easy-to-use command line tools for Linux to support programmers in developing high-performance multi-threaded programs. It contains the following tools:

- likwid-topology: show the thread and cache topology
- likwid-perfctr: measure hardware performance counters on Intel and AMD processors
- likwid-features: show and toggle hardware prefetch control bits on Intel Core 2 processors
- likwid-pin: pin your threaded application without touching your code (supports pthreads, Intel OpenMP and gcc OpenMP)
- likwid-bench: benchmarking framework allowing rapid prototyping of threaded assembly kernels
- likwid-mpirun: script enabling simple and flexible pinning of MPI and MPI/threaded hybrid applications
- likwid-perfscope: frontend for likwid-perfctr timeline mode; allows live plotting of performance metrics
- likwid-powermeter: tool for accessing RAPL counters and querying Turbo mode steps on Intel processors
- likwid-memsweeper: tool to clean up ccNUMA memory domains

No kernel patching required. (via kellabyte)
It's now been 6 months since the new Bay Bridge went into operation, so it's time to take the old bridge down.
Wired has a nice survey article: The Dangerous Art of Tearing Down Bridges, Dams, and Aircraft Carriers.

The piers of the cantilever truss aren’t holding the bridge up. They’re holding it down. “This is like a highly strung bow,” says senior bridge engineer Brian Maroney. (A bow made of 50 million pounds of steel.) “You don’t want to just cut the bow because the thing will fly off in all directions.” So crews will first remove the pavement on the upper deck to lighten the bridge’s load and reduce the tension. Next they’ll isolate steel supports, jacking them out of tension until they can be cut without whipping apart. Then they’ll slowly release the jacks.
The SF Chronicle has another nice article: Demolition crews start chipping away at old Bay Bridge.

All told, demolition crews will remove 58,209 tons of steel and 245,470 tons of concrete that make up the 1.97-mile eastern span. The contractors will determine where the crushed and twisted remains of the bridge end up, Gordon said. Most will probably be either recycled or reused. Some pieces may be saved for a park planned at the eastern end of the bridge so people have something from the old span to remember.
Interestingly, even though this is a $300 million project (at least) and will take three years (at least), it is a lot harder to find up-to-date status information about the demolition project beyond high-level summary documents.
I guess people aren't as interested anymore. Just cut it down and get it out of there.
A number of older bridges across the San Francisco Bay have been removed in recent years, most notably the old Carquinez Straits bridge.
Soon, the East Span will disappear.
PaaS variations

The differences between PaaS solutions are best explained by this picture from the AWS FAQ about application management.
There is clearly a spectrum that goes from operational control to pure application deployment. We could argue that a true PaaS abstracts the operational details and that management of the underlying infrastructure should be totally hidden. That said, automation of virtual infrastructure deployment has reached such a sophisticated state that it blurs the line between IaaS and PaaS. Not surprisingly, AWS offers services that cover the entire spectrum.
Since I am more on the operations side, I tend to see a PaaS as an infrastructure automation framework. For instance, I look for tools to deploy a MongoDB cluster or a RiakCS cluster. I am not looking for an abstract platform that has MongoDB pre-installed and where I can turn a knob to increase the size of the cluster or manage my shards. An application person will prefer to look at something like Google App Engine and its open source version, AppScale. I will get back to all these differences in a later post on PaaS, but this article by @DavidLinthicum that just came out is a good read.
Support for CloudStack

What is interesting for the CloudStack community is to look at the support for CloudStack in all these different solutions, wherever they fall in the application management spectrum.
- Cloudify from GigaSpaces was all over Twitter about their support for OpenStack, and I was getting slightly bothered by the lack of CloudStack support. That's why it was great to see Uri Cohen in Amsterdam. He delivered a great talk and gave me a demo of Cloudify. I was very impressed by the slick UI, of course, but above all by the ability to provision complete application/infrastructure definitions on clouds. Underneath it uses Apache jclouds, so there was no reason it could not talk to CloudStack. Over Christmas Uri did a terrific job, and the CloudStack support is now tested and documented. It works not only with the commercial version from Citrix, CloudPlatform, but also with Apache CloudStack. And of course it works with my neighbors' cloud, exoscale.
- SlipStream is not widely known but worth a look. At CCC, @lemeb demoed a CloudStack driver. Since then, they have started offering a hosted version of their SlipStream cloud orchestration framework, which turns out to be hosted on exoscale's CloudStack cloud. SlipStream is more of a cloud broker than a PaaS, but it automates application deployment on multiple clouds, abstracting the various cloud APIs and offering application templates for deploying virtual infrastructure. Check it out.
- Cloudsoft's main application deployment engine is Brooklyn, which originated from Alex Heneveld's contribution to Apache Whirr that I wrote about a couple of times. But it can also use OpenShift for an additional PaaS layer. I will need to check with Alex how they are doing this, as I believe OpenShift uses LXC. Since CloudStack has LXC support, one ought to be able to use Brooklyn to deploy an LXC cluster on CloudStack and then use OpenShift to manage the deployed applications.
- A quick note on OpenShift. As far as I understand, it actually uses a static cluster; the scalability comes from the use of containers on the nodes. So technically you could create an OpenShift cluster in CloudStack, but I don't think we will see OpenShift talking directly to the CloudStack API to add nodes: OpenShift bypasses the IaaS APIs. Of course, I have not looked at it in a while and I may be wrong :)
- Talking about Vagrant as a PaaS is probably a bit far-fetched, but it fits the infrastructure deployment criterion and could be compared with AWS OpsWorks. Vagrant helps define reproducible machines so that devs and ops can actually work on the same base servers. But Vagrant, with its plugins, can also help with deployment on public clouds, and it can handle multiple machine definitions, so one can look at a Vagrantfile as a template definition for a virtual infrastructure deployment. As a matter of fact, there are many Vagrant boxes out there to deploy things like Apache Mesos clusters, MongoDB, and RiakCS clusters. It's not meant to manage that stack in production, but at a minimum it can help develop it. Vagrant has a CloudStack plugin, demoed by Hugo Correia from Klarna at CCC. Exoscale took the bait and created a set of 'exoboxes'. That's real gold for developers deploying on exoscale, and any CloudStack provider should follow suit.
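To make the "Vagrantfile as infrastructure template" idea concrete, here is a minimal sketch of a multi-machine Vagrantfile targeting the vagrant-cloudstack provider. The endpoint, the environment variable names, and all the UUIDs are placeholders for illustration, not values from any real cloud:

```ruby
# Hypothetical multi-machine Vagrantfile using the vagrant-cloudstack plugin.
# Every endpoint, key, and UUID below is a placeholder.
Vagrant.configure("2") do |config|
  config.vm.box = "cloudstack"

  # Define three identical nodes, e.g. for a small MongoDB replica set.
  %w(mongo1 mongo2 mongo3).each do |name|
    config.vm.define name do |node|
      node.vm.provider :cloudstack do |cs|
        cs.host                = "api.example-cloud.com"      # CloudStack API endpoint
        cs.api_key             = ENV["CLOUDSTACK_API_KEY"]    # credentials kept out of the file
        cs.secret_key          = ENV["CLOUDSTACK_SECRET_KEY"]
        cs.template_id         = "TEMPLATE-UUID"              # base image to deploy
        cs.service_offering_id = "OFFERING-UUID"              # instance size
        cs.zone_id             = "ZONE-UUID"                  # target zone
      end
    end
  end
end
```

Running `vagrant up --provider=cloudstack` against a file like this would then ask the cloud for all three machines, which is exactly the "template for a virtual infrastructure deployment" use case described above.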
- Which brings me to Docker. There is currently no support for Docker in CloudStack, but since we do have LXC support it would not be too hard to run a 'docker' cluster in CloudStack. You could even install Docker within an image and deploy that on KVM or Xen, though some would argue that using containers within VMs defeats the purpose. In any case, with the Docker remote API you could then manage your containers. OpenStack already has a Docker integration; we will dig deeper into Docker's functionality to see how best to integrate it with CloudStack.
- AWS, as I mentioned, has several PaaS-like layers with OpsWorks, CloudFormation, and Beanstalk. CloudStack has an EC2 interface, and there is also a third-party solution to enable CloudFormation support. It is still under development but pretty close to full functionality; check out stackmate and its web interface stacktician. With a CloudFormation interface to CloudStack, we could see OpsWorks and Beanstalk interfaces coming in the future.
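For readers who have not used CloudFormation: a template is a declarative JSON document describing the resources of a stack, which a compatible engine such as stackmate would have to translate into CloudStack API calls. A minimal sketch, with a placeholder image ID, looks like this:

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal single-instance stack (illustrative placeholders only)",
  "Resources" : {
    "WebServer" : {
      "Type" : "AWS::EC2::Instance",
      "Properties" : {
        "ImageId" : "ami-PLACEHOLDER",
        "InstanceType" : "m1.small"
      }
    }
  }
}
```

The value of the format is that the same declarative description can drive stack creation, update, and teardown, which is what makes a CloudFormation-compatible interface to CloudStack so attractive.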
- Finally, not present at CCC, but the enterprise PaaS leader: Cloud Foundry. I am going to see Andy Piper (@andypiper) in London next week and will make sure to talk to him about the recent CloudStack support that was merged in the cloudfoundry community repo. It came from folks in Japan and I have not had time to test it. Certainly we as a community should look at this very closely to make sure there is outstanding support for Cloud Foundry in ACS.
I’ve long been a partial adherent to David Allen’s Getting Things Done. I’m not nearly process-oriented enough to follow the entire system, but I do try to offload my tasks out of my head into list form and keep things up to date. Since I’m joining the team behind Wunderlist, I started using it as my personal task keeper. It doesn’t enforce a strict GTD approach, but it is lightweight, fast, and syncs between devices on different operating systems.
The only thing that vexed me about using it as a full-time task manager was that resetting due dates during a review cycle was a bit of a pain. Then Benedikt Lehnert clued me in to the fact that in the desktop client I could right-click a task and quickly reset its due date to something reasonable: today, tomorrow, or even an outright removal if a task suddenly became fuzzy. Perfect!