Two weeks ago, I had the chance to go to the inaugural O’Reilly Solid conference. Solid was billed as the Software / Hardware / Everywhere conference. The “Internet of Things” is a big buzzword right now, and I am glad that O’Reilly decided not to do an “Internet of Things” conference. There were plenty of “Internet of Things” exhibits and talks, to be sure, but the theme of the conference was bigger than that, and I think rightly so.
There is a lot going on in the hardware space, and Renee DiResta from O’Reilly Alphatech Ventures gave a good talk on hardware trends. The 2014 version of Mary Meeker’s Internet Trends report just came out, and I wonder if DiResta’s hardware trends will someday rise to similar stature. One of my favorite quotes from her talk: “software is eating the world, hardware gives it teeth”.
Changing the dynamics of working with hardware
The most important thing that is happening in hardware is that there is a lot of energy going into making it easy to design and fabricate physical things. This is true for electronic hardware but also for any product that has a physical embodiment.
Nadya Peek from the MIT Center for Bits and Atoms gave a talk on machine tools that can build machine tools. This is important because building and setting up machine tools is one of the critical path tasks in manufacturing physical objects. These tools could help cut the cycle time and cost for tooling. Also, in the spirit of the era, the designs for her machines are open source at http://mtm.cba.mit.edu/
Carl Bass, the CEO of Autodesk, talked about how Autodesk took a step back and reimagined AutoCAD for the modern world, against three principles:
- “Infinite computing” – treat computing power as if it were super cheap
- “Cloud based” – for collaboration and for delivery – you can get Autodesk Fusion 360 for $25/mo
- “Computational design” – put computing power to work to do parallel search / exploration of designs
The bottom line here is that anyone with $25/mo can have access to state-of-the-art 3D CAD/CAM tools, based on the AutoCAD platform used by designers and engineers everywhere.
Microsoft did a live coding demo where they used an Intel Quark board to build a sound pressure meter. This was coupled with some very nice integration with Visual Studio. I’m not a big Microsoft/Intel fan for this space, but I think that this showed the potential for improving the toolchain and round-trip experience for platforms like Arduino.
Julie bought our 11-year-old a set of Littlebits as a way to give her some knowledge of electronics. She is a very hands-on kind of learner, and Littlebits seemed like a good vehicle for her. I didn’t do much homework on this because Julie did most of the research, so my impression was that Littlebits was aimed mostly at the kids’ education space. Some of their marketing, like “Lego for the iPad generation”, also gives that impression. Ayah Bdeir, the CEO of Littlebits, gave a great talk on Littlebits and the vision of where it is going. The notion of making electronics very accessible, to the point where it can be viewed as just another material in someone’s creative toolbox, really resonated with the theme of Solid, and with our own angle on Littlebits. Littlebits is a great modular hardware platform that makes it easy to prototype electronics very rapidly, and it’s the first step in a longer journey of “materializing” electronics. It was a plus to be able to stop by the Littlebits booth and thank Ayah for Littlebits, and for the fact that I had to explain PWM to my 11-year-old.
There were several talks which stood out because they painted a good picture of some things which are further out, but which appear to be obtainable given consistent application of product improvement and sufficient resources.
Neil Gershenfeld, the director of the MIT Center for Bits and Atoms, talked about fabrication as a digital process. The key idea is that at the moment all the knowledge about fabrication is in the machinery doing the fabrication, not in the material being fabricated. He talked about what becomes possible when physical fabrication becomes more akin to programming.
Hiroshi Ishii from the MIT Tangible Media group talked about Tangible Bits (user interfaces based on direct manipulation of physical embodiments of bits) and Radical Atoms (materials that can change form based on digital information).
Ivan Poupyrev is now at Google but was previously part of Disney Research. He demonstrated several inventions from his time at Disney. Touché can turn lots of ordinary surfaces into touch-sensitive input devices. He demonstrated a way to generate minute amounts of power locally without batteries, by rubbing pieces of paper together (as in turning the pages of a book). His final demo, Aireal, is a way of using puffs of air to create invisible haptic displays. The overall theme for all of these inventions was “How can we make the whole world interactive?”
Beth Comstock is the Chief Marketing Officer at GE. She talked about the fact that GE makes a huge range of machines of all kinds, and has experience with all kinds of materials and electronics. GE is looking ahead to taking those machines and enhancing them and the data that they produce via software. GE is a physical first company that is in the process of becoming digital, and leading us into a Brilliant Age. In this age:
- We’re going to have to learn to speak industrial machine – we need to be able to deal with the immense amount of data generated by sophisticated machines
- The Selfish Machine – machines will use data to introspect about their own performance and operation and alter their behavior, ask humans for help, or provide input for the design of the next generation
- The Selfless Machine – machines will be able to exchange data with each other to coordinate with other machines
- Machine Knock Down that Wall – machines will impact the process of making machines – rapid iteration of hardware design, open innovation processes
In some ways, robotics is one of the culminations of the software / hardware / everywhere mantra. There were lots of talks and exhibits on robotics at Solid. Here are two that stood out.
Rod Brooks is the founder of Rethink Robotics and a longtime professor of robotics at MIT. I did some robotics as a grad student, and Rod’s talk was a great way to get a small glimpse of the current state of the art. His company is doing some really interesting work in making industrial robots more flexible and easier to work with. One of the interesting points in his talk was that they are now working on ways to build robotic arms out of cheaper materials/hardware by leveraging more sophisticated software to compensate for the flaws of cheaper materials. At the end of his talk, he outlined four challenges for robotics:
- Visual object recognition on a par with a 2 year old
- The language capabilities (not vocabulary) of a 4 year old – noisy environments, grammar
- Dexterity of a 6 year old
- The social understanding (model of the world) of an 8 year old.
The other talk was Carin Meier’s robotic dance party. This one was purely selfish. My 13 year old is interested in space/robotics, so when Carin did a version of this keynote at OSCON, I showed my daughter the video. She ended up doing a school project on drones. As part of the way her school does projects, she needed to get an “expert”, so I gave her Carin’s email. That eventually resulted in some tweets. For Solid, Carin expanded the scope of the dancing and added some nice music control via Clojure’s overtone library. It was fun to find her afterwards and thank her in person for helping / inspiring my daughter.
The most motivational thing about Solid was a mix of ideas from Astro Teller’s talk about GoogleX, and a quote from Jeff Hammerbacher. Teller said (my paraphrase) “most of the world’s real problems are physical in nature, you can’t just solve them with software”. Hammerbacher is responsible for a quote that went around the internet back in 2011 that goes something like: “The best minds of my generation are thinking about how to make people click ads”. A lot of bright people have gone into software because the malleability of software and the internet means that you can make something and see the impact of it in a very short time. I’m excited to see progress in making hardware development more accessible and rapid. Perhaps that will lead to more bright minds finding ways to solve important physical world problems.
In a slight conflation of names, Driessen's workflow model has been known by the name of a toolset that he also contributed, which helps implement the model: git-flow.
In the years since, there have been a wealth of variations, elaborations, and alternatives tossed into the ring. Reading them is a fascinating way to keep up with the ongoing debate about how teams work, and about how their tools can help them work.
- A successful Git branching model: We consider origin/master to be the main branch where the source code of HEAD always reflects a production-ready state.
We consider origin/develop to be the main branch where the source code of HEAD always reflects a state with the latest delivered development changes for the next release. Some would call this the “integration branch”. This is where any automatic nightly builds are built from.
When the source code in the develop branch reaches a stable point and is ready to be released, all of the changes should be merged back into master somehow and then tagged with a release number. How this is done in detail will be discussed further on.
Therefore, each time when changes are merged back into master, this is a new production release by definition. We tend to be very strict at this, so that theoretically, we could use a Git hook script to automatically build and roll out our software to our production servers every time there was a commit on master.
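The release step described above can be sketched with plain git commands (a toy throwaway repo; branch and tag names follow Driessen's model):

```shell
# A minimal sketch of the git-flow release step: work is integrated on
# develop, then merged back into master and tagged with a release number.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name Dev
git checkout -q -b master
git commit -q --allow-empty -m "initial"            # master: always production-ready
git checkout -q -b develop                          # the integration branch
git commit -q --allow-empty -m "merged feature work"
git checkout -q master
git merge -q --no-ff develop -m "Release 1.0.0"     # merge develop back into master...
git tag -a 1.0.0 -m "Release 1.0.0"                 # ...and tag with a release number
git tag --list
```

A post-commit hook on master could then trigger the automated build-and-roll-out that the post describes.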
- Why Aren't You Using git-flow? I’m astounded that some people have never heard of it before, so in this article I’ll try to tell you why it can make you happy and cheerful all day.
- git-flow Cheatsheet: Git-flow is a merge based solution. It doesn't rebase feature branches.
- Issues with git-flow: At GitHub, we do not use git-flow. We use, and always have used, a much simpler Git workflow.
Its simplicity gives it a number of advantages. One is that it’s easy for people to understand, which means they can pick it up quickly and they rarely if ever mess it up or have to undo steps they did wrong. Another is that we don’t need a wrapper script to help enforce it or follow it, so using GUIs and such are not a problem.
- On DVCS, continuous integration, and feature branches: The larger point I’m trying to make is this. One of the most important practices that enables early and continuous delivery of valuable software is making sure that your system is always working. The best way for developers to contribute to this goal is by ensuring they minimize the risk that any given change they make to the system will break it. This is achieved by keeping changes small, continuously integrating them into mainline, and making sure there is a comprehensive suite of automated tests to verify that changes behave as expected and don’t introduce any regressions.
- My Current Java Workflow: I then set up a Jenkins job called “module-example snapshot”. This checks out any pushes to the develop branch, runs the Gradle build task on it (which runs tests and produces artifacts on successful test passes) and then pushes a snapshot release to our in-house Artifactory server. This means any push to develop will trigger a build that releases a snapshot jar of that module that others could use for their development.
- GitFlow and Continuous Integration: So, what does one do with this information? Is use of GitFlow or promiscuous integration a bad idea? I think that it can work very well for some teams and could be very dangerous in others. In general, I like it when the VCS stays out of the way and the team gets in the habit of pushing changes and looking to the CI server to validate that everything is ok. Introducing promiscuous integration could interrupt this cycle and allow code changes to circumvent the mainline longer than they should. This branching scheme feels complex, even with the addition of GitFlow.
- Another Git branching model: But cheap merging is not enough, you also need to be able to easily pick what to merge. And with Git Flow it’s not easy to remove a feature from a release branch once it’s there. Because a feature branch is started from develop it is bound by its parents commits to other features not yet in production. As a result, if you merge a feature without rebasing you always get more commits than wanted.
- Branch-per-Feature: Most of this way of working started from the excellent post called “A Successful Git Branching Model”. The important addition to this process is the idea that you start all features in an iteration from a common point. This would be what you released for the last one. This drives home the granular, atomic, flexible nature that features must exhibit for us to deliver to business in the most effective way. Git flow allows commits to be done on dev branches. This workflow does not allow that.
- git bugfix branches: choose the root wisely. The solution I like best involves first finding the commit that introduced the bug, and then branching from there. A command that is invaluable for this is git blame. It is so useful, I would recommend learning it right after commit, checkout, and merge. The resulting repository now looks like Figure 3, where we have found the source of the bug deep inside the development history.
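A toy repro of that approach: use git blame to identify the commit that introduced a bug, then root the bugfix branch at that commit (file contents and commit messages here are made up):

```shell
# Build a tiny history, then branch a bugfix from the bug's origin commit.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name Dev
printf 'good line\n' > app.txt
git add app.txt && git commit -q -m "initial"
printf 'good line\nbuggy line\n' > app.txt
git commit -q -am "introduce bug"
git commit -q --allow-empty -m "later work"
# blame reports which commit last touched line 2 -- the bug's origin
bad=$(git blame -L2,2 --porcelain app.txt | head -n1 | cut -d' ' -f1)
git checkout -q -b bugfix "$bad"    # branch from deep inside the history
git log -1 --format=%s
```

From there the fix can be merged forward into any branch that contains the buggy commit.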
- Some thoughts on continuous integration and branching management with git: Given a stable/production branch P, and a set of feature branches, say, FB1, FB2 and FB3, I want a system that:
- Combines (merge) P with every branch and tests it, say P + FB1, P + FB2, P + FB3.
- Select the successful branches and try to merge them together. Say FB2 failed, so we would try to build and test P + FB1 + FB3 and make it a release candidate.
- Should a conflict appear, notify the developers so they can fix it.
- The conflict resolution is saved so the conflict does not have to be resolved again.
- The process is repeated continuously.
- A (Simpler) Successful Git Branching Model: At my work, we have been using a Git branching strategy based on Vincent Driessen’s successful Git branching model. Overall, the strategy that Vincent proposes is very good and may work perfectly out of the box for many cases. However, since starting to use it I have noticed a few problems as time goes on.
- Two Git Branching Models: In current projects, we tend to float between two branching models depending on the requirements of the customer / project and the planned deployment process.
- Git Branching Model: A workflow for contributions is usually based on topic branches. Instead of committing to the particular version branch directly, a separate branch is made for a particular feature or bugfix where that change can be developed in isolation. When ready, that topic branch is then merged into the version branch.
- What is Your Branching Model? Perforce from the middle 90’s and Subversion from 2001 promoted a trunk model, although neither preclude other branching models. Google have the world's biggest Trunk-Based-Development setup, although some teams there are going to say they are closer to Continuous Deployment (below). Facebook are here too.
- Git Tutorials: Git Workflows. The array of possible workflows can make it hard to know where to begin when implementing Git in the workplace. This page provides a starting point by surveying the most common Git workflows for enterprise teams.
As you read through, remember that these workflows are designed to be guidelines rather than concrete rules. We want to show you what’s possible, so you can mix and match aspects from different workflows to suit your individual needs.
- Git Branching - Branching Workflows: Now that you have the basics of branching and merging down, what can or should you do with them? In this section, we’ll cover some common workflows that this lightweight branching makes possible, so you can decide if you would like to incorporate it into your own development cycle.
I was interested to stumble across the web document The Design Of SQLite4: SQLite4 is an alternative, not a replacement, for SQLite3. SQLite3 is not going away. SQLite3 and SQLite4 will be supported in parallel. The SQLite3 legacy will not be abandoned. SQLite3 will continue to be maintained and improved. But designers of new systems will now have the option to select SQLite4 instead of SQLite3 if desired.
SQLite4 strives to keep the best features of SQLite3 while addressing issues with SQLite3 that can not be fixed without breaking compatibility.
It surprised me to learn that "internally, SQLite3 simply treats that PRIMARY KEY as a UNIQUE constraint. The actual key used for storage in SQLite is the rowid associated with each row." I think it is good that SQLite4 will amend that choice, and treat PRIMARY KEY more as a DBA would expect it to behave.
I like the fact that, overall, SQLite is continuing to move toward a more standard and "correct" implementation, by doing things like requiring that PRIMARY KEY columns be non-null, and turning foreign key constraints on by default.
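That rowid quirk is easy to demonstrate from Python's built-in sqlite3 module (a minimal sketch; table and column names are made up):

```python
# Because SQLite3 treats PRIMARY KEY as just a UNIQUE constraint (the real
# key is the hidden rowid), a non-INTEGER primary key column accepts NULL.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (name TEXT PRIMARY KEY)")
con.execute("INSERT INTO t VALUES (NULL)")  # standard SQL would reject this
print(con.execute("SELECT name, rowid FROM t").fetchall())
```

The insert succeeds and the row is keyed by its rowid, which is exactly the behavior the SQLite4 design document proposes to tighten up.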
Overall, it looks like SQLite is heading in a good direction and I'm pleased to hear that.
The overall goals of SQLite are very similar to Derby, which I am considerably more familiar with.
The Derby community, too, continues to remain active. Here's the plan for the next major Derby release, which will be coming out this summer: 10.11.1 Release Summary:
- MERGE statement
- Deferrable constraints
- WHEN clause in CREATE TRIGGER
- Rolling log file
- Experimental Lucene support
- Simple case expression
- New SYSCS_UTIL.SYSCS_PEEK_AT_IDENTITY function
- Use sequence generators to implement identity columns
- add HoldForConnection ij command to match NoHoldForConnection
Although some of those features are pretty small, a few of them are large, dramatic steps forward (MERGE, CREATE TRIGGER WHEN, deferrable constraints, the Lucene integration).
In my own professional and personal life, I haven't been spending as much time with Derby recently. I no longer write code in Java for 50 hours every week, so it's hard for me to find either time or excuses to be intimately involved with Derby.
However, I try to follow along as best I can, monitoring the email lists, spending time in the Derby communities in places like Stack Overflow, and generally keeping in contact with that team, because there's a superb community of brilliant engineers working on Derby, and I don't want to lose touch with them.
So: way to go, SQLite, and way to go: Derby!
Mac OS X Server has an integrated http server. The configuration files for the http server are now under /Library/Server/Web/Config/apache2
During the upgrade the configuration files are transformed.
In my case, the loading of the .so files for Subversion in the httpd_server_app.conf file was removed, a special site file svn.conf was disabled by being renamed to svn.conf.SyntaxError, and ProxyPass directives present in several of my site files were removed.
Maybe I should communicate these problems upstream to Apple ... or stop using Mavericks' built-in web server and set up my own.
Oh, and on top of that, my Mail services were not working any more; I was not able to receive outside mail. It looks like I was receiving outside mail on port 465, and this no longer works, so I have had to open port 25.
Of course, record-keeping, in some sense, began quite a bit more than "nearly a century ago".
From William Brewer, May 30, 1864 (East of Pacheco Pass), via Tom Hilton's marvelous web-zine: "All around the house it looks desolate. Where there were green pastures when we camped here two years ago, now all is dry, dusty, bare ground. Three hundred cattle have died by the miserable water hole back of the house, where we get water to drink, and their stench pollutes the air."
Drought in California is nothing new, as Brewer's journals document.
But weather is a funny thing, as Cliff Mass notes: Here is a notice released by the Seattle National Weather Service office:
.CLIMATE...THE RAINFALL TOTAL AT SEATTLE-TACOMA AIRPORT WAS 0.22 INCHES SUNDAY. THIS MAKES THE RAINFALL TOTAL SINCE FEBRUARY 1ST 22.87 INCHES. THIS BREAKS THE RECORD FOR THE WETTEST FEBRUARY THROUGH JULY IN SEATTLE. THE OLD RECORD WAS 22.81 INCHES SET IN 1972. FELTON/MCDONNAL
You knew this was a wet late winter/spring, particularly mid-February through mid-March. But to beat the Feb-July record in MAY is really notable.
In two months, I'm hoping to take my annual backpacking trip. We're planning to visit a lake at 10,800 feet.
I've given up on hoping that there will be snow at that altitude, though in normal years a late July visit might find 3 feet of snow there.
I am, still, hoping that there will be a lake there.
And that there won't be a repeat of last year's fire season.
So I'm going through my backpacking gear, getting it in order, being optimistic.
an open letter from Cory Doctorow to teen readers re privacy. ‘The problem with being a “digital native” is that it transforms all of your screw-ups into revealed deep truths about how humans are supposed to use the Internet. So if you make mistakes with your Internet privacy, not only do the companies who set the stage for those mistakes (and profited from them) get off Scot-free, but everyone else who raises privacy concerns is dismissed out of hand. After all, if the “digital natives” supposedly don’t care about their privacy, then anyone who does is a laughable, dinosauric idiot, who isn’t Down With the Kids.’
Interesting approach. Potentially risky, though — heavy use of anycast on a large-scale datacenter network could increase the scale of the OSPF graph, which scales exponentially. This can have major side effects on OSPF reconvergence time, which creates an interesting class of network outage in the event of OSPF flapping. Having said that, an active/passive failover LB pair will already announce a single anycast virtual IP anyway, so, assuming there are a similar number of anycast IPs in the end, it may not have any negative side effects. There’s also the inherent limitation noted in the second-to-last paragraph; ‘It comes down to what your hardware router can handle for ECMP. I know a Juniper MX240 can handle 16 next-hops, and have heard rumors that a software update will bump this to 64, but again this is something to keep in mind’. Taking a leaf from the LB design, and using BGP to load-balance across a smaller set of haproxy instances, would seem like a good approach to scale up.
this is great. lovely, silly, HTML5 dataviz, with lots of spinning globes and wobbling sines on a black background
good docs from Riak
Digging into broken Bitcoin scripts in the blockchain. Fascinating: While analyzing coinbase transactions, I came across another interesting bug that lost bitcoins. Some transactions have the meaningless and unredeemable script: OP_IFDUP OP_IF OP_2SWAP OP_VERIFY OP_2OVER OP_DEPTH. That script turns out to be the ASCII text "script". Instead of putting the redemption script into the transaction, the P2Pool miners accidentally put in the literal word "script". The associated bitcoins are lost forever due to this error. (via Nelson)
aka. lock acquisition. ex-Amazon-Dublin lingo, observed in the wild ;)
The ASF recently held its Annual Members' Meeting, where all Members of the Foundation cast ballots in the annual election for the Board. We are lucky to have had a number of excellent candidates for the board, as always.
The new board comprises:
- Rich Bowen
- Doug Cutting
- Bertrand Delacretaz
- Ross Gardler
- Jim Jagielski
- Chris Mattmann
- Brett Porter (chairman)
- Sam Ruby
- Greg Stein
I also keep a graphical history of the ASF board.
As the ASF grows in projects, communities, and Members, we're looking forward to continuing to support our now 151 top-level Apache projects!
I recently received an email from a former co-worker. She was curious to know what I read/do to know what is "trending" in the software world. I think this is good knowledge to share, and I'm also interested in what others do to keep up. Here's my response to her:
My technique for staying up-to-date is mostly reading, and attending some user group meetings. For reading, I read news.ycombinator.com, as well as infoq.com - which I now write for. DZone.com (esp. Javalobby and its HTML5 Zone) is also pretty good, as is arstechnica.com. I don't read nearly as much as I used to when I was subscribed to all of their RSS feeds and read them religiously.
Nowadays, most of my information comes from Twitter. I follow people that are involved in technologies I'm interested in. I try to keep the number of people I follow to 50 as I don't want to spend too much time reading tweets.
For meetups, most are on meetup.com these days. I'd find a couple that have technologies you're interested in (e.g. a local HTML5 meetup or Java user group) and join the group. You'll get email notifications when they have meetings.
Other than that, sometimes I do "conference driven learning". I'll pick a few technologies I'm interested in learning, submit a talk to a conference or user group, then be forced to learn and present on them when it gets accepted. It can be stressful, but it works and usually results in a good presentation because I can share the experience of learning.
One interesting thing I've realized about Twitter is I can make technologies seem "hot" based on the people I follow. If I'm following a bunch of AngularJS folks, my feed is filled with Angular-related tweets and it seems like the hottest technology ever. If I tweak who I follow to have a bunch of Groovy enthusiasts, or Scala folks, the same thing happens.
Of course, the best way to learn new technologies is to use them in your daily job. I strive to do this with my clients, but it doesn't always work out. I've found that working on open source projects and speaking at conferences can help you learn if you're in a stagnant environment. Then again, if you're not happy at work, quit.
What do you do to stay on top of emerging trends in technology?
The new projects are as follows:
- cxf-x509: This shows how to use X.509 tokens for authentication and authorization. The service has a TransportBinding policy with an EndorsingSupportingToken X509Token policy. The roles of the authenticated client are mocked by a WSS4J Validator for this demo, but could be retrieved from (e.g.) an LDAP backend in a real-world demo.
- cxf-sts: The service in this demo has a TransportBinding policy with an EndorsingSupportingToken IssuedToken policy, requiring a SAML 2.0 token in a client request. The client obtains a SAML token from the CXF SecurityTokenService (STS) and includes it in the service request (also signing the request using the private key which corresponds to the certificate in the SAML token). An Authorization test is also available which uses Claims in the policy to tell the STS to add the roles of the client in the SAML token, which are then used for RBAC on the service side.
- cxf-sts-xacml: Similar to the cxf-sts demo, this testcase requires a SAML 2.0 token from the STS with the roles of the client embedded in the token. The service is then configured to create a XACML request and dispatch it to a Policy Decision Point (PDP) for authorization. The service endpoint then enforces the authorization decision of the PDP. This demo ships with a mocked PDP implementation. For an enterprise-grade PDP which works with CXF, please see Talend ESB.
- cxf-kerberos: The service in this demo requires a Kerberos token over TLS. A Kerberos KDC is started as part of the demo, and a CXF JAX-WS client obtains a token and sends it across to the service for authentication. Spnego is also demonstrated as part of this test-case.
"how many people will be having showers?"
"oh, three of us"
"OK, here are three vouchers for hot water. Keep them handy as you'll need to retype them at random points in the day"
"thank you. Is the login screen in a random EU language and in a font that looks really tiny when I try to enter it, with a random set of characters that are near impossible to type reliably on an on-screen keyboard especially as the UI immediately converts them to * symbols out of a misguided fear that someone will be looking over my shoulder trying to steal some shower-time?"
"Why, yes -how very perceptive of you. Oh, one more thing -hot water quotas"
"hot water quotas?"
"yes, every voucher is good for 100 Litres of water/day. If you go over that rate then you will be billed at 20 c/Litre."
"That's a lot!"
"Yes, we recommend you only have quick showers. But don't worry, the flow rate of the shower is very low on this hot water scheme, so you can still have three minutes worth of showering without having to worry"
"'this' hot water scheme?"
"yes -you can buy a premium-hot-water-upgrade that not only gives you 500L/day, it doubles the flow rate of the shower."
"oh, I think I will just go to the cafe round the corner -they have free hot water without any need for a login"
"if that is what you want. Is there anything else?"
"Yes, where is my room?"
"It's on the 17th floor -the stairs are over there. With your luggage you could get everything up in two goes -it will only take about fifteen minutes"
"17 floors! Fifteen Minutes! Don't you have a lift?"
"Ah -do you mean our premium automated floor-transport service? Why yes, we do have one. It won't even add much to your bill. Would you like to buy a login? First -how many people will plan on using the lift every day -and how many times?"
What a headline. interesting story to boot (via Eoin)
Rather horrific update from the trenches of Mozilla
Lots and lots of good book recommendations, a little US-centric though
Had a great 3 day weekend, with all my daughters in town; we took my granddaughter on a nice hike in Roy's Redwoods Preserve in Marin County.
- 1200 Feet Long, Loaded, Under Tow: The vessel used for this exercise was CMA CGM’s Centaurus, an 11400 TEU container ship measuring 365 meters, or approximately 1,200 feet.
The purpose of the towing demonstration was to test the capability of existing tug assets within San Francisco Bay to connect to and tow an ultra-large container vessel.
- A Short On How The Wayback Machine Stores More Pages Than Stars In The Milky Way: Playback is accomplished by binary searching a 2-level index of pointers into the WARC data. The second level of this index is a 20TB compressed sorted list of (url, date, pointer) tuples called CDX records. The first level fits in core, and is a 13GB sorted list of every 3000th entry in the CDX index, with a pointer to the larger CDX block.
Index lookup works by binary searching the first level list stored in core, then HTTP range-request loading the appropriate second-level blocks from the CDX index. Finally, web page data is loaded by range-requesting WARC data pointed to by the CDX records. Before final output, link re-writing and other transforms are applied to make playback work correctly in the browser.
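A toy Python sketch of that two-level lookup, with a small in-memory list standing in for the CDX data, a tiny sampling interval in place of "every 3000th", and a list slice standing in for the HTTP range request (all scale and names assumed):

```python
import bisect

N = 3  # every Nth CDX entry goes into the in-core first level

# (url, date, pointer) tuples, sorted -- a stand-in for the CDX index
cdx = sorted((f"url{i:03d}", f"2014{i:02d}", i * 100) for i in range(12))
first_level = [cdx[i][0] for i in range(0, len(cdx), N)]

def lookup(url):
    # Binary-search the in-core first level to pick the right block...
    block_idx = bisect.bisect_right(first_level, url) - 1
    block = cdx[block_idx * N:(block_idx + 1) * N]  # "range request" for the block
    # ...then binary-search within the block for the exact record.
    keys = [rec[0] for rec in block]
    i = bisect.bisect_left(keys, url)
    return block[i] if i < len(block) and keys[i] == url else None

print(lookup("url007"))  # -> ('url007', '201407', 700)
```

The real system would then range-request the WARC data at the returned pointer and apply the link rewriting before playback.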
- How the Neighborhoods of Manhattan Got Their Names: For an island of only 24 square miles, Manhattan sure has a lot of neighborhoods. Many have distinct monikers that might not seem intuitive to the lay-tourist, or even to a lifelong New Yorker. Here's where the names of New York's most famous 'hoods came from.
- Slightly More Than 100 Fantastic Pieces of Journalism, by Conor Friedersdorf
- Cisco Goes Straight To The President To Complain About The NSA Intercepting Its Hardware: Chambers goes even further than Cisco's counsel, decrying the NSA's tactics and the damage they're doing to his company's reputation.
“We simply cannot operate this way; our customers trust us to be able to deliver to their doorsteps products that meet the highest standards of integrity and security,” Chambers wrote. “We understand the real and significant threats that exist in this world, but we must also respect the industry’s relationship of trust with our customers.”
- Cisco's chickens come home to roost
I wanted to point out that there's a difference between whining about how your government does something, and building a secure ecosystem.
- Queueing Mechanisms in Modern Switches
Cell-based fabrics solve this problem by slicing the packets into smaller cells (reinventing ATM), and interleaving cells from multiple packets on a single path across the fabric.
- Troubleshooting Riverbed Steelhead WAN Optimizers
A group of Riverbed TAC engineers has written an internal troubleshooting document to kick-start new TAC engineers. It describes the design of the Steelhead appliance, how the optimization service works and how optimized TCP sessions are set up, installation and operation issues, various latency-optimization issues, how to use the CLI tools for troubleshooting, and how to deal with the contents of the system dump.
- Microsoft’s Most Clever Critic Is Now Building Its New Empire
When Allchin offered him the job, Russinovich didn’t take it. But after several more years spent running his Sysinternals site – where he published a steady stream of exposés that, in his words, “pissed off” Microsoft and other tech outfits – he did join the software giant. The company made him a Microsoft Technical Fellow – one of the highest honors it can bestow – and today, he’s one of the principal architects of Microsoft Azure, the cloud computing service that’s leading the company’s push into the modern world.
Did you know that it’s perfectly fine to enjoy programming in both static and dynamic languages?
— Honza Pokorny (@_honza) May 26, 2014
What I have learnt, however, is that mixing static and dynamic languages too closely is a recipe for pain and frustration. LMAX develops the main exchange in Java and recently started using Spock for unit tests. Spock is amazing in many, many ways, but it comes with the Groovy programming language, which is dynamically typed. There isn’t, or perhaps shouldn’t be, any tighter coupling than between a class and its unit test, so the combination of static and dynamic typing in this case is extremely frustrating.
It turns out that the way I work with languages differs depending on the type system – or rather on the tools available, which are largely shaped by the type system. In Java I rename methods without a second thought, safe in the knowledge that the IDE will find all references and rename them. Similarly, I can add a parameter to an API and then let the compiler find all the places I need to update. Once there are unit tests in Groovy, however, neither of those options is completely safe (and using the compiler to find errors is a complete non-starter).
With static and dynamic typing too closely mixed, the expectations and approaches to development become muddled and it winds up being a worst-of-both-worlds approach. I can’t count on the compiler helping me out anymore but I still have to spend time making it happy.
A nice visualisation of Single-Transferable-Vote proportional representation in action
I am following the instructions here: http://www.emacswiki.org/emacs/TrampMod
Since I have Emacs 24.x, TRAMP works easily, out of the box.
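For example, once TRAMP is working, opening a remote file is just a normal find-file with a TRAMP-style path (the user, host, and file name here are placeholders):

```
C-x C-f /ssh:user@remotehost:/path/to/file RET
```

Emacs 24 bundles TRAMP, so the ssh method is available without extra configuration.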
I have recently upgraded my Mac Mini to Mavericks.
Screen sharing from an iMac, also running Mavericks, is bad: when my mouse pointer is over the screen sharing window, it moves slowly by itself, making it difficult to click on the proper window or control.
I found this article, which gives a potential workaround: http://www.tristanbettany.com/2013/06/17/slow-screen-sharing-in-osx-vnc-performance-issues/
In the end I am using JollysFastVNC, which works great.
Writing automated tests to prove software works correctly is now well established, and relying solely, or even primarily, on manual testing is considered a “very bad sign”. A comprehensive automated test suite gives us a great deal of confidence that if we break something, we’ll find out before it hits production.
Despite that, automated tests shouldn’t be our first line of defence against things going wrong. Sure, they’re powerful, but all they can do is point out that something is broken; they can’t do anything to prevent it being broken in the first place.
So when we write tests, we should be asking ourselves: can I prevent this problem from happening in the first place? Is there a different design that makes it impossible for this problem to happen?
For example, checkstyle has support for import control, allowing you to write assertions about what different packages can depend on. So package A can use package B but package C can’t. If you’re concerned about package structure it makes a fair bit of sense. Except that it’s a form of testing and the feedback comes late in the cycle. Much better would be to split the code into separate source trees so that the restrictions are made explicit to the compiler and IDE. That way autocomplete won’t offer suggestions from forbidden packages and the code won’t compile if you use them. It is therefore much harder to do the wrong thing and feedback comes much sooner.
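For reference, a checkstyle import-control file expressing that "A may use B, C may not" rule looks roughly like this; the package names are invented, and the DTD version should match your checkstyle release:

```xml
<!DOCTYPE import-control PUBLIC
    "-//Puppy Crawl//DTD Import Control 1.1//EN"
    "http://www.puppycrawl.com/dtds/import_control_1_1.dtd">
<import-control pkg="com.example">
  <subpackage name="a">
    <!-- package A may depend on package B -->
    <allow pkg="com.example.b"/>
  </subpackage>
  <subpackage name="c">
    <!-- package C may not -->
    <disallow pkg="com.example.b"/>
  </subpackage>
</import-control>
```

It works, but a violation only surfaces when the check runs; splitting the source trees surfaces it in the IDE as you type.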
Each time we write an automated test, we’re admitting that there is a reasonable likelihood that someone could mistakenly break it. In the majority of cases an automated test is the best we can do, but we should be on the lookout for opportunities to replace automated tests with algorithms, designs or tools that eliminate those mistakes in the first place, or at least identify them earlier.