It's been hard to find good reporting about the West Africa Ebola crisis, at least here in the states.
Most of the coverage has involved the Dr. Kent Brantly story, which is certainly compelling, but there is much more I'd like to know.
I've found a few good articles, though.
A detailed article carried by ABC News helps explain why the epidemic is so hard to combat: Ebola Outbreak Feeds on Fear, Anger, Rumors
Many health care workers and aid workers have said one cause of the rapid spread of the Ebola virus is the public's general mistrust of the government. Among the rumors about this disease:
- Ebola does not exist and government workers are using it as an excuse to steal organs to sell on the black market.
- The government is pretending Liberia has Ebola so they'll have an opportunity to receive and then abuse donated funds.
- If a person goes to the hospital with a disease that has symptoms that mirror those of Ebola, such as malaria, that person will end up getting Ebola from the hospital.
- Medical staffers are so afraid of catching Ebola that they neglect patients in the quarantine unit and let them starve to death.
- Because of the noxious fumes from the solution workers use to spray affected areas, some people believe the spray is meant to kill them, and they don't want workers to come into their communities.
On the NPR site, transcripts of two interesting interviews:
- Fear, Caution As Doctors Fight Ebola On The Ground
  SAYAH: We need more people. We need more actors to be involved in educating the population, the communities, sensitizing them about the fact that the key to resolving this is that people come and get treated and not hide their sick and not have secret burials. We have a lot of work ahead. It's now in three countries in multiple sites. We've never seen it before. Doctors Without Borders has stretched its capacity to respond. We're doing the best we can but I think many more actors need to be mobilized.
- Sierra Leone, Struggling With Ebola, Passes On Africa Summit
  FOFANA: Well, it has been difficult because it's never been here before. And health workers were not prepared for Ebola when eventually it did emerge. And lots of the health workers have died in all three countries. About in the region of 100 health workers have contracted the virus and at least half that number have died. We have been told by some nurses that the personal protective gear that they have been given is not good enough. They say the clothing is very thin, and they do not feel very much secure in them.
- Living in the shadow of Ebola
  I spent an instructive couple of hours at the weekend with a woman from Finland. Eeva was once a midwife, but she's just finished a five-week stint with a Red Cross team that has been going door to door in Kailahun province, the border region where Ebola first arrived in Sierra Leone.
She was on what's known as a sensitisation mission, explaining to people exactly how the virus spreads and how to avoid it.
There are three simple rules, she told me.
- Ebola outbreak: US experts to head to West Africa
  Dr Thomas Frieden, director of the Centers for Disease Control and Prevention, announced the new US measures in an interview with ABC's This Week.
"We do know how to stop Ebola. It's old-fashioned plain and simple public health: find the patients, make sure they get treated, find their contacts, track them, educate people, do infection control in hospitals."
It's been 14 months now since the Edward Snowden story broke.
During that time, there has been a conversation of sorts. I wish that more had participated; I wish that more had resulted.
But I'm pleased, at least, that the conversation continues.
Some have been focusing on the economic and commercial aspects of the conversation:
- Personal Privacy Is Only One of the Costs of NSA Surveillance
  The economic costs of NSA surveillance can be difficult to gauge, given that it can be hard to know when the erosion of a company’s business is due solely to anger over government spying. Sometimes, there is little more than anecdotal evidence to go on. But when the German government, for example, specifically cites NSA surveillance as the reason it canceled a lucrative network contract with Verizon, there is little doubt that U.S. spying policies are having a negative impact on business.
- Report Says Backlash From NSA's Surveillance Programs Will Cost Private Sector Billions Of Dollars
  Also directly affecting US companies is a future full of increased compliance costs as countries move towards data sovereignty. This means tech companies like Facebook and Google will need to build local data centers if they wish to keep citizens in affected countries as users. The European Parliament's new data protection law could easily result in massive fines for US companies.
Others have been looking at the changing relationship between the American scientific community and its most important patron, the U.S. Government:
- Mathematicians Discuss the Snowden Revelations
  The only reason I am putting these words down now is the feeling of intense betrayal I suffered when I learned how my government and the leadership of my intelligence community took the work I and many others did over many years, with a genuine desire to prevent another 9/11 attack, and subverted it in ways that run totally counter to the founding principles of the United States, that cause huge harm to the US economy, and that moreover almost certainly weaken our ability to defend ourselves.
- The Mathematical Community and the National Security Agency
  We face a variety of threats -- from car accidents, which take about as many lives each month as the 9/11 tragedy, to weather (ranging from sudden disasters, such as hurricanes Katrina and Sandy, to the dangers from climate change), to global avian flu pandemics. The moves taken in the name of fighting terrorism, including the intrusive NSA data collection that has recently come to light and more generally the militarization of our society, are not justified by the dangers we currently face from terrorism.
- NSA and the Snowden Issues
  NSA's intelligence activities stem from a foreign-intelligence requirement -- initiated by one or more Executive Branch intelligence consumers (the White House, Department of State, Department of Defense, etc.), vetted through the Justice Department as a valid need -- and run according to a process managed by the Office of the Director of National Intelligence.
- Why were CERT researchers attacking Tor?
  CERT was set up in the aftermath of the Morris Worm as a clearinghouse for vulnerability information. The purpose of CERT was to (1) prevent attacks by (2) channeling vulnerability information to vendors and eventually (3) informing the public. Yet here, CERT staff (1) carried out a large-scale, long-lasting attack while (2) withholding vulnerability information from the vendor, and now, even after the vulnerability has been fixed, (3) withholding the same information from the public.
- Cryptographer Adi Shamir Prevented from Attending NSA History Conference
  As a friend of the US I am deeply worried that if you continue to delay visas in such a way, the only thing you will achieve is to alienate many world-famous foreign scientists, forcing them to increase their cooperation with European or Chinese scientists whose countries roll the red carpet for such visits. Is this really in the US best interest?
Best personal wishes, and apologies for not being able to meet you in person,
- US State Department: Let in cryptographers and other scientists
  I’ve learned from colleagues that, over the past year, foreign-born scientists have been having enormously more trouble getting visas to enter the US than they used to. The problem, I’m told, is particularly severe for cryptographers: embassy clerks are now instructed to ask specifically whether computer scientists seeking to enter the US work in cryptography. If an applicant answers “yes,” it triggers a special process.
- The ultimate goal of the NSA is total population control
  The lack of official oversight is one of Binney’s key concerns, particularly of the secret Foreign Intelligence Surveillance Court (Fisa), which is held out by NSA defenders as a sign of the surveillance scheme's constitutionality.
“The Fisa court has only the government’s point of view,” he argued. “There are no other views for the judges to consider.”
None of these topics are simple; none of these conversations are easy.
We must keep the discussion going.
From the Plym Valley trail. I’ve been meaning to photograph this sculpture for a while, so I took the opportunity when I passed it today.
In one of my previous posts I explained a few security patterns that can be used with Java WebSocket applications and how to invoke them from client-side applications, including both browser-based and rich-agent clients. In this post I explain how to secure server-side WebSocket endpoints easily. In fact, if you are already familiar with the security model defined by the Java Servlet specification, there is nothing new: you can use the same security model for WebSocket server endpoints as well. Let's work through an example; consider the following use case.
- Endpoint URL to secure - /securewebsocket
- Transport level security - HTTPS
- Allow roles - admin
- Authentication method - Basic
In this use case we want to secure a WebSocket endpoint deployed at the "/securewebsocket" URL. Only users with the "admin" role can establish a WebSocket connection, and they must use SSL for transport-level security; additionally, the server will use HTTP Basic authentication to authenticate users during the handshake.
We can fulfil the above security requirements easily by adding the following entries to the web.xml file.
<display-name>Secure WebSocket Endpoint</display-name>
<web-resource-name>Secure WebSocket Endpoint</web-resource-name>
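The fragment above shows only two of the needed elements; a fuller sketch of the web.xml entries might look like this (element names are from the Servlet specification; the URL pattern, role, auth method and transport guarantee come from the use case):

```xml
<display-name>Secure WebSocket Endpoint</display-name>

<security-constraint>
  <web-resource-collection>
    <web-resource-name>Secure WebSocket Endpoint</web-resource-name>
    <url-pattern>/securewebsocket</url-pattern>
  </web-resource-collection>
  <auth-constraint>
    <role-name>admin</role-name>
  </auth-constraint>
  <user-data-constraint>
    <!-- forces HTTPS, and therefore wss:// for the WebSocket upgrade -->
    <transport-guarantee>CONFIDENTIAL</transport-guarantee>
  </user-data-constraint>
</security-constraint>

<login-config>
  <auth-method>BASIC</auth-method>
</login-config>

<security-role>
  <role-name>admin</role-name>
</security-role>
```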
Now let's look at the various mechanisms we can use for authentication and authorization.
1. BASIC This is the basic authentication scheme, where the client sends the user name and password as an encoded string in an HTTP header. With browser-based clients, the browser pops up a dialog to enter the user name and password.
2. FORM In form-based authentication, application developers create an HTML login page that submits the user name and password. This approach is similar to BASIC, but offers the flexibility of a customized login page.
3. DIGEST More secure than the two options above: the client applies a hash function to the password before sending it to the server.
4. CLIENT-CERT Also a stronger authentication scheme, in which the client is authenticated using its digital certificate.
The <transport-guarantee> element, in turn, accepts one of three values:
1. NONE This indicates the server should accept any connection, including unprotected ones.
2. INTEGRAL This ensures that data sent between client and server cannot be changed in transit.
3. CONFIDENTIAL This ensures that other entities cannot observe the contents of the transmission.
- In practice, web servers treat the CONFIDENTIAL and INTEGRAL transport guarantee values identically.
- With both the CONFIDENTIAL and INTEGRAL options, clients should use the secure WebSocket (wss://) protocol.
Sometimes you do care about the positions of the terms, and for such cases Lucene has various so-called proximity queries.
The simplest proximity query is PhraseQuery, to match a specific sequence of tokens such as "Barack Obama". Seen as a graph, a PhraseQuery is a simple linear chain:
By default the phrase must precisely match, but if you set a non-zero slop factor, a document can still match even when the tokens are not exactly in sequence, as long as the edit distance is within the specified slop. For example, "Barack Obama" with a slop factor of 1 will also match a document containing "Barack Hussein Obama" or "Barack H. Obama". It looks like this graph:
Now there are multiple paths through the graph, including an any (*) transition to match an arbitrary token. (Note: while the graph cannot properly express it, this query would also match a document that had the tokens Barack and Obama on top of one another, at the same position, which is a little bit strange!)
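As code, the sloppy phrase above might be built like this (a sketch against the Lucene 4.x API; the field name "body" is my assumption, not from the original):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.PhraseQuery;
import org.apache.lucene.search.Query;

class SloppyPhraseExample {
    static Query barackObama() {
        // "Barack Obama" with a slop of 1: also matches
        // "barack hussein obama" and "barack h. obama"
        // (assumes lowercasing analysis at index time)
        PhraseQuery pq = new PhraseQuery();
        pq.setSlop(1);
        pq.add(new Term("body", "barack"));
        pq.add(new Term("body", "obama"));
        return pq;
    }
}
```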
In general, proximity queries are more costly on both CPU and IO resources, since they must load, decode and visit another dimension (positions) for each potential document hit. That said, for exact (no slop) matches, using common-grams, shingles and ngrams to index additional "proximity terms" in the index can provide enormous performance improvements in some cases, at the expense of an increase in index size.
MultiPhraseQuery is another proximity query. It generalizes PhraseQuery by allowing more than one token at each position, for example:
This matches any document containing either domain name system or domain name service. MultiPhraseQuery also accepts a slop factor to allow for non-precise matches.
Finally, span queries (e.g. SpanNearQuery, SpanFirstQuery) go even further, allowing you to build up a complex compound query based on positions where each clause matched. What makes them unique is that you can arbitrarily nest them. For example, you could first build a SpanNearQuery matching Barack Obama with slop=1, then another one matching George Bush, and then make another SpanNearQuery, containing both of those as sub-clauses, matching if they appear within 10 terms of one another.
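The nested example in the previous paragraph might be sketched like so (Lucene 4.x span APIs; again, the field name "body" is my assumption):

```java
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.spans.SpanNearQuery;
import org.apache.lucene.search.spans.SpanQuery;
import org.apache.lucene.search.spans.SpanTermQuery;

class NestedSpansExample {
    static Query bothWithinTen() {
        // "Barack Obama" with slop=1, terms in order
        SpanQuery barackObama = new SpanNearQuery(new SpanQuery[] {
            new SpanTermQuery(new Term("body", "barack")),
            new SpanTermQuery(new Term("body", "obama"))
        }, 1, true);
        // "George Bush" as an exact in-order phrase
        SpanQuery georgeBush = new SpanNearQuery(new SpanQuery[] {
            new SpanTermQuery(new Term("body", "george")),
            new SpanTermQuery(new Term("body", "bush"))
        }, 0, true);
        // match when the two phrases occur within 10 positions
        // of one another, in either order
        return new SpanNearQuery(
            new SpanQuery[] {barackObama, georgeBush}, 10, false);
    }
}
```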
As of Lucene 4.10 there will be a new proximity query to further generalize on MultiPhraseQuery and the span queries: it allows you to directly build an arbitrary automaton expressing how the terms must occur in sequence, including any transitions to handle slop. Here's an example:
This is a very expert query, allowing you fine control over exactly what sequence of tokens constitutes a match. You build the automaton state-by-state and transition-by-transition, including explicitly adding any transitions (sorry, no QueryParser support yet, patches welcome!). Once that's done, the query determinizes the automaton and then uses the same infrastructure (e.g. CompiledAutomaton) that queries like FuzzyQuery use for fast term matching, but applied to term positions instead of term bytes. The query is naively scored like a phrase query, which may not be ideal in some cases.
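As a sketch of what building the earlier "Barack Obama" slop-1 graph state by state might look like (method names from the 4.10 sandbox TermAutomatonQuery class; the field name "body" is my assumption):

```java
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TermAutomatonQuery;  // sandbox module

class TermAutomatonExample {
    static Query barackSlopObama() {
        TermAutomatonQuery q = new TermAutomatonQuery("body");
        int init = q.createState();
        int afterBarack = q.createState();
        int skipped = q.createState();
        int accept = q.createState();
        q.setAccept(accept, true);
        q.addTransition(init, afterBarack, "barack");
        q.addTransition(afterBarack, accept, "obama");  // exact-phrase path
        q.addAnyTransition(afterBarack, skipped);       // allow one arbitrary token
        q.addTransition(skipped, accept, "obama");
        q.finish();  // determinizes the automaton
        return q;
    }
}
```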
In addition to this new query there is also a simple utility class, TokenStreamToTermAutomatonQuery, that provides loss-less translation of any graph TokenStream into the equivalent TermAutomatonQuery. This is powerful because it means even arbitrary token stream graphs will be correctly represented at search time, preserving the PositionLengthAttribute that some tokenizers now set.
While this means you can finally correctly apply arbitrary token stream graph synonyms at query-time, because the index still does not store PositionLengthAttribute, index-time synonyms are still not fully correct. That said, it would be simple to build a TokenFilter that writes the position length into a payload, and then to extend the new TermAutomatonQuery to read from the payload and apply that length during matching (patches welcome!).
The query is likely quite slow, because it assumes every term is optional; in many cases it would be easy to determine required terms (e.g. Obama in the above example) and optimize such cases. In the case where the query was derived from a token stream, so that it has no cycles and does not use any transitions, it may be faster to enumerate all phrases accepted by the automaton (Lucene already has the getFiniteStrings API to do this for any automaton) and construct a boolean query from those phrase queries. This would match the same set of documents, also correctly preserving PositionLengthAttribute, but would assign different scores.
The code is very new and there are surely some exciting bugs! But it should be a nice start for any application that needs precise control over where terms occur inside documents.
I'm sorry, I am not well-educated in these areas: IDF declares death of missing officer Hadar Goldin
The IDF spokesman announced early on Sunday morning that at 11:25 p.m. on Saturday, the Chief Rabbi of the IDF, Brigadier Gen. Rafi Peretz, declared the death of IDF officer Lt. Hadar Goldin, who fell in battle in the Gaza Strip on Friday.
The decision was made according to the findings of a special board, headed by the rabbi, which weighed medical, halachic and other relevant considerations.
There is lots of subtlety here that eludes me, sadly: IDF determines 2nd Lt. Hadar Goldin, previously considered captured in Gaza, is dead
The conclusion was based on forensic evidence from the scene of the attack, a statement by the IDF Spokesperson's Unit said. It added that prior to the decision, religious, medical and other relevant issues were taken under consideration.
I fear that behind these carefully considered, carefully presented words there is nothing but sadness.
This is how a tragic friendly fire incident is described, yes?
Please, somebody, somehow, bring these people some peace.
Getting back to work was very strange because the day I went back was the first day for our group in the brand new office. Before the accident, I was working on the build-out of a new office space. My friend and co-worker Whitney and I had been through the design process with an architect and the construction had just begun. Then my accident happened and Whitney had to take over all responsibilities for the project. I had every confidence that she would see the project to completion and do a tremendous job and she absolutely did! The office looks outstanding, you can definitely tell it is a hybris software office by the look and feel. With a capacity of 50 seats, we now have tons of room to fill. It's so good to be back to work with everyone, even if it is only part-time. I look forward to building up to full-time hours in the office eventually, but when I go into the office in the morning and then do PT in the afternoon, I'm pretty exhausted by the end of the day. I will need to work up to it over time.
In early July, my brother and his family arrived for a week to visit. He had not seen me since he left one week after the accident, so he had not seen me out of bed at all. Now I am able to stand in place for periods of time and also walk with the aid of leg braces and a walker. I am walking around the house as much as I can. Janene suggested that I consider walking everywhere in the house that I need to be, and it's probably a good idea because it will help my body get much stronger. There's also very strong research out there that says if you teach the body to do something it knew how to do before the accident, those capabilities will return over time. This is why my physical therapist told me to walk as much as I can. Even when I am in the office I stand up at my desk while I work. Everyone in the office has a motorized standing desk, so I stand as long as I can and then sit back down in my wheelchair when I'm tired.
On Friday, July 18th, I had a follow-up appointment with my surgeon and he gave me the OK to stop wearing the back brace. What a joy to not require that damn thing anymore! I was required to wear it for 12 weeks to prevent twisting or bending over too far. While at the surgeon's office, I also got to see a clear plastic mold of the spine with some of the titanium hardware attached to it. The surgeon said, 'That's exactly what you have on your spine,' and I was very surprised because it appears to be pretty big and bulky. He said it has to be to withstand the rigors of the way the spine works. We also asked him about what seems to be a crookedness of my spine. Without the back brace you can really see it now. When I stand up straight and you look at me, you can see that I'm leaning to the right. Even in the latest x-rays you can see how crooked it is. The surgeon confirmed this and said it's probably due to the spacers that they placed between L3-L4, as the cartilage disc had to be removed because it was so damaged. Evidently this is just something I will need to get used to living with now.
OK, since my emergency I’ve had a little time with my new 4g mobile broadband service. And my regular service with Virgin appears to be working again, though now with the redundancy of two networks I wouldn’t necessarily notice the kind of downtime that plagued me before.
The 4G router is an Alcatel “one-touch”, and is only slightly larger than a mobile phone, and runs cool – all very nice. It also has a cradle-cum-power-supply, with micro-USB port for the power supply. It’s not just the cradle that has a port, the router does too, so I thought this has got to be worth a try: yes, if I connect it to a USB port on the Mac, it recognises “mobile broadband” and is connected. Great, that leaves only the (SIP) phones needing a regular ethernet port and therefore the Virgin router or other equipment I don’t have or can’t use with the 4G service.
How about performance? It feels subjectively like a very decent broadband speed, but slower than Virgin during the working day – presumably peak congestion. I tried a speed test on both connections with some of the online speed check services, using the ultrabook over wifi for all tests. First try was early afternoon, when the 4G seemed slower; second was in the wee hours, when all ISPs in this and nearby timezones should have ample spare capacity.

              http://www.speedtest.net/            http://www.broadbandspeedchecker.co.uk/
              peak time      off-peak              peak time      off-peak
              Down    Up     Down    Up            Down    Up     Down    Up
Virgin Cable  31.0    1.96   20.67   1.99          22.14   2.23   32.9    1.89
EE 4G         25.88   5.11   19.64   12.76         9.69    1.07   36.46   9.78
I tried a third speed check service which I’ve used before at http://www.uswitch.com/broadband/speedtest/, but it didn’t work.
What conclusions can I draw? Not very much. It somewhat supports my subjective impression that the 4g service is the more variable of the two. But interesting that 4G upload speeds appear potentially much higher than cable. I guess cable was developed originally just for telly, where only download matters.
Still evaluating what it feels like to live with.
ln -f -s 'libboost_wave.so.1.52.0' 'stage/lib/libboost_wave.so'
...failed updating 1 target...
Then put this into your /etc/portage/package.use:
# boost 1.52 is broken with current icu
<dev-libs/boost-1.53 -icu
via Christopher Soghoian: ‘IMSI catchers/fake base stations are out of control in China. The gov shut down 24 IMSI catcher factories, 1500+ people were arrested.’
James Tarjan was one of the top chess players in the country when I was learning chess as a child in the early 1970's.
I knew he had stopped playing chess, but I had absolutely no idea that he had become a librarian!
He's apparently been living just 75 miles away from me for the last 25 years, working at the Santa Cruz public library.
Dana MacKenzie takes up the story: Rip Van Winkle Returns.... James Tarjan, a grandmaster and frequent contender for the U.S. championship in the 1970s who has not played a single tournament game since 1984. He dropped out of chess and for at least the last ten or fifteen years he has been a librarian for the Santa Cruz Public Library.
Way to go, GM Tarjan!
With all the big issues going on nowadays, a lot of this seems pretty trivial. Still, I find things like this interesting, so I put them on my blog.
- 'We need more': Fight against Ebola virus thin on the ground
  So there must be a cast of thousands in there, deploying equipment, medications and vaccines, and dispensing advice, right?
- How we treat Ebola
  When Ebola haemorrhagic fever broke out recently in Guinea, West Africa, MSF set up three specialised treatment centres in the worst-hit areas. Ebola is so infectious -- and so deadly -- that patients need to be treated in isolation by staff wearing special protective clothing. Emergency coordinator Henry Gray and logistician Pascal Piguet, both just back from Guinea, explain why, with Ebola, every little detail counts.
- NATO's Underground Roman Super-Quarry
  There is an underground Roman-era quarry in The Netherlands that, when you exit, you will find that you have crossed an invisible international border somewhere down there in the darkness, and that you are now stepping out into Belgium; or perhaps it's the other way around, that there is an underground Roman-era quarry in Belgium that, when you exit, you will find that you have crossed an invisible international border somewhere down there in the darkness, and that you are now stepping out into The Netherlands.
- Life on the Subsurface: An Interview with Penelope Boston
  Boston has worked with the NASA Innovative Advanced Concepts program (NIAC) to develop protocols for both human extraterrestrial cave habitation and for subterranean life-detection missions on Mars, life which she believes is highly likely to exist.
- Living up to Your (Business) Ideals
  If you want to live up to your business ideals, you have to take the time to authentically identify your values, the things you care about. You also have to commit to the ongoing tending and cultivation of those values in your organization. It is not a “set it and forget it” scenario.
- Being Profitable
  Maybe time is the most important factor for you. How much can everyone work? How much does everyone want to work? How much must you then charge for that time to end up with salaries you can be content with?
- git Flight Rules
  Flight Rules are the hard-earned body of knowledge recorded in manuals that list, step-by-step, what to do if X occurs, and why. Essentially, they are extremely detailed, scenario-specific standard operating procedures.
- Hybrid Logical Clocks
  Physical Time (PT) leverages on physical clocks at nodes that are synchronized using the Network Time Protocol (NTP). PT also has several drawbacks. Firstly, in a geographically distributed system obtaining precise clock synchronization is very hard; there will unavoidably be uncertainty intervals. Secondly, PT has several kinks such as leap seconds and non-monotonic updates. And, when the uncertainty intervals are overlapping, PT cannot order events and you end up with inconsistent snapshots as the one shown below.
- A Quick Introduction to CoreOS
  CoreOS, in case you haven’t heard of it, is a highly streamlined Linux distribution designed with containers, massive server deployments, and distributed systems/applications in mind.
- Touch events on the pathfinding pages
  For my pathfinding pages I wanted to support "painting" on the map to make or erase walls. When you change the map, the pathfinding algorithm updates the paths.
- Tom Brady is the loneliest quarterback on the planet
  I thought, "the three-time Super Bowl winner and one of his wide receivers trying to high-five and missing each other's hands? That's pretty funny!" Oh no. What is funnier still is Brady trying to high-five one or more of his teammates and the other players totally ignoring him. What's even funnier than that? This has happened over and over again.
I have a build script that may, as a matter of convenience, download and build a third-party software package. Before the build script goes into any release, I want to tighten up its security to ensure it verifies the PGP signature on the package.
OK, I can do that in a Makefile using two separate targets: the tarball, and the verified tarball. I thought I could make the latter a link to the former, using something like:
gpg --verify $(TARBALL).asc $(TARBALL) \
&& ln $(TARBALL) $(TARBALL-VERIFIED) \
|| (echo "### Please verify $(TARBALL) ###" && exit 1)
However, this is failing me, because gpg is too trusting:
$ gpg --verify nginx-1.7.3.tar.gz.asc nginx-1.7.3.tar.gz
gpg: Signature made Tue 8 Jul 14:22:56 2014 BST using RSA key ID A1C052F8
gpg: Good signature from "Maxim Dounin <email.suppressed>"
gpg: WARNING: This key is not certified with a trusted signature!
gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: B0F4 2533 73F8 F6F5 10D4 2178 520A 9993 A1C0 52F8
$ echo $?
(OK, now you know the identity of $(TARBALL))
It has not verified that the signature is trusted, but it still thinks all’s well. Ouch! I can verify the signature manually (if rather weakly) but I’d rather not try to script that. Nor do I want to concern myself with issues that might change with each new nginx release, or with changes of pgp keys.
A bit of googling finds this message, from which it appears this was a known bug, but one fixed in a GnuPG release back in 2006 (and yes, my gpg version is more recent than that)! Was that a non-fix that only tells you if it’s a BAD signature or no PGP data at all? That would be no more useful than an MD5 or SHA checksum!
OK folks, what am I missing? What do you use to script the verification of a package?
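One approach that might work (my sketch, not something settled in this post): gpg exits 0 for any good signature regardless of trust, but its machine-readable --status-fd output includes a VALIDSIG line carrying the signing key's full fingerprint, which can be matched against the fingerprint you expect (the primary key fingerprint quoted above, with spaces removed):

```shell
# Require not just "a good signature" but a good signature from the
# exact key whose fingerprint we expect.
verify_tarball() {
    tarball=$1
    fingerprint=$2
    gpg --status-fd 1 --verify "$tarball.asc" "$tarball" 2>/dev/null |
        grep -q "^\[GNUPG:\] VALIDSIG $fingerprint "
}

# e.g. verify_tarball nginx-1.7.3.tar.gz \
#          B0F4253373F8F6F510D42178520A9993A1C052F8
```

Alternatively, gpgv with a dedicated keyring containing only the expected signing key sidesteps the web-of-trust question entirely, since it fails when the key is not in the keyring.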
Lovely print! Shipping would be a bit crazy, though. There has to be an English-language print of one of Tove Jansson’s maps on sale somewhere in Europe…
I’ve been thinking some more on the past, present and future of documents. I don’t know exactly where this post will end up, but I think this will help me clarify some of my own thoughts.
First, I think technology has clouded our thinking and we’ve been equivocating with the term “document”, using it for two entirely different concepts.
One concept is of the document as the way we do work, but not an end-in-itself. This is the document as a “collaboration surface”, short-lived, ephemeral, fleeting, quickly created and equally quickly forgotten.
For example, when I create a few slides for a project status report, I know that the presentation document will never be seen again, once the meeting for which it was written has ended. The document serves as a tool for the activity of presenting status, of informing. Twenty years ago we would have used transparencies (“foils”) or sketched out some key points on a black board. And 10 years from now, most likely, we will use something else to accomplish this task. It is just a coincidence that today the tools we use for this kind of work also act like WYSIWYG editors and can print and save as “documents”. But that is not necessary, and historically was not often the case.
Similarly, take a spreadsheet. I often use a spreadsheet for a quick ad-hoc “what-if” calculation. Once I have the answer I am done. I don’t even need to save the file. In fact I probably load or save a document only 1 in 5 times that I launch the application. Sometimes people use a spreadsheet as a quick and dirty database. But 20 years ago they would have done these tasks using other tools, not document-oriented, and 10 years from now they may use other tools that are equally not document related. The spreadsheet primarily supports the activity of modeling and calculating.
Text documents have myriad collaborative uses today, but other tools have emerged as well. Collaboration has moved to non-document interfaces: wikis, instant messaging, forums, and so on. Things that would have required routing a typed inter-office memo 50 years ago are now done with blog posts.
That’s one kind of document, the “collaboration surface”, the way we share ideas, work on problems, generally do our work.
And then there is a document as the record of what we did. This is implied by the verb “to document”. This use of documents is still critical, since it is ingrained in various regulatory, legal and business processes. Sometimes you need “a document.” It won’t do to have your business contract on a wiki. You can’t prove conformance to a regulation via a Twitter stream. We may no longer print and file our “hard” documents, but there is a need to have a durable, persistable, portable, signable form of a document. PDF serves well for some instances, but not in others. What does PDF do with a spreadsheet, for example? All the formulas are lost.
This distinction between the two uses of documents seems analogous to the distinction between Systems of Engagement and Systems of Record, and can be considered in that light. The two concepts just happened to use the same technology, the same tools, circa the year 2000, but in general they are very different.
The obvious question is: what will the future bring? How quickly will our tool sets diverge? Do we continue with tools that compromise, holding back collaborative features because they must also serve as tools to author document records? Or do we unchain collaborative tools and allow them to focus on what they do best?
- ODF TC Creates Advanced Document Collaboration Subcommittee
- Document Migrations
- Fast Track versus PAS
After months and months of work, my colleagues and I at 6Wunderkinder launched Wunderlist 3 yesterday. It’s an update that looks fairly straightforward on the surface, but which actually required a complete rewrite of most of the backend infrastructure and much of the client code. In many ways, it’s our Snow Leopard release. It was necessary, painful, and—I think it’s safe to say—we’re all quite happy that it’s finally out in the world and we’re going to be shifting into a mode where we can pick up the public release cycle and start releasing much more frequently.
Mark Miller: "I prefer to encourage refactoring as an opportunistic activity, done whenever and wherever code needs...
- Martin Fowler