Planet Apache

Bruce Snyder: Car vs. Bike in Boulder, Colorado :: Bruce Snyder's Status

Wed, 2014-10-08 15:35

On Thursday, April 24, 2014, I was in a very serious cycling accident in Boulder, Colorado while riding my new Cervélo S3 during the lunch hour and I am currently hospitalized in Denver, Colorado for at least the next 60 days.

Damage Report

In the wreckage, I suffered 11 fractured ribs (10 on the left side, most in multiple places, and one on the right), fractures of the L3 and L4 spinal vertebrae, one collapsed/punctured lung, one deflated lung, a nasty laceration on my left hip that required stitches and loads of road rash all over my back and left hip from being run over. The worst part was being conscious through the entire ordeal, i.e., I knew I was being run over by a car.
Current Status

After undergoing emergency spinal surgery involving the insertion of hardware including mounting hardware, rods and screws to create a fusion from the L2 - L5 vertebrae and cleaning out much debris from various spinal process fractures that punctured the spinal dura, I am now able to control everything from the knees up. I do feel my feet and I can distinguish sharp vs. dull touches in some areas but I am not able to flex my feet/ankles or wiggle my toes at this time. After the surgery, I was in the Intensive Care Unit (ICU) at Boulder Community Hospital for eight days. Since this time, I have been transferred to Craig Hospital in Denver, Colorado. Craig is a world-renowned hospital for its spinal and brain injury rehabilitation programs.

(For those who are curious, I'm told that the bike was left almost untouched. But I will certainly have my family take it to Excel Sports in Boulder to be fully evaluated. )

A Very Special Thank You

There is one guy who deserves a special thank you for his compassion for a stranger in distress. Gareth, your voice rescued me and got me through the initial accident and your clear thinking helped me more than you will ever know. After you visited me at the Boulder Hospital, I totally fell apart just because I heard your voice again. We will meet again, my brother.
Also, a special thank you to Mike O. for introducing Gareth and me after the accident. Thanks, buddy.

To My Family and Friends

My wife Janene has truly been my rock through this entire ordeal. Never did she waver and, for me, the sun rises and sets with her. She and my girls have given me such strength when I needed it most. I am truly blessed with my family and friends.

My brother, Michael, was like a sentry -- by my side, from early morning until late into the night, supporting me in any way he could. I love you, Michael!

The moment my brother, my parents and my in-laws received the news of the accident, they packed their cars and hauled ass through the night 1000 miles to be by my side. I love you all so much and I could not have gotten this far without you. You are all amazing!

Thank you to my close friends; this experience has only brought us closer. Karen, Dan, Anna, Sarah, and Sasha, I love you guys! Filip, you are very special to me and your dedication to visiting me and helping me keep my spirits high is stellar, thank you! Mike O., the chicken curry was delicious! Who knew this dude could cook *and* write code, thank you! Tim R., you have been my cycling buddy for a number of years and we were riding together the day of the accident just prior to its occurrence. You've stayed by me and met my family and helped in any way you could, thank you!

To all my friends and neighbors who immediately mobilized to provide my family with more delicious meals than they could possibly keep up with eating, you have really made us feel loved and watched over. Not only has Louisville, Colorado been ranked as one of the best places to live in the USA by Money Magazine for the last several years, the community of friends and neighbors is like an extended family -- you guys are the best!
Thank You To Everyone

Thank you for all of your phone calls, emails, texts, tweets, concerns, hospital visits and well-wishes from everyone around the world. The level of compassion that I have experienced from near and far has been absolutely overwhelming for me. Please understand that I am not able to communicate directly with every single person simply due to the sheer volume of communications and the amount of time I am now spending doing rehab at Craig Hospital. I still get exhausted fairly easily doing rehab and just trying to live my life right now at Craig Hospital -- and this is coming from someone who could easily run 10 miles or ride 30+ miles at lunch just a couple weeks ago. This whole experience is absolutely flooding me emotionally and physically.

The Gory Details

For those who want more details on how the shit went down, please see the Caring Bridge :: Bruce Snyder website set up by my extraordinary friend Jamie Hogan. This website is where Janene has been posting updates about my experience since the beginning. I will be adding my experiences henceforth here on my blog as I travel the winding road of recovery.

Conclusion

Life is precious and I am so very happy to be alive.

And please, please do not ever text and drive. 
Categories: FLOSS Project Planets

Colm O hEigeartaigh: Some recent WS-Trust client topics in Apache CXF

Wed, 2014-10-08 08:50
There are a number of minor new features and changes in recent versions of Apache CXF with respect to the client side of WS-Trust, which will be documented in this post.

1) STSClient configuration

CXF's STSClient is responsible for communicating with a Security Token Service (STS) via the WS-Trust protocol, in order to issue/validate/renew/etc. a security token. To support WS-Trust on the client side in CXF, it is necessary to construct an STSClient instance, and then reference it via the JAX-WS property key "ws-security.sts.client". Here is a typical example in Spring.
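As a rough sketch (the jaxws namespace declaration is omitted, and the STS endpoint names, URLs and WSDL location below are illustrative assumptions rather than values from a real deployment), such a Spring configuration might look like this:

<!-- The STSClient, pointing at an illustrative STS endpoint -->
<bean id="stsClient" class="org.apache.cxf.ws.security.trust.STSClient">
    <constructor-arg ref="cxf"/>
    <property name="wsdlLocation" value="https://localhost:8081/SecurityTokenService/UT?wsdl"/>
    <property name="serviceName" value="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}SecurityTokenService"/>
    <property name="endpointName" value="{http://docs.oasis-open.org/ws-sx/ws-trust/200512/}UT_Port"/>
</bean>

<!-- Reference the STSClient on the JAX-WS client via the property key -->
<jaxws:client name="{http://www.example.org/contract/DoubleIt}DoubleItPort" createdFromAPI="true">
    <jaxws:properties>
        <entry key="ws-security.sts.client" value-ref="stsClient"/>
    </jaxws:properties>
</jaxws:client>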

However, there are some alternatives to configuring an STSClient per JAX-WS client object (see the CXF documentation for additional information). Strictly speaking, these alternatives were available in previous versions of CXF; however, some bugs were fixed in the latest releases to enable them to work properly:

a) If no STSClient is directly configured on the JAX-WS client, then the CXF runtime will look for an STSClient bean with a name that corresponds to the Endpoint name with the suffix ".sts-client". Here is an example:

<bean id="stsClient" class="org.apache.cxf.ws.security.trust.STSClient"
    name="{http://www.example.org/contract/DoubleIt}DoubleItTransportSAML1Port.sts-client"
    abstract="true"/>

b) If no STSClient is configured either directly on the client, or else via the approach given in (a) above, then the security runtime tries to fall back to a "default" client. All that is required here is that the name of the STSClient bean should be "default.sts-client". Here is an example:

<bean id="stsClient" class="org.apache.cxf.ws.security.trust.STSClient"
    name="default.sts-client"
    abstract="true"/>

2) Falling back to Issue after a failed Renew

When a cached SecurityToken is expired, the STSClient tries to renew the token. However, not all STS instances support the renewal binding of WS-Trust. Therefore a new configuration parameter was introduced:
  • SecurityConstants.STS_ISSUE_AFTER_FAILED_RENEW ("ws-security.issue.after.failed.renew") -  Whether to fall back to calling "issue" after failing to renew an expired token. The default is "true".
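As a sketch, this parameter can also be set directly as a JAX-WS request context property ("port" below is assumed to be an existing JAX-WS client proxy; the property key matches the constant above):

import java.util.Map;
import javax.xml.ws.BindingProvider;

// Disable the fallback to "issue" after a failed renew (the default is "true").
Map<String, Object> requestContext = ((BindingProvider) port).getRequestContext();
requestContext.put("ws-security.issue.after.failed.renew", "false");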
3) Renewing security tokens that are "about to" expire

There is a potential issue when a cached security token is "about to" expire. The CXF client will retrieve the cached token and check whether it is expired. If it is expired, then it renews the token. If the token is valid, then it uses it in the service request. However, if the security token expires "en route" to the service, then the service will reject the token, and the service invocation will fail.

In CXF 2.7.13 and 3.0.2, support has been added to forcibly renew tokens that are about to expire, rather than risk letting them expire en route. A new configuration parameter has been introduced:
  • SecurityConstants.STS_TOKEN_IMMINENT_EXPIRY_VALUE ("ws-security.sts.token.imminent-expiry-value") - The value in seconds within which a token is considered to be expired by the client, i.e. it is considered to be expired if it will expire in a time less than the value specified by this tag.
The default value for this parameter for CXF 3.0.2 is "10", meaning that if a security token will expire in less than 10 seconds, it will be renewed by the client. For CXF 2.7.13, the default value is "0" for backwards compatibility reasons, meaning that this functionality is disabled.
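Again as a sketch, the parameter could be configured on the JAX-WS request context (the "port" proxy object and the 30-second value are illustrative):

// Treat tokens that will expire within 30 seconds as already expired,
// so the client renews them before making the service request.
Map<String, Object> requestContext = ((BindingProvider) port).getRequestContext();
requestContext.put("ws-security.sts.token.imminent-expiry-value", "30");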
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-07

Tue, 2014-10-07 18:58
Categories: FLOSS Project Planets

Bryan Pendleton: Linearizable Boogaloo

Tue, 2014-10-07 17:00

This might be the nerdiest video I've ever watched: Jepsen II: Linearizable Boogaloo (although that's a high bar; I watch some very nerdy videos...).

Kingsbury is known online as "Aphyr", which is also the name of his superb website.

I got hooked on Kingsbury's writing nearly 18 months ago, when he published a superb essay with a rather tongue-in-cheek title: Call me maybe: Carly Rae Jepsen and the perils of network partitions

I don't mind the cheekiness, for Kingsbury's analysis, and even more importantly his description skills, are first-rate.

And a certain amount of snark helps the medicine go down, to mis-quote P.L. Travers.

The bottom line is: if you consider yourself to be a server software engineer, or a system programmer, you will find Kingsbury's work both entertaining and educational.

Read his essays; watch his videos; get smarter.

Categories: FLOSS Project Planets

Colm O hEigeartaigh: Apache CXF Authentication and Authorization test-cases III

Tue, 2014-10-07 09:45
This is the third in a series of posts on authentication and authorization test-cases for web services using Apache CXF. The first post focused on authenticating and authorizing web service requests that included a username and password (WS-Security UsernameToken and HTTP/BA). The second article looked at more sophisticated ways of performing authentication and authorization, such as using X.509 certificates, using a SecurityTokenService (STS), using XACML and using Kerberos. This article will build on the previous articles to show how to perform Single Sign On (SSO) with Apache CXF.

The projects are as follows:
  • cxf-shiro: This project uses Apache Shiro for authenticating and authorizing a UsernameToken, as covered in the first article. However, it also now includes an SSOTest, which shows how to use WS-SecureConversation for SSO. In this scenario an STS is co-located with the endpoint. The client sends the UsernameToken to the STS for authentication using Apache Shiro. The STS returns a token and a secret key to the client. The client then makes the service request including the token and using the secret key to sign a portion of the request, thus providing proof-of-possession. The client can then make repeated invocations without having to re-authenticate the UsernameToken credentials.
  • cxf-sts:  This project shows how to use the CXF SecurityTokenService (STS) for authentication and authorization, as covered in the second article. It now includes an SSOTest to show how to achieve SSO with the STS. It demonstrates how the client caches the token after the initial invocation, and how it can make repeated invocations without having to re-authenticate itself to the STS.
  • cxf-saml-sso: This project shows how to leverage SAML SSO with Apache CXF to achieve SSO for a JAX-RS service. CXF supports the POST + redirect bindings of SAML SSO for JAX-RS endpoints. As part of this demo, a mock CXF-based IdP is provided which authenticates a client using HTTP/BA and issues a SAML token using the CXF STS. Authorization is also demonstrated using roles embedded in the issued SAML token. 
  • cxf-fediz-federation-sso: This project shows how to use the new CXF plugin of Apache Fediz 1.2.0 to authenticate and authorize clients of a JAX-RS service using WS-Federation. This feature will be documented more extensively at a future date, and is considered experimental for now. Please play around with it and provide feedback to the CXF users list.
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-06

Mon, 2014-10-06 18:58
  • Reddit’s crappy ultimatum to remote workers and offices

    Reddit forces all remote workers (about half the workforce, in SLC and NYC) to move to SF, provoking a shitstorm:

    In a tweet confirming the move, Reddit’s CEO justified his treatment of non-San Francisco workers with a push for Optimal Teamwork to drive the New And $50M Improved Reddit forward. I shit you not. That was the actual term! (I added the New & Improved fan fiction here). So let’s leave aside the debate over whether working remotely is as efficient as being in the same office all the time. Let’s just focus on the size of the middle finger given to the people who work at Reddit outside the Bay Area, given the choice of forced, express relocation or a pink slip. How optimal do you think these employees will feel about leadership and the rest of the team going forward? Do you think they’ll just show up at the new, apparently-not-even-in-San-Francisco-proper office with a smile from ear to ear, ready to begin in earnest on Optimal Teamwork, left-behind former colleagues be damned?

    (tags: telecommuting reddit working remote-working ceos optimal-teamwork teamwork relocation)

  • Space Jacket

    ‘I designed this jacket as a tribute to the continuing legacy of American spaceflight. I wanted it to embody everything I loved about the space program, and to eventually serve as an actual flight jacket for present-day astronauts on missions to the ISS (International Space Station). There are other “replica” flight jackets made for space enthusiasts, but I decided to come up with something boldly different, yet also completely wearable and well-suited for space.’

    (tags: space clothing fashion geekery jackets)

  • How did Twitter become the hate speech wing of the free speech party?

    Kevin Marks has a pretty good point here:

    Your tweet could win the fame lottery, and everyone on the Internet who thinks you are wrong could tell you about it. Or one of the “verified” could call you out to be the tribute for your community and fight in their Hunger Games. Say something about feminism, or race, or sea lions and you’d find yourself inundated by the same trite responses from multitudes. Complain about it, and they turn nasty, abusing you, calling in their friends to join in. Your phone becomes useless under the weight of notifications; you can’t see your friends support amongst the flood. The limited tools available – blocking, muting, going private – do not match well with these floods. Twitter’s abuse reporting form takes far longer than a tweet, and is explicitly ignored if friends try to help.

    (tags: harassment twitter 4chan abuse feminism hate-speech gamergate sea-lions filtering social-media kevin-marks)

  • Mnesia and CAP

    A common “trick” is to claim: ‘We assume network partitions can’t happen. Therefore, our system is CA according to the CAP theorem.’ This is a nice little twist. By asserting network partitions cannot happen, you just made your system into one which is not distributed. Hence the CAP theorem doesn’t even apply to your case and anything can happen. Your system may be linearizable. Your system might have good availability. But the CAP theorem doesn’t apply. [...] In fact, any well-behaved system will be “CA” as long as there are no partitions. This makes the statement of a system being “CA” very weak, because it doesn’t put honesty first. It tries to avoid the hard question, which is how the system operates under failure. By assuming no network partitions, you assume perfect information knowledge in a distributed system. This isn’t the physical reality.

    (tags: cap erlang mnesia databases storage distcomp reliability ca postgres partitions)

  • Integrating Kafka and Spark Streaming: Code Examples and State of the Game

    Spark Streaming has been getting some attention lately as a real-time data processing tool, often mentioned alongside Apache Storm. [...] I added an example Spark Streaming application to kafka-storm-starter that demonstrates how to read from Kafka and write to Kafka, using Avro as the data format and Twitter Bijection for handling the data serialization. In this post I will explain this Spark Streaming example in further detail and also shed some light on the current state of Kafka integration in Spark Streaming. All this with the disclaimer that this happens to be my first experiment with Spark Streaming.

    (tags: spark kafka realtime architecture queues avro bijection batch-processing)

  • Mandos

    ‘a system for allowing servers with encrypted root file systems to reboot unattended and/or remotely.’ (via Tony Finch)

    (tags: via:fanf mandos encryption security server ops sysadmin linux)

  • Zonify

    ‘a set of command line tools for managing Route53 DNS for an AWS infrastructure. It intelligently uses tags and other metadata to automatically create the associated DNS records.’

    (tags: zonify aws dns ec2 route53 ops)

  • Mike Perham on Twitter: “Sweet, monit just sent a DMCA takedown notice to @github to remove Inspeqtor.”

    ‘The work, Inspeqtor which is hosted at GitHub, is far from a “clean-room” implementation. This is basically a rewrite of Monit in Go, even using the same configuration language that is used in Monit, verbatim. a. [private] himself admits that Inspeqtor is “heavily influenced” by Monit https://github.com/mperham/inspeqtor/wiki/Other-Solutions. b. This tweet by [private] demonstrates intent. https://twitter.com/mperham/status/452160352940064768 “OSS nerds: redesign and build monit in Go. Sell it commercially. Make $$$$. I will be your first customer.”’ IANAL, but using the same config language does not demonstrate copyright infringement…

    (tags: copyright dmca tildeslash monit inspeqtor github ops oss agpl)

  • YOU AND YOUR DAMNED GAMES, JON STONE — Why bother with #gamergate?

    So what is #gamergate? #gamergate is a mob with torches aloft, hunting for any combustible dwelling and calling it a monster’s lair. #gamergate is a rage train, and everyone with an axe to grind wants a ride. Its fuel is a sour mash of entitlement, insecurity, arrogance and alienation. #gamergate is a vindication quest for political intolerance. #gamergate is revenge for every imagined slight. #gamergate is Viz’s Meddlesome Ratbag.

    (tags: gamergate culture gaming 4chan mobs feminism)

Categories: FLOSS Project Planets

Christian Grobmeier: JArchitect 4.0

Mon, 2014-10-06 17:00

Last year around the same time, I checked out JArchitect for the first time and concluded that it was a useful, but also an ugly, companion. Luckily for me, I got the chance to check out the big new version of JArchitect, which is a giant step forward in terms of usability.

Categories: FLOSS Project Planets

David Reid: Arizona Road Trip, Day 2

Mon, 2014-10-06 13:56
The Plan

From Phoenix, AZ we planned to head to the Petrified Forest National Park before continuing to Chinle, AZ.

The Route

Our route was to take the AZ-101 Loop East from north Scottsdale to E Shea Boulevard. We took E Shea Boulevard east until reaching AZ-87 North to the intersection with AZ-260 E. When we reached the intersection with AZ-277N we turned left and continued north for around 7 miles before turning onto AZ-377N to Holbrook. After lunch in Holbrook we then followed the signs to Petrified Forest entering via the southern gate.

From the Petrified Forest we were going to the I-40 E and then US 191 N to Chinle.

The Day

The weather was terrible when we left Phoenix. Much of the road from the hotel to the AZ-101 Loop was covered in brown water. The rain had started overnight, and regular flash flood alerts on our mobiles seemed to suggest it wasn’t stopping soon. In fact, as we drove, the reports on the radio suggested it was very unusual, and we would later hear it referred to as the “hundred year storm”.

Our choice of car suddenly seemed like a good one as the 4 wheel drive and the additional ground clearance made the drive easier than it could have been.

Throughout the drive to Holbrook we were dogged by rain and overcast skies, but as we approached Holbrook the sun appeared and the rain stopped. After a quick lunch in Holbrook we drove to the south gate of the Petrified Forest National Park with the idea of driving north through the park and exiting onto I-40.

Pictures from our trip through the park are online here.

The pictures we had seen before we travelled didn’t do full justice to the park. It’s a fantastic location with truly amazing scenery and seems surrounded by the phenomenal horizons that the area is famous for. Throughout our visit the sky was overcast with only occasional bursts of sunshine, so my pictures don’t really capture the colours or atmosphere.

The northern exit from the park is at the Painted Desert Visitor Centre and from there we took the I-40E to US-191N onwards to Chinle where we stayed at the Holiday Inn just outside the entrance to Canyon de Chelly.

Not long after we left I-40, our mobile phone service stopped, and this was the case until we arrived in Page several days later! Apparently, if you have an international mobile, this is the case in Navajo territory, as the mobile provider does not have any international agreements. While we routinely had WiFi, the lack of easy mobile use did prove to be inconvenient on a few occasions.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-05

Sun, 2014-10-05 18:58
  • ‘In 1976 I discovered Ebola, now I fear an unimaginable tragedy’ | World news | The Observer

    An interview with the scientist who was part of the team which discovered the Ebola virus in 1976:

    Other samples from the nun, who had since died, arrived from Kinshasa. When we were just about able to begin examining the virus under an electron microscope, the World Health Organisation instructed us to send all of our samples to a high-security lab in England. But my boss at the time wanted to bring our work to conclusion no matter what. He grabbed a vial containing virus material to examine it, but his hand was shaking and he dropped it on a colleague’s foot. The vial shattered. My only thought was: “Oh, shit!” We immediately disinfected everything, and luckily our colleague was wearing thick leather shoes. Nothing happened to any of us.

    (tags: ebola epidemiology health africa labs history medicine)

Categories: FLOSS Project Planets

Christian Grobmeier: Webcamp Zagreb 2014

Sun, 2014-10-05 17:00

Last weekend I was attending the WebCamp Zagreb.

Categories: FLOSS Project Planets

Bryan Pendleton: Derby NetworkServer SocketPermission

Sun, 2014-10-05 11:48

Last winter, there was a fairly large and complex Java update: Java™ SE Development Kit 7, Update 51 (JDK 7u51).

There's lots to read in that announcement, but this part particularly affects users of the Derby Network Server:

Change in Default Socket Permissions

The default socket permissions assigned to all code including untrusted code have been changed in this release. Previously, all code was able to bind any socket type to any port number greater than or equal to 1024. It is still possible to bind sockets to the ephemeral port range on each system. The exact range of ephemeral ports varies from one operating system to another, but it is typically in the high range (such as from 49152 to 65535). The new restriction is that binding sockets outside of the ephemeral range now requires an explicit permission in the system security policy.

Most applications using client tcp sockets and a security manager will not see any problem, as these typically bind to ephemeral ports anyway. Applications using datagram sockets or server tcp sockets (and a security manager) may encounter security exceptions where none were seen before. If this occurs, users should review whether the port number being requested is expected, and if this is the case, a socket permission grant can be added to the local security policy, to resolve the issue.

See 8011786 (not public).

For users of Derby, this causes the symptoms described by DERBY-6438.

There is a JVM security change (from Oracle, and picked up by IBM) that removes or limits the 'range of ports' on which JVMs grant the "listen" permission by default. I cannot find details about this JVM change, but as a result of it, users who have (unknowingly) relied on this behavior in the past will now have to modify their policy files, or Network Server will no longer work.

Happily, it's not terribly hard to modify your Java security policy to allow Derby to run again: Unable to start derby database from Netbeans 7.4

Because java.policy is a Unix-style, read-only file, I opened and edited it with Notepad++ running as administrator (under the same Java home):

C:\Program Files\Java\jdk1.7.0_51\jre\lib\security\java.policy

Add these lines to the file after the first grant:

grant {
    permission java.net.SocketPermission "localhost:1527", "listen";
};

Saving the file is a little tricky because of the permissions, but if you run Notepad++ (or any other editor) as administrator, you can work around that.

In older versions of Java, that security file tended to read something like:


// allows anyone to listen on un-privileged ports
permission java.net.SocketPermission "localhost:1024-", "listen";

And that's why Derby used to run successfully with those older Java versions.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-04

Sat, 2014-10-04 18:58
  • Ebola vaccine delayed by IP spat

    This is the downside of publicly-funded labs selling patent-licensing rights to private companies:

    Given the urgency, it’s inexplicable that one of the candidate vaccines, developed at the Public Health Agency of Canada (PHAC) in Winnipeg, has yet to go in the first volunteer’s arm, says virologist Heinz Feldmann, who helped develop the vaccine while at PHAC. “It’s a farce; these doses are lying around there while people are dying in Africa,” says Feldmann, who now works at the Rocky Mountain Laboratories of the U.S. National Institute of Allergy and Infectious Diseases (NIAID) in Hamilton, Montana. At the center of the controversy is NewLink Genetics, a small company in Ames, Iowa, that bought a license to the vaccine’s commercialization from the Canadian government in 2010, and is now suddenly caught up in what WHO calls “the most severe acute public health emergency seen in modern times.” Becker and others say the company has been dragging its feet the past 2 months because it is worried about losing control over the development of the vaccine.

    (tags: ip patents drugs ebola canada phac newlink-genetics health epidemics vaccines)

  • sferik/t

    “A command-line power tool for Twitter.” It really is — much better timeline searchability than the “real” Twitter UI, for example

    (tags: twitter ruby github cli tools unix search)

  • Ebola: While Big Pharma Slept

We’ve had almost 40 years to develop, test and stockpile an Ebola vaccine. That has not happened because big pharma has been entirely focused on shareholder value and profits over safety and survival from a deadly virus. For the better part of Ebola’s 38 years, big pharma has been asleep. The question ahead is what virus or superbug will wake them up?

    (tags: pharma ebola ip patents health drugs africa research)

Categories: FLOSS Project Planets

Bryan Pendleton: Summertime reading

Sat, 2014-10-04 12:54

Boy, I'm all over the place recently.

Must be the 90 degree temperatures.

Or all the candy corn I've been eating...

  • Play the Backstay -- Simply put, the backstay can bend the mast when tightened and straighten the mast when loose. Looking specifically at the mainsail, this can affect the fullness or draft as well as the twist of the mainsail.
  • PCC: Re-architecting Congestion Control for Consistent High Performance -- The design rationale behind TCP’s hardwired mapping is to make assumptions about the packet-level events. When seeing a packet-level event, TCP assumes the network is in a certain state and thus tries to optimize the performance by triggering a predefined control behavior as response to that assumed network state. However, in real networks, the observed packet-level events are often not a result of the assumed network condition. When this assumed link breaks, TCP still mechanically carries out the mismatched control response and severely degraded performance follows.
  • That's not an unreasonable approach. -- When I came up with the original approach to congestion control in TCP 30 years ago (see Internet RFCs 896 and 970), TCP behavior under load was so awful that connections would stall out and fail. The ad-hoc solutions we put in back then at least made TCP well behaved under load.
  • Why is 0x00400000 the default base address for an executable? -- In order to make context switching fast, the Windows 3.1 virtual machine manager "rounds up" the per-VM context to 4MB. It does this so that a memory context switch can be performed by simply updating a single 32-bit value in the page directory.
  • 'Bloodletting' at Downtown Project with Massive Layoffs -- For a time, it has seemed as if the growth would never stop in Downtown Las Vegas: land purchases, new businesses and development at every turn, most of it driven by Downtown Project, the redevelopment group funded by a $350 million investment from Zappos CEO Tony Hsieh.
  • Founder Suicides -- And yesterday, the suicide article - The Downtown Project Suicides: Can the Pursuit of Happiness Kill You? - appeared. It’s a rough one that talks about three suicides – Jody Sherman (4/13), Ovik Banerjee (1/14), and Matt Berman (4/14) – all people involved in the Vegas Tech phenomenon.
  • A State of Xen - Chaos Monkey & Cassandra -- “When we got the news about the emergency EC2 reboots, our jaws dropped. When we got the list of how many Cassandra nodes would be affected, I felt ill. Then I remembered all the Chaos Monkey exercises we’ve gone through. My reaction was, “Bring it on!”.” - Christos Kalantzis - Engineering Manager, Cloud Database Engineering
  • Apache Derby 10.11.1.1 released -- The Apache Derby project is pleased to announce feature release 10.11.1.1.
  • After raising $50M, Reddit forces all remote workers to relocate to SF -- Wong chimed in on Twitter with confirmation of the new employee policy, which he said was decided independent of the new investment. “Intention is to get whole team under one roof for optimal teamwork. Our goal is to retain 100 percent of the team,” he said.
  • reddit -- it’s always bothered me that users create so much of the value of sites like reddit but don’t own any of it. So, the Series B Investors are giving 10% of our shares in this round to the people in the reddit community, and I hope we increase community ownership over time. We have some creative thoughts about the mechanics of this, but it’ll take us awhile to sort through all the issues. If it works as we hope, it’s going to be really cool and hopefully a new way to think about community ownership.
  • Before the Startup -- Startups are very counterintuitive. I'm not sure why. Maybe it's just because knowledge about them hasn't permeated our culture yet. But whatever the reason, starting a startup is a task where you can't always trust your instincts.
  • HFT In My Backyard: The Office -- The Wavre tower is the third tallest structure in Belgium. Once again, I parked my car far from the tower and walked through the fields. Some techs were working at the top of the 250 meter tower, but they looked tiny from my vantage point on the ground. Unlike the Houtem tower, which needs guy wires to remain erect, the Wavre tower is a beautiful “standing structure”
  • A technological solution to best execution and excessive market complexity -- We believe there is a way for regulation to be simplified and made more powerful at the same time. Trade publication standards can be created to support improved customer choice and to simplify and strengthen the market place. This would replace the need for more complex and costly regulation. Our suggestions apply to both the USA and to Europe but this paper will concentrate on the unique market structure of the U.S. equity markets after Regulation NMS (Reg NMS).
  • How does SQLite work? Part 1: pages! -- Modifying open source programs to print out debug information to understand their internals better: SO FUN.
  • Peter Thiel's Zero to One Might Be the Best Business Book I've Read -- Thiel, a founder of PayPal and the data analytics firm Palantir, might be best known for his idiosyncrasies, which helped inspire the character of Peter Gregory in the HBO series Silicon Valley. Indeed, the recipients of Thiel's donations seem torn from the pages of a Philip K. Dick novel: an anti-aging biotech firm, an organization dedicated to building ocean communities underwater, and a foundation that pays teenagers to drop out of college and start new companies. Say what you want about the Thielian future of cyborg teenagers living for 200 years in pressurized cabins under the Caribbean; this is not a man to be faulted for thinking too small.
  • The NSA And Me -- More than three decades later, the NSA, like a mom-and-pop operation that has exploded into a global industry, now employs sweeping powers of surveillance that Frank Church could scarcely have imagined in the days of wired phones and clunky typewriters. At the same time, the Senate intelligence committee he once chaired has done an about face, protecting the agencies from the public rather than the public from the agencies.
  • “Not” Neutrality? -- the reason the interconnect utilization between Level 3 and LEC1 and LEC3 improved is that these LECs forced Netflix to pay them to interconnect directly with them. And as Netflix CEO Reed Hastings has pointed out several times, Netflix didn’t do that because they were taking advantage of a highly competitive Internet marketplace. They did it because they had no choice: all third-party content that LEC broadband users want to see eventually has to go through LEC interconnection points. When the LEC tries to turn these interconnection points into Internet tollbooths there is no alternate path for the content to take to reach the consumers.
  • Thirteen Ways of Looking at Greg Maddux: A World Series Requiem -- I can't think of Jason today, or of the days leading up to his death, without thinking of Greg Maddux. And I can't think of Maddux without thinking of Verducci. With Maddux's Hall of Fame induction this summer, after having not opened the trunk in years, I cracked it open, brushed the dust off Verducci's article, and found the movie of my memory experiencing technical difficulties.
Categories: FLOSS Project Planets

Adrian Sutton: Don’t Make Your Design Responsive

Fri, 2014-10-03 23:17

Every web based product is adding “responsive design” to their feature lists. Unfortunately in many cases that responsive design is actually making their product much harder to use on a variety of screen sizes instead of easier.

The problem is that common CSS libraries and grid systems, including the extremely popular bootstrap, imply that a design can be made responsive using fixed cut-off points and the design just automatically adjusts. In reality making a design responsive requires tailoring the cut off points so that the design adjusts at the points where it stops working well.

For example, let’s look at the bootstrap documentation and in particular the top menu it uses. The bootstrap documentation gets it right, we can shrink the window down right to the point where the menus only just fit and they stick to the full size design:

Menu remains full-size for as long as it can fit.

If we shrink the window further the menus wouldn’t fit anymore so they correctly switch to the hamburger style:

Menu collapses once it would no longer fit.

That’s the right way to do it. The cutoff points have been specifically tailored for the content. There are other stylistic changes as well as these structural ones – the designer believes that centred text works better for headings on smaller screens for example. That’s fine, they’re fairly arbitrary design decisions based on what the designer believes looks best. I’m focussed on structural issues.

To see what happens if we ignore this, let’s pretend that we add a new menu item:

Our “New Item” shown correctly when the window is wide enough.

But now when we shrink the window down, the breakpoint is in the wrong place:

The “New Item” menu no longer fits but causes incorrect wrapping because the break point is wrong.

Now the design breaks as we shrink the window because the break point hasn’t been specifically tailored to the content. This is the type of error that regularly happens when people think that a responsive grid system can automatically make their site responsive. The repercussions in this case aren’t particularly bad, but it can be significantly worse.

Recently Jenkins released a rewrite of their UI moving it to using bootstrap which has unfortunately gotten responsive design completely wrong (and sadly I’ve yet to see anything it’s actually improved). Browser widths that used to work perfectly well with the desktop-only site are now treated as mobile browsers and content wraps into a long column. What’s worse, the most useless content, the sidebar, is what’s shown at the top with the main content pushed way down the page. At other widths the design doesn’t fit but doesn’t wrap, leaving some links completely inaccessible.

It would be much better if people stopped jumping on the responsive design bandwagon and just designed their site to work for desktop browsers unless they are prepared to fully invest and do responsive design right. Mobile browsers are designed to work well with sites designed for desktop and have lots of tools and techniques for coping with them. As you add responsive design and other adjustments designed for mobile, you take responsibility for making the design work well everywhere, shifting it from the browser onto yourself. Using predefined breakpoints from a CSS library is unlikely to give the result you intend. It would be nice if CSS libraries stopped claiming that it will.

Categories: FLOSS Project Planets

Chris Hostetter: Pivot Facets: Inside and Out

Fri, 2014-10-03 13:02

Solr 4.10 will be released any minute now, and with it comes the much requested distributed query support for Pivot Faceting (aka: SOLR-2894). Today we have a special guest post from 4 folks at CareerBuilder who helped make distributed Pivot Faceting a reality: Trey Grainger, Brett Lucey, Andrew Muldowney, and Chris Russell.

What are Pivot Facets?

If you’ve used Lucene / Solr in the past, you are most likely familiar with faceting, which provides the ability to see the aggregate counts of search results broken down by a specific category (often the list of values within a specific field). For example, if you were running a search for restaurants (example docs), you could get a list of top 3 cities with restaurants boasting 4- or 5-star ratings with the following request:

Query:

/select?q=*:*
    &fq=rating:[4 TO 5]
    &facet=true
    &facet.limit=3
    &facet.mincount=1
    &facet.field=city

Results:

{ ...
  "facet_counts":{ ...
    "facet_fields":{
      "city":[
        "Atlanta",4,
        "Chicago",3,
        "New York City",3]}
    ... }

This is a very fast and flexible way to provide real-time analytics (with ad hoc querying abilities through the use of keywords), but in any reasonably-sophisticated analytics system you will want to analyze your data in multiple dimensions. With the release of Solr 4.0, a new feature was introduced which allowed not just breaking down the values by a single facet category, but also by any additional sub-categories. Using our restaurants example, let’s say we wanted to see how many 4- and 5-star restaurants exist in the top cities, in the top 3 U.S. states. We can accomplish this through the following request (example taken from Solr in Action):

Query:

/select?q=*:*
    &fq=rating:[4 TO 5]
    &facet=true
    &facet.limit=3
    &facet.pivot.mincount=1
    &facet.pivot=state,city,rating

Results:

{ ...
  "facet_counts":{ ...
    "facet_pivot":{
      "state,city,rating":[{
          "field":"state",
          "value":"GA",
          "count":4,
          "pivot":[{
              "field":"city",
              "value":"Atlanta",
              "count":4,
              "pivot":[{
                  "field":"rating",
                  "value":4,
                  "count":2},
                {
                  "field":"rating",
                  "value":5,
                  "count":2}]}]},
        {
          "field":"state",
          "value":"IL",
          "count":3,
          "pivot":[{
              "field":"city",
              "value":"Chicago",
              "count":3,
              "pivot":[{
                  "field":"rating",
                  "value":4,
                  "count":2},
                {
                  "field":"rating",
                  "value":5,
                  "count":1}]}]},
        {
          "field":"state",
          "value":"NY",
          "count":3,
          "pivot":[{
              "field":"city",
              "value":"New York City",
              "count":3,
              "pivot":[{
                  "field":"rating",
                  "value":5,
                  "count":2},
                {
                  "field":"rating",
                  "value":4,
                  "count":1}]}]}
        ... ]}}}

This example demonstrates a three-level Pivot Facet, as defined by the “facet.pivot=state,city,rating” parameter. This allows for interesting analytics capabilities in a single request without requiring you to re-execute the query multiple times to generate facet counts for each level. If you were searching an index of social networking profiles instead of restaurant reviews, you might instead break documents down by categories like gender, school, school degree, or even a company or job title. By being able to pivot on each of these different kinds of information, you can uncover a wealth of knowledge through exploring the aggregate relationships between your documents.

For full documentation on how to use Pivot Faceting in Solr (including supported request parameters and additional examples), checkout the Pivot Faceting section in the Solr Reference Guide.
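For readers using SolrJ, here is a hedged sketch of issuing the same multi-level pivot request programmatically (the server URL and core name are illustrative; the SolrJ 4.x API is assumed):

import java.util.List;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrServer;
import org.apache.solr.client.solrj.response.PivotField;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.util.NamedList;

// Illustrative server URL and core name
HttpSolrServer solr = new HttpSolrServer("http://localhost:8983/solr/restaurants");

SolrQuery query = new SolrQuery("*:*");
query.addFilterQuery("rating:[4 TO 5]");
query.setFacet(true);
query.set("facet.limit", 3);
query.set("facet.pivot.mincount", 1);
query.addFacetPivotField("state,city,rating");

QueryResponse response = solr.query(query);

// Each entry maps the "state,city,rating" key to a tree of PivotField nodes.
NamedList<List<PivotField>> pivots = response.getFacetPivot();
for (PivotField state : pivots.get("state,city,rating")) {
    System.out.println(state.getValue() + " (" + state.getCount() + ")");
    for (PivotField city : state.getPivot()) {
        System.out.println("  " + city.getValue() + " (" + city.getCount() + ")");
    }
}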

Implementing “Distributed” Pivot Faceting Support

Faceting in a distributed environment is very complex. Even though Pivot Faceting has been supported in Solr for almost 2 years (since Solr 4.0), it has taken a significant amount of additional engineering to get it working in a distributed environment (like SolrCloud). Here’s a bit of the history of the feature, along with details on the technical implementation for those who are interested in a deeper understanding of the internals.

History

Pivot Faceting began life as SOLR-792; Erik Hatcher wrote the original code to present hierarchical facets in non-distributed environments, and it was released in Solr 4.0. This work was later expanded into SOLR-2894 to deal specifically with distributed environments like SolrCloud.

Needing this capability for some of CareerBuilder’s data analytics products, Chris Russell took a stab at applying and integrating an early community-generated version of the patch. After getting the patch working on CareerBuilder’s version of Solr, the team found that distributed environment support was missing a key feature: the available patch understood the need to merge responses from distributed shards, but it had no code to deal with “refinement” requests to ensure accurate aggregate results were returned. Trey Grainger and Chris Russell began architecting a scalable solution for supporting nested faceting refinement requests, and they eventually handed off the work to Andrew Muldowney to continue implementing. Andrew pulled through the refinement work, getting the patch to a point where it worked accurately in a distributed Solr configuration – albeit slowly.

As CareerBuilder’s demands on this patch increased, we pulled in Brett Lucey to take up the task of improving performance. Brett optimized the refinement logic and data structures to ultimately improve performance by 80x. At this point, Chris Hostetter took up the SOLR-2894 mantle and created a robust test suite which uncovered several bugs that were fixed by the team at CareerBuilder.

The Challenge

Before we get into the details of the implementation, here are some useful terms to be familiar with for the discussion:

  • Term – A specific value from a field
  • Limit – Maximum number of terms to be returned
  • Offset – The number of top facet values to skip in the response (just like paging through search results and choosing an offset of 51 to start on page 2 when showing 50 results per page)
  • Shard – A searchable partition of documents (represented as a Solr “core”) containing a subset of a collection’s index
  • Refinement – The act of asking individual shards for the counts of specific terms that they did not originally return, but which were returned from one or more other shards and subsequently need to be retrieved from all shards for accurate processing. In the context of Distributed Pivot Facets, which contain nested facet levels, the “term” will include a list of parent constraints for any previously processed levels.

When a distributed request is received, the request is distributed across multiple shards, each containing a partition of the collection’s index. Since each shard has a different subset of the index, each shard will respond with the answer that is locally correct based only upon its own data. Work must then be done to collate all these locally correct answers and determine what is globally correct in aggregate, across the entire collection of shards. This process (known as refinement) requires asking each shard for counts of specific terms found in other shards.

In traditional single-level faceting there is only one round of refinement. The collated response is examined to determine which terms need to be refined and the requests are sent. Once those requests are answered the collation now has perfect information for the facet values. This is not true for Pivot Facets, however: while each level of a Pivot Facet is only refined once, the information retrieved from those refinements can – and often does – change the terms being examined for refinement on the subsequent levels, which means we need to store the state of values which come back from each shard and intelligently issue refinement requests as needed to calculate accurate numbers for each level. This requires a lot more work. Pivot faceting is expensive: refinement on a multi-tier facet can take a lot of time. We invested quite heavily in getting the right data structures in place to make this process as fast as possible.

The Implementation

When a Distributed Pivot Faceting request is received, the original query is massaged before being passed along to each shard in the Solr collection. During this massaging, the limit is increased by the offset and then the offset is removed from the query. The limit is then increased, or over-requested, in an attempt to minimize refinement (because if we get all the top values back in the first request, there is no need for an additional refinement request for that level).

We now need to keep a complete record of each shard’s response: each shard’s response is saved in an array, and a combined response from all shards is also saved. The combined response is then inspected, and candidates for possible refinement are selected.

Refinement candidates fall into two categories: terms within the limit specified and terms that could possibly be within the limit if refined. To determine if a term might fall within the limit if refined, we inspect each shard’s count for the given term. If any shard does not have a count for that term, we then take the lowest count returned by that shard for the respective field. The reason for this is simple: that count is the highest the shard could return for the particular term. If the combined count for the value is large enough that it would be within the limit, we then refine on it.

Because each successive level is highly dependent on the refinement of the preceding level, we do not move to subsequent levels until all of the preceding level’s refinement requests have been answered. After each level has been refined, the combined result is trimmed; correct limits and offsets are applied to each level and the combined result is converted to the proper output format to be returned.
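A simplified sketch of the upper-bound logic described above (the helper types and method names here are hypothetical, for illustration only, not real Solr classes):

// Hypothetical type: ShardFacetResponse is not part of the Solr codebase.
long maxPossibleCount(String term, List<ShardFacetResponse> shardResponses) {
    long total = 0;
    for (ShardFacetResponse shard : shardResponses) {
        Long count = shard.getCount(term);  // null if this shard did not return the term
        // A shard that omitted the term can contribute at most the lowest
        // count it did return for this field.
        total += (count != null) ? count : shard.getLowestReturnedCount();
    }
    return total;
}
// If maxPossibleCount(...) would place the term within the limit, a
// refinement request is sent to each shard that omitted the term.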

Making use of Distributed Pivot Faceting

How does CareerBuilder make use of Distributed Pivot Faceting?

Our primary use case is to power CareerBuilder’s Supply & Demand and Compensation data analytics products, which provide deep insights into labor market data. For example, when someone searches for “accountant” (or any other keyword query), we execute a search across a collection of resumes (supply) and a collection of jobs (demand) and facet on the resulting data. We might facet first by a field representing the collection (“supply” or “demand”), and then pivot on interesting information in that data (e.g., years of experience, location, education level, etc.). Below is a screenshot of our Supply & Demand product demonstrating labor market trends for accountants in Massachusetts:

At CareerBuilder, we go a step further and pair Distributed Pivot Faceting with SOLR-3583 in order to get percentiles and other statistics on each level of pivoting. For example, we might facet on “supply” data, pivot on “education level” and then get the percentile statistics (25th, 50th, 75th, etc.) on the salary range for job seekers who fall into that category. Here is an example of how we make use of this data in our Compensation Portal for reporting on labor market compensation trends:

Below is a slide from a talk on our use of Solr for analytics at Lucene Revolution 2013 in which Trey Grainger describes how we make use of these kinds of statistics along with Distributed Pivot Faceting (see minutes 21:30 to 26:00 in the video or slides 41-42):

The SOLR-3583 distributed pivot statistics patch has not been committed to Solr, but readers can investigate it if they have a similar use case. An improved version of SOLR-3583 is currently under development (SOLR-6350 + SOLR-6351) which will likely replace SOLR-3583 in the future.

What are the current limitations of Distributed Pivot Faceting?
  • facet.pivot.mincount=0 doesn’t work well in Distributed Pivot Faceting. (SOLR-6329)
  • Pivot Faceting (whether distributed or not) only supports faceting on field values (SOLR-6353)
  • Distributed Pivot Faceting may not work well with some custom FieldTypes. (SOLR-6330)
  • Using facet.* parameters as local params inside of facet.field causes problems in distributed search. (SOLR-6193)
How scalable is Distributed Pivot Faceting?

In general, Pivot Faceting can be expensive. Even on a single non-distributed Solr search, if you aren’t careful about setting appropriate facet.limit parameters at each level of your Pivot Facet, the number of dimensions you are requesting back can grow exponentially and quickly run you out of system resources (creating memory and garbage collection issues). This is particularly true if you set your facet.limit=-1 on a field with many unique values. That being said, when you use the feature responsibly, having the distributed support really enables you to build powerful, scalable analytics products on top of Solr. At CareerBuilder, we have utilized the Distributed Pivot Facet feature successfully on a cluster containing hundreds of millions of full-text documents (jobs and resumes) spread across almost 150 shards with sub-second response times, which is very efficient given the amount of data and processing involved.

With Distributed Pivot Faceting support now in place, there are several exciting new features which we believe will finally be able to see the light of day in Solr. In particular, it should soon be possible to combine different facet types at each level (currently Pivot Facets only support faceting on field values, not functions or ranges), and to also provide additional meta information such as statistics (sums, averages, percentiles, etc.) at each facet level. It is a really exciting time for Solr as it moves towards providing a very robust suite of real-time analytics capabilities which are already being used to power cutting-edge products throughout the marketplace.

Thanks again to Trey, Brett, Andrew, Chris, and their co-workers at CareerBuilder — both for the work they did on this patch, as well as for writing & editing this great article on how Pivot Faceting works, and how it can be used.

-Hoss

The post Pivot Facets: Inside and Out appeared first on Lucidworks.

Categories: FLOSS Project Planets

Jean-Baptiste Onofré: Encrypt ConfigAdmin properties values in Apache Karaf

Fri, 2014-10-03 11:41

Apache Karaf loads all the configuration from etc/*.cfg files by default, using a mix of Felix FileInstall and Felix ConfigAdmin.

These files are regular properties file looking like:

key=value

Some values may be critical, and so should not be stored in plain text. They could be critical business data (credit card numbers, etc.), or technical data (passwords to different systems, like a database for instance).

We want to encrypt this kind of data in the etc/*.cfg files, while still being able to use it normally in the application.

Karaf provides a nice feature for that: jasypt-encryption.

It’s very easy to use, especially with Blueprint.

The jasypt-encryption feature is optional, so you have to install it first:

karaf@root()> feature:install jasypt-encryption

This feature provides:

  • jasypt bundle
  • a namespace handler (enc:*) for blueprint

Now, we can create a cfg file containing an encrypted value. The encrypted value is “wrapped” in an ENC() function.

For instance, we can create etc/my.cfg file containing:

mydb.url=host:port
mydb.username=username
mydb.password=ENC(zRM7Pb/NiKyCalroBz8CKw==)
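To generate that encrypted value in the first place, you can call jasypt directly. Here is a minimal sketch, assuming the same PBEWithMD5AndDES algorithm and an ENCRYPTION_PASSWORD environment variable holding the encryption password (the plaintext secret is illustrative):

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;

public class GenerateEncryptedValue {
    public static void main(String[] args) {
        StandardPBEStringEncryptor encryptor = new StandardPBEStringEncryptor();
        encryptor.setAlgorithm("PBEWithMD5AndDES");  // must match the Blueprint config below
        encryptor.setPassword(System.getenv("ENCRYPTION_PASSWORD"));

        // Wrap the output in ENC(...) and paste it into etc/my.cfg
        System.out.println("ENC(" + encryptor.encrypt("my-secret-password") + ")");
    }
}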

In the Blueprint descriptor of our application (such as a Camel route Blueprint XML), we use the “regular” cm namespace (to load ConfigAdmin) and add a Jasypt configuration using the enc namespace.

For instance, the blueprint XML could look like:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

    <cm:property-placeholder persistent-id="my" update-strategy="reload">
        <cm:default-properties>
            <cm:property name="mydb.url" value="localhost:9999"/>
            <cm:property name="mydb.username" value="sa"/>
            <cm:property name="mydb.password" value="ENC(xxxxx)"/>
        </cm:default-properties>
    </cm:property-placeholder>

    <enc:property-placeholder>
        <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
            <property name="config">
                <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
                    <property name="algorithm" value="PBEWithMD5AndDES"/>
                    <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
                </bean>
            </property>
        </enc:encryptor>
    </enc:property-placeholder>

    <bean id="dbbean" class="...">
        <property name="url" value="${mydb.url}"/>
        <property name="username" value="${mydb.username}"/>
        <property name="password" value="${mydb.password}"/>
    </bean>

    <camelContext xmlns="http://camel.apache.org/schemas/blueprint">
        <route>
            ...
            <process ref="dbbean"/>
            ...
        </route>
    </camelContext>

</blueprint>

It’s also possible to use encryption outside of ConfigAdmin, by directly loading an “external” properties file using the ext Blueprint namespace:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0"
           xmlns:enc="http://karaf.apache.org/xmlns/jasypt/v1.0.0">

    <ext:property-placeholder>
        <ext:location>file:etc/db.properties</ext:location>
    </ext:property-placeholder>

    <enc:property-placeholder>
        <enc:encryptor class="org.jasypt.encryption.pbe.StandardPBEStringEncryptor">
            <property name="config">
                <bean class="org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig">
                    <property name="algorithm" value="PBEWithMD5AndDES"/>
                    <property name="passwordEnvName" value="ENCRYPTION_PASSWORD"/>
                </bean>
            </property>
        </enc:encryptor>
    </enc:property-placeholder>

    ...

</blueprint>

where etc/db.properties looks like:

mydb.url=host:port
mydb.username=username
mydb.password=ENC(zRM7Pb/NiKyCalroBz8CKw==)

It’s also possible to use ConfigAdmin directly in code. In that case, you have to create the Jasypt configuration programmatically:

import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.jasypt.encryption.pbe.config.EnvironmentStringPBEConfig;

StandardPBEStringEncryptor enc = new StandardPBEStringEncryptor();
EnvironmentStringPBEConfig env = new EnvironmentStringPBEConfig();
env.setAlgorithm("PBEWithMD5AndDES");
// Read the encryption password from the ENCRYPTION_PASSWORD environment
// variable, matching the Blueprint configuration above.
env.setPasswordEnvName("ENCRYPTION_PASSWORD");
enc.setConfig(env);
...
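From there, reading and decrypting a property fetched from ConfigAdmin might look like the following sketch; note that the ENC() unwrapping is done by hand here (the enc:* namespace handler only does it for Blueprint), and the PID matches the etc/my.cfg example above:

import java.io.IOException;
import java.util.Dictionary;
import org.jasypt.encryption.pbe.StandardPBEStringEncryptor;
import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

public class DecryptFromConfigAdmin {

    // configAdmin would typically be injected (Blueprint, Declarative Services, ...)
    public String readDbPassword(ConfigurationAdmin configAdmin,
                                 StandardPBEStringEncryptor enc) throws IOException {
        Configuration config = configAdmin.getConfiguration("my"); // PID of etc/my.cfg
        Dictionary<String, Object> props = config.getProperties();
        String raw = (String) props.get("mydb.password");

        // The ENC(...) wrapper is only unwrapped automatically by the Blueprint
        // namespace handler, so in plain code we strip it before decrypting.
        if (raw != null && raw.startsWith("ENC(") && raw.endsWith(")")) {
            return enc.decrypt(raw.substring(4, raw.length() - 1));
        }
        return raw;
    }
}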
Categories: FLOSS Project Planets

Bryan Pendleton: There's just not enough time

Thu, 2014-10-02 21:56

I need some rainy days; when it's sunny and beautiful outside I find excuses not to read ...

  • Visual Explanations - Tufte's best book - Printed with love, including pages with pasted-in cutouts, this timeless book will never go out of date, and is likely to be passed on to future generations.
  • More on Facebook's "Cold Storage" - It is this careful, organized scheduling of the system's activities at data center scale that enables the synergistic cost reductions of cheap power and space. It is, or at least may be, true that the Blu-ray disks have a 50-year lifetime but this isn't what matters. No-one expects the racks to sit in the data center for 50 years, at some point before then they will be obsoleted by some unknown new, much denser and more power-efficient cold storage medium (perhaps DNA).
  • Inside the New York Fed: Secret Recordings and a Culture Clash - Segarra ultimately recorded about 46 hours of meetings and conversations with her colleagues. Many of these events document key moments leading to her firing. But against the backdrop of the Beim report, they also offer an intimate study of the New York Fed's culture at a pivotal moment in its effort to become a more forceful financial supervisor.
  • Microsoft Closes SVC - Microsoft is being pressed by the shift from PC’s as the main platform—where they had an almost monopoly on the OS—to a place where there are many players in the mobile world. This means that they are less able to support research of an open kind.
  • A Perspective on Computing Research Management - When I came to Microsoft in 1991, I had the opportunity to apply them in building the Silicon Valley research lab, although many of the same principles had already characterized Microsoft Research since its founding in 1991.
  • Loyalty Nearly Killed My Beehive - Then, this past spring, disaster struck. The queen wasn’t laying fertilized eggs, and if I didn’t act quickly, the hive would be dead by the end of summer. Thus began a months-long struggle that I only later realized was really about loyalty: mine to the hive, and the hive’s to its queen.
  • The Mysteries of BCL Time Zone Data - In other words, it’s always the time of day that would have occurred locally if there wasn’t a transition – in IANA time zone language, this is a “wall mode” transition, as it tells you the time you’d see on a wall clock exactly when you need to adjust it.
  • Fun (?) with GnuPG - If the holder of the key does not do anything, the key becomes expired, and the signatures in the signed tags stops validating. Luckily, the validity of a key can be extended by the holder of the key, and once it is done, the signatures made before the key's original expiration date will continue to validate fine.
  • Eight Epic Failures of Regulating Cryptography - If this sounds familiar, it's because regulating encryption was a monstrous proposal officially declared dead in 2001 after threatening Americans' privacy, free speech rights, and innovation for nearly a decade. But like a zombie, it's now rising from the grave, bringing the same disastrous flaws with it.
  • Report offers ideas for a Boston beset by rising seas - A report scheduled to be released Tuesday about preparing Boston for climate change suggests that building canals through the Back Bay neighborhood would help it withstand water levels that could rise as much as 7 feet by 2100. Some roads and public alleys, such as Clarendon Street, could be turned into narrow waterways, the report suggests, allowing the neighborhood to absorb the rising sea with clever engineering projects that double as public amenities.
  • Making better use of dice in games - In 2005, Queen Games published Roma from German game designer Stefan Feld. In this two-player game, players assign actions to die faces, and can only activate those actions by spending a die of the matching value.

    Since then, Feld has embarked on a personal crusade to make dice more interesting.

    “I really like dice,” Feld said, but he wanted players to have control of the game. He didn’t want them to win or lose based on simple luck. In most classic games like Monopoly and Risk, that’s exactly what can happen, and often does.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-02

Thu, 2014-10-02 18:58
Categories: FLOSS Project Planets

Edward J. Yoon: Uniqueness

Thu, 2014-10-02 07:49
On the Korean TV show 생활의 달인 ("Master of Living"), every episode features people who have become the very best at their craft. Their closing remarks are almost always the same: "My dream is to earn enough money to open my own shop someday." They became the best in their field, yet apparently never made much money. Why? Precisely because of a lack of uniqueness.

You don't need to become the best programmer who knows data structures and algorithms inside and out. At best, that only makes you an excellent coder.
Categories: FLOSS Project Planets