FLOSS Project Planets

Recap post to Dantix@reddit (+all): WTH is Community Design anyway?

Planet KDE - Tue, 2014-04-15 01:13

In which I write my first "recap" post about what's been settled, what's been talked about and how things work. Just a little how-to for everyone who just joined us! This time it's about Community Design and why it matters so much.


Photo Jencu "Sharing Toys" CC-SA
I've been meaning to write this post for some time - a sort of recap for people who just joined us in what's going on.
Dantix, a reddit poster, was a tad miffed that the editable combobox was the wrong size for its scroll-down arrow. I'm not trying to call the dude out; I'm sure he (or she) is a brilliant person and didn't mean anything mean by it. It was just a comment, and an apt one, so no harm no foul!

But it told me that I need to talk more about this Community Design thing.

...
There are three points I really want to take up:

1) Everything shown will be shown from scratch. Nothing hidden.
Now from a marketing perspective that sucks. Let's be honest - we all like "the big reveal" where some designer in a turtleneck pulls back the curtains and goes "tadaaa!". We won't do it like that. You will see it from when it's just a mass of scribbles all the way to the finished product.
Why is it like that? Because that way everyone can see the process. It gets demystified and becomes something more accessible and open to all. It shows everyone the trick behind it. Design has become a catch-all for "don't bother me, you wouldn't understand", and I don't think that's a healthy attitude for Open Source to adopt.

2) You are expected to join in if you want to. As long as you play nice the toys are for everyone.
This is the big one. Yeah, yeah, you've heard it before. But it's true. No matter how little of an "eye for design" you have, you have one. Comment, post mockups and try to see the cool things people do and spin off that.
I can't promise that your work will end up being the official theme for Plasma Next - but I can promise that you will influence it. We actually DO listen to comments. To cool ideas especially.
And that's the bit to remember - it's always better to contribute than to just comment, especially if your comment is "I don't like that". We have some rules, and they are essentially: if you post criticism and say what it is you don't like, why you don't like it and how it could be fixed, and propose a fix, that's a gold star comment. If you criticize and specify what it is you don't like and how it could be fixed, that's a silver star comment. If you criticize and specify what it is you don't like... that's bronze. As long as you do it in a nice and cooperative way it's OK to post. If you can't say exactly what it is you don't like about something AND can't be nice about it - don't post.
If you have a cool idea, on the other hand: post. That's the only rule for contributions.
Why is it like this? Because we want to foster a friendly attitude. Design IS communication, and communication hinges on a community. By letting everyone feel like they can contribute with mockups and cool ideas, we get more cool ideas. By playing by the art and design school rules of criticism, we make certain that the other nasty and sadly common thing in design is minimized - the nonsense put-downs meant to make yourself seem "better", or the simple "you suck" comments that do nothing at all for design work.

3) This is a massive social experiment.
Yeah. It is. It's the tricky bit in what I do. On the one hand the goal is to create a stunning visual design for Plasma Next; on the other, the plan is to create a community of designers and make design a "thing" within Plasma, KDE and Open Source in general.
I want to change the way we look at people and divide them into experts and "everyone else". I want to tear down those barriers and make us all feel included, like we're a part of it (like I felt at the first sprint I attended). I want to change the way we handle design, and this work is a test for that.
Why is it like that? Because I am old enough to know that failing is only really good when you fail miserably (that's when you learn things), and that sometimes you've got to aim for the moon and skip the treetops.
When I got into this I talked to some of the other designers who had worked on KDE projects, and many of them were more or less burned out. They had worked themselves to the bone and then crashed because of it. I didn't feel like becoming another one, AND I wanted to fix the issue permanently. So I went for the higher goal, aware that it would mean more work for me personally and a higher risk of failing.

...
Not only so that we would all talk about design more constructively. Not just so everyone would feel they could comment and be a part. Not only so that there would be hundreds of designers where yesterday there were one or two. Not only that.

But so that in the future there would be a model, a system, in which design would be created without the need for a petty expert-dictator whose presence was always needed for the work to go forward. Where the load was shared by all. Where the work was more play than backbreaking labour.

Maybe it will succeed, maybe it will fail - but if it does fail I think we can all agree that it will be a majestic catastrophe of a failure ;)

Next time I'll talk about the Design Vision (it won't be boring, promise): what it is, how we intend to stick to it and where we are now in terms of design guidelines.

Categories: FLOSS Project Planets

Matt Raible: Spring Break in Florida: Golf, Beaches and Boats!

Planet Apache - Mon, 2014-04-14 22:41

Florida is a beautiful state, with sandy beaches, excellent fishing, fun people and a great enthusiasm for golf. When I dreamed up Trish, I knew she'd have an awesome family, but I never expected her parents to have a house on a golf course. Trish's grandma, Claire Stanley, is a legend in her own right. I've never met her, but I knew I loved her when Trish's dad first told me about her "layered shots". When I saw Claire's name listed several times on the walls of The Country Club of Naples, a deep respect came over me. Claire picked out her house on the 17th hole (Trish's favorite number) of the Country Club in 1966, when the establishment was founded. My bus was born in 1966.

Today, Trish's parents have turned it into a golf and relaxation oasis, complete with beautiful orchid gardens, a sweet pool and Japanese decorations from the country where they first met.

As a golf enthusiast, I'm embarrassed that we've only visited her parents in Naples once before. We bought Abbie and Jack golf clubs last year for Jack's birthday. To make up for our lack of visits to family in Florida, we took our kids on a golf vacation for this year's Spring Break. We packed up our clubs, our swimsuits and lots of sunblock, and headed for Naples a couple of weeks ago.

We learned a technique for kids golfing in September 2012: have them tee off from the 150 marker so they have a chance to par the hole. We arrived on a Saturday, had a nice family brunch on Sunday and played golf with the kids that afternoon. The kids had a blast, even beating us on a couple holes.

The sunset on Sunday evening was gorgeous, especially with the kids frolicking in the waves.

On Monday, we woke up early and boarded the Key West Express before sunrise.

We arrived in Key West around noon and hung out by our hotel pool for most of the day. That evening, after marveling at the sunset, we took the kids on a Key West Ghost Tour. It was a bit long at 1.5 hours, but the story of Robert the Doll was excellent.

Tuesday morning, we met up with Abbie and Jack's Mom, Julie, and her fiancé Dave. They had Dave's kids with them, and we all headed out for a sailing, parasailing and jet skiing adventure together. Everyone had a great time, as is evidenced by the photos below.

Trish and I handed the kids over to Julie that afternoon and spent a few more hours in Key West. We rented bikes, visited the southernmost point in the US and enjoyed a cocktail on the beach. We especially liked seeing the house that Trish's Mom, Maureen, grew up in.

Around happy hour, after visiting the World of Beer, we hopped back on the Key West Express and rode back to Trish's parents.

The rest of the week was an ideal vacation: golfing, swimming in the pool, swimming in the ocean, walking on the beach, dinner on the beach and fishing in the Everglades. Trish's Dad, Joe, booked us a fishing trip in the Everglades, starting from Everglades City. He and I had to wake up before dawn on Friday to meet our guide, who was waiting for us at the dock at 7:30. From there, we spent six hours boating, casting, reeling, catching some, losing some and having a fabulous time. Our catch at the end of the adventure was two Trout and one Flounder (caught by our guide, Tony). Nevertheless, we had a delightful time and savored the sun and the water in a beautiful location.

Saturday, we played some more golf and took an evening flight home. It was a fun spring break and I'm glad the kids got to enjoy it with both parents and multiple grandparents. More than anything, thanks to Trish's parents for their hospitality and fun-loving lifestyle. We had a blast!

For more pictures, check out Trish's Spring Break 2014 or my Spring Break in Florida.

Categories: FLOSS Project Planets

Bryan Pendleton: My trip to Korea: Bukchon and Namsangol

Planet Apache - Mon, 2014-04-14 21:02

One of the highlights of my visit to Seoul was seeing Bukchon and Namsangol.

On my first full day in Seoul, we went to Bukchon Hanok Village. This is a neighborhood in the Gwanghwamun area between the Gyeongbokgung Palace and the Changdeokgung Palace. Specifically, we visited the cluster of Hanoks that lies along Bukchon-ro 11-gil, which is reached by walking up Bukchon-ro from the Anguk subway station.

A Hanok is a traditional Korean wooden house.

The Hanoks in Bukchon are still actively occupied by residents, so what's really happening here is that you find yourself walking down quiet residential streets, admiring the beautiful houses on either side.

There is at least one museum and at least one cultural center in Bukchon. We briefly poked our heads inside the museum, which was nice, but we were in a hurry, sadly.

There are also a few Hanoks in Bukchon which are now guest houses, cafes, wine bars, and the like. They seemed tasteful enough to me, but I guess this increasing commercialism is controversial.

Bukchon is near some very nice areas, including Insa-Dong and Gwanghwamun Square, and generally is a really delightful place for an afternoon stroll. If you get up near the heights, the views are very good, too.

Later in my visit, near the end of it in fact, we stopped by another Hanok village, Namsangol.

Namsangol is different from Bukchon, as it is a collection of existing Hanoks from elsewhere in Seoul that were all moved to this one location and carefully restored within a park-like setting that is intended to be historically representative.

That is, Namsangol is a museum, not a place where people are actually living.

But, that being said, Namsangol is gorgeous! The park is very nicely landscaped, with a peaceful (man-made) creek running through it, and the buildings are all arranged elegantly, with nice sight lines to set them off and emphasize their architectural qualities.

There are lots of informative plaques which explain what you are seeing, and for the most part you can wander all about and really get a good look at the buildings.

If time is short, and you can only pick one of the two villages, you can't really go wrong. Bukchon is more authentic, but Namsangol is prettier and more educational.

Instead of worrying about which village to pick, I'd suggest visiting whichever one you happen to be near:

  • If you happen to be visiting one of the palaces of Gyeongbokgung or Changdeokgung, then combine that with a visit to Bukchon, which is right next to either palace.
  • If you happen to be visiting Mt Namsan, or N Seoul Tower, or the Myeong-dong or Chungmuro areas, then combine that with a visit to Namsangol, which is close by.

Either way, you'll much enjoy your visit.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: BH release 1.54.0-2

Planet Debian - Mon, 2014-04-14 20:47
Yesterday's release of RcppBDT 0.2.3 led to an odd build error. If one used, at the same time, a 32-bit OS, a compiler as recent as g++ 4.7 and the Boost 1.54.0 headers (directly or via the BH package), then the file lexical_cast.hpp barked and failed to compile for lack of a 128-bit integer (which is not a surprise on a 32-bit OS).

After looking at this for a bit, and looking at a related bug report, I came up with a simple fix (which I mentioned in an update to the RcppBDT 0.2.3 release post). Sleeping on it, and comparing to the Boost 1.55 file, showed that the hunch was right, and I have since made a new release 1.54.0-2 of the BH package which contains the fix.
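For the curious, the shape of the change is roughly the following sketch (not the literal patch; BOOST_HAS_INT128 and boost::int128_type are the config macro and type the rest of Boost keys on):

// Sketch only, not the literal BH patch: gate 128-bit support on the
// config macro the rest of Boost already uses, rather than probing
// the compiler directly (which misfired with g++ 4.7 on 32-bit).
#include <boost/config.hpp>   // defines BOOST_HAS_INT128 and boost::int128_type
#include <boost/cstdint.hpp>  // boost::intmax_t

#if defined(BOOST_HAS_INT128)
typedef boost::int128_type largest_signed_type;  // 128-bit integer available
#else
typedef boost::intmax_t largest_signed_type;     // fall back on 32-bit targets
#endif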

Changes in version 1.54.0-2 (2014-04-14)
  • Bug fix to lexical_cast.hpp, which now uses the same INT128 test as the rest of Boost, consistent with Boost 1.55.

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Colin Watson: Porting GHC: A Tale of Two Architectures

Planet Debian - Mon, 2014-04-14 20:45

We had some requests to get GHC (the Glasgow Haskell Compiler) up and running on two new Ubuntu architectures: arm64, added in 13.10, and ppc64el, added in 14.04. This has been something of a saga, and has involved rather more late-night hacking than is probably good for me.

Book the First: Recalled to a life of strange build systems

You might not know it from the sheer bulk of uploads I do sometimes, but I actually don't speak a word of Haskell and it's not very high up my list of things to learn. But I am a pretty experienced build engineer, and I enjoy porting things to new architectures: I'm firmly of the belief that breadth of architecture support is a good way to shake out certain categories of issues in code, that it's worth doing aggressively across an entire distribution, and that, even if you don't think you need something now, new requirements have a habit of coming along when you least expect them and you might as well be prepared in advance. Furthermore, it annoys me when we have excessive noise in our build failure and proposed-migration output and I often put bits and pieces of spare time into gardening miscellaneous problems there, and at one point there was a lot of Haskell stuff on the list and it got a bit annoying to have to keep sending patches rather than just fixing things myself, and ... well, I ended up as probably the only non-Haskell-programmer on the Debian Haskell team and found myself fixing problems there in my free time. Life is a bit weird sometimes.

Bootstrapping packages on a new architecture is a bit of a black art that only a fairly small number of relatively bitter and twisted people know very much about. Doing it in Ubuntu is specifically painful because we've always forbidden direct binary uploads: all binaries have to come from a build daemon. Compilers in particular often tend to be written in the language they compile, and it's not uncommon for them to build-depend on themselves: that is, you need a previous version of the compiler to build the compiler, stretching back to the dawn of time where somebody put things together with a big magnet or something. So how do you get started on a new architecture? Well, what we do in this case is we construct a binary somehow (usually involving cross-compilation) and insert it as a build-dependency for a proper build in Launchpad. The ability to do this is restricted to a small group of Canonical employees, partly because it's very easy to make mistakes and partly because things like the classic "Reflections on Trusting Trust" are in the backs of our minds somewhere. We have an iron rule for our own sanity that the injected build-dependencies must themselves have been built from the unmodified source package in Ubuntu, although there can be source modifications further back in the chain. Fortunately, we don't need to do this very often, but it does mean that as somebody who can do it I feel an obligation to try and unblock other people where I can.

As far as constructing those build-dependencies goes, sometimes we look for binaries built by other distributions (particularly Debian), and that's pretty straightforward. In this case, though, these two architectures are pretty new and the Debian ports are only just getting going, and as far as I can tell none of the other distributions with active arm64 or ppc64el ports (or trivial name variants) has got as far as porting GHC yet. Well, OK. This was somewhere around the Christmas holidays and I had some time. Muggins here cracks his knuckles and decides to have a go at bootstrapping it from scratch. It can't be that hard, right? Not to mention that it was a blocker for over 600 entries on that build failure list I mentioned, which is definitely enough to make me sit up and take notice; we'd even had the odd customer request for it.

Several attempts later and I was starting to doubt my sanity, not least for trying in the first place. We ship GHC 7.6, and upgrading to 7.8 is not a project I'd like to tackle until the much more experienced Haskell folks in Debian have switched to it in unstable. The porting documentation for 7.6 has bitrotted more or less beyond usability, and the corresponding documentation for 7.8 really isn't backportable to 7.6. I tried building 7.8 for ppc64el anyway, picking that on the basis that we had quicker hardware for it and didn't seem likely to be particularly more arduous than arm64 (ho ho), and I even got to the point of having a cross-built stage2 compiler (stage1, in the cross-building case, is a GHC binary that runs on your starting architecture and generates code for your target architecture) that I could copy over to a ppc64el box and try to use as the base for a fully-native build, but it segfaulted incomprehensibly just after spawning any child process. Compilers tend to do rather a lot, especially when they're built to use GCC to generate object code, so this was a pretty serious problem, and it resisted analysis. I poked at it for a while but didn't get anywhere, and I had other things to do so declared it a write-off and gave up.

Book the Second: The golden thread of progress

In March, another mailing list conversation prodded me into finding a blog entry by Karel Gardas on building GHC for arm64. This was enough to be worth another look, and indeed it turned out that (with some help from Karel in private mail) I was able to cross-build a compiler that actually worked and could be used to run a fully-native build that also worked. Of course this was 7.8, since as I mentioned cross-building 7.6 is unrealistically difficult unless you're considerably more of an expert on GHC's labyrinthine build system than I am. OK, no problem, right? Getting a GHC at all is the hard bit, and 7.8 must be at least as capable as 7.6, so it should be able to build 7.6 easily enough ...

Not so much. What I'd missed here was that compiler engineers generally only care very much about building the compiler with older versions of itself, and if the language in question has any kind of deprecation cycle then the compiler itself is likely to be behind on various things compared to more typical code since it has to be buildable with older versions. This means that the removal of some deprecated interfaces from 7.8 posed a problem, as did some changes in certain primops that had gained an associated compatibility layer in 7.8 but nobody had gone back to put the corresponding compatibility layer into 7.6. GHC supports running Haskell code through the C preprocessor, and there's a __GLASGOW_HASKELL__ definition with the compiler's version number, so this was just a slog tracking down changes in git and adding #ifdef-guarded code that coped with the newer compiler (remembering that stage1 will be built with 7.8 and stage2 with stage1, i.e. 7.6, from the same source tree). More inscrutably, GHC has its own packaging system called Cabal which is also used by the compiler build process to determine which subpackages to build and how to link them against each other, and some crucial subpackages weren't being built: it looked like it was stuck on picking versions from "stage0" (i.e. the initial compiler used as an input to the whole process) when it should have been building its own. Eventually I figured out that this was because GHC's use of its packaging system hadn't anticipated this case, and was selecting the higher version of the ghc package itself from stage0 rather than the version it was about to build for itself, and thus never actually tried to build most of the compiler. Editing ghc_stage1_DEPS in ghc/stage1/package-data.mk after its initial generation sorted this out. One late night building round and round in circles for a while until I had something stable, and a Debian source upload to add basic support for the architecture name (and other changes which were a bit over the top in retrospect: I didn't need to touch the embedded copy of libffi, as we build with the system one), and I was able to feed this all into Launchpad and watch the builders munch away very satisfyingly at the Haskell library stack for a while.
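The guards themselves are nothing exotic; their shape (my illustration, not a quote from the GHC tree, with the Haskell bodies elided into comments) is simply:

/* Illustration only. __GLASGOW_HASKELL__ is 708 when the compiling
 * compiler is GHC 7.8 and 706 when it is 7.6. */
#if __GLASGOW_HASKELL__ >= 708
  /* cope with 7.8: use the renamed interface, or a small shim */
#else
  /* the original 7.6 code path, as compiled by stage1 */
#endif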

This was all interesting, and finally all that work was actually paying off in terms of getting to watch a slew of several hundred build failures vanish from arm64 (the final count was something like 640, I think). The fly in the ointment was that ppc64el was still blocked, as the problem there wasn't building 7.6, it was getting a working 7.8. But now I really did have other much more urgent things to do, so I figured I just wouldn't get to this by release time and stuck it on the figurative shelf.

Book the Third: The track of a bug

Then, last Friday, I cleared out my urgent pile and thought I'd have another quick look. (I get a bit obsessive about things like this that smell of "interesting intellectual puzzle".) slyfox on the #ghc IRC channel gave me some general debugging advice and, particularly usefully, a reduced example program that I could use to debug just the process-spawning problem without having to wade through noise from running the rest of the compiler. I reproduced the same problem there, and then found that the program crashed earlier (in stg_ap_0_fast, part of the run-time system) if I compiled it with +RTS -Da -RTS. I nailed it down to a small enough region of assembly that I could see all of the assembly, the source code, and an intermediate representation or two from the compiler, and then started meditating on what makes ppc64el special.

You see, the vast majority of porting bugs come down to what I might call gross properties of the architecture. You have things like whether it's 32-bit or 64-bit, big-endian or little-endian, whether char is signed or unsigned, that sort of thing. There's a big table on the Debian wiki that handily summarises most of the important ones. Sometimes you have to deal with distribution-specific things like whether GL or GLES is used; often, especially for new variants of existing architectures, you have to cope with foolish configure scripts that think they can guess certain things from the architecture name and get it wrong (assuming that powerpc* means big-endian, for instance). We often have to update config.guess and config.sub, and on ppc64el we have the additional hassle of updating libtool macros too. But I've done a lot of this stuff and I'd accounted for everything I could think of. ppc64el is actually a lot like amd64 in terms of many of these porting-relevant properties, and not even that far off arm64 which I'd just successfully ported GHC to, so I couldn't be dealing with anything particularly obvious. There was some hand-written assembly which certainly could have been problematic, but I'd carefully checked that this wasn't being used by the "unregisterised" (no specialised machine dependencies, so relatively easy to port but not well-optimised) build I was using. A problem around spawning processes suggested a problem with SIGCHLD handling, but I ruled that out by slowing down the first child process that it spawned and using strace to confirm that SIGSEGV was the first signal received. What on earth was the problem?

From some painstaking gdb work, one thing I eventually noticed was that stg_ap_0_fast's local stack appeared to be being corrupted by a function call, specifically a call to the colourfully-named debugBelch. Now, when IBM's toolchain engineers were putting together ppc64el based on ppc64, they took the opportunity to fix a number of problems with their ABI: there's an OpenJDK bug with a handy list of references. One of the things I noticed there was that there were some stack allocation optimisations in the new ABI, which affected functions that don't call any vararg functions and don't call any functions that take enough parameters that some of them have to be passed on the stack rather than in registers. debugBelch takes varargs: hmm. Now, the calling code isn't quite in C as such, but in a related dialect called "Cmm", a variant of C-- (yes, minus), that GHC uses to help bridge the gap between the functional world and its code generation, and which is compiled down to C by GHC. When importing C functions into Cmm, GHC generates prototypes for them, but it doesn't do enough parsing to work out the true prototype; instead, they all just get something like extern StgFunPtr f(void);. In most architectures you can get away with this, because the arguments get passed in the usual calling convention anyway and it all works out, but on ppc64el this means that the caller doesn't generate enough stack space and then the callee tries to save its varargs onto the stack in an area that in fact belongs to the caller, and suddenly everything goes south. Things were starting to make sense.

Now, debugBelch is only used in optional debugging code; but runInteractiveProcess (the function associated with the initial round of failures) takes no fewer than twelve arguments, plenty to force some of them onto the stack. I poked around the GCC patch for this ABI change a bit and determined that it only optimised away the stack allocation if it had a full prototype for all the callees, so I guessed that changing those prototypes to extern StgFunPtr f(); might work: it's still technically wrong, not least because omitting the parameter list is an obsolescent feature in C11, but it's at least just omitting information about the parameter list rather than actively lying about it. I tweaked that and ran the cross-build from scratch again. Lo and behold, suddenly I had a working compiler, and I could go through the same build-7.6-using-7.8 procedure as with arm64, much more quickly this time now that I knew what I was doing. One upstream bug, one Debian upload, and several bootstrapping builds later, and GHC was up and running on another architecture in Launchpad. Success!

Epilogue

There's still more to do. I gather there may be a Google Summer of Code project in Linaro to write proper native code generation for GHC on arm64: this would make things a good deal faster, but also enable GHCi (the interpreter) and Template Haskell, and thus clear quite a few more build failures. Since there's already native code generation for ppc64 in GHC, getting it going for ppc64el would probably only be a couple of days' work at this point. But these are niceties by comparison, and I'm more than happy with what I got working for 14.04.

The upshot of all of this is that I may be the first non-Haskell-programmer to ever port GHC to two entirely new architectures. I'm not sure if I gain much from that personally aside from a lot of lost sleep and being considered extremely strange. It has, however, been by far the most challenging set of packages I've ported, and a fascinating trip through some odd corners of build systems and undefined behaviour that I don't normally need to touch.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-04-14

Planet Apache - Mon, 2014-04-14 18:58
  • Cloudflare demonstrate Heartbleed key extraction

    from nginx. ‘Based on the findings, we recommend everyone reissue + revoke their private keys.’

    (tags: security nginx heartbleed ssl tls exploits private-keys)

  • When two-factor authentication is not enough

    Fastmail.FM nearly had their domain stolen through an attack exploiting missing 2FA support in Gandi.

    An important lesson learned is that just because a provider has a checkbox labelled “2 factor authentication” in their feature list, the two factors may not be protecting everything – and they may not even realise that fact themselves. Security risks always come on the unexpected paths – the “off label” uses that you didn’t think about, and the subtle interaction of multiple features which are useful and correct in isolation.

    (tags: gandi 2fa fastmail authentication security mfa two-factor-authentication mail)

  • Of Money, Responsibility, and Pride

    Steve Marquess of the OpenSSL Foundation on their funding, and lack thereof:

    I stand in awe of their talent and dedication, that of Stephen Henson in particular. It takes nerves of steel to work for many years on hundreds of thousands of lines of very complex code, with every line of code you touch visible to the world, knowing that code is used by banks, firewalls, weapons systems, web sites, smart phones, industry, government, everywhere. Knowing that you’ll be ignored and unappreciated until something goes wrong. The combination of the personality to handle that kind of pressure with the relevant technical skills and experience to effectively work on such software is a rare commodity, and those who have it are likely to already be a valued, well-rewarded, and jealously guarded resource of some company or worthy cause. For those reasons OpenSSL will always be undermanned, but the present situation can and should be improved. There should be at least a half dozen full time OpenSSL team members, not just one, able to concentrate on the care and feeding of OpenSSL without having to hustle commercial work. If you’re a corporate or government decision maker in a position to do something about it, give it some thought. Please. I’m getting old and weary and I’d like to retire someday.

    (tags: funding open-source openssl heartbleed internet security money)

  • Huginn

    a system for building agents that perform automated tasks for you online. They can read the web, watch for events, and take actions on your behalf. Huginn’s Agents create and consume events, propagating them along a directed event flow graph. Think of it as Yahoo! Pipes plus IFTTT on your own server. You always know who has your data. You do. MIT-licensed open source, built on Rails.

    (tags: ifttt automation huginn ruby rails open-source agents)

Categories: FLOSS Project Planets

Andre Roberge: Reeborg knows multiple programming languages

Planet Python - Mon, 2014-04-14 18:31
I wish I were in Montreal to visit my daughter, eat some delicious Saint-Sauveur bagels for breakfast, and have a good La Banquise poutine and some Montreal Smoked Meat for lunch... and, of course, attend PyCon. Alas....

In the meantime, a quick update: Reeborg now knows Python, Javascript and CoffeeScript. The old tutorials are gone, as Reeborg's World has seen too many changes. I am now in the process of writing the following tutorials, all using Reeborg's World as the test environment:

  1. A quick introduction to Python (for people who know programming in another language)
  2. A quick introduction to Javascript (same as above)
  3. A quick introduction to CoffeeScript (same as above)
  4. An introduction to programming using Python, for absolute beginners
  5. An introduction to programming using Javascript, for absolute beginners
  6. An introduction to Object-Oriented Programming concepts using Python
  7. An introduction to Object-Oriented Programming concepts using Javascript
Note that I have two "versions" of Javascript: one that uses JSHint to enforce good programming practices (and runs the code with the "use strict"; option), and one that is normal, permissive Javascript.
If anyone knows of any other transpilers written in Javascript that can convert code client-side from language X into Javascript (like Brython does for Python, or CoffeeScript does naturally), I would be interested in adding them as additional options.
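To illustrate what "client-side" means here, a minimal sketch, assuming the browser build of the CoffeeScript compiler has been loaded on the page (the source string is invented for illustration):

// Assumes coffee-script.js is loaded, so window.CoffeeScript exists.
// Translate CoffeeScript source to Javascript entirely in the browser,
// then run the result; no server round-trip involved.
var coffeeSource = 'alert "Hello from CoffeeScript"';
var jsSource = CoffeeScript.compile(coffeeSource, {bare: true});
eval(jsSource); // run the generated Javascript in the page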
Categories: FLOSS Project Planets

Richard Hartmann: git-annex corner case: Changing commit messages retroactively and after syncing

Planet Debian - Mon, 2014-04-14 17:12

This is half a blog post and half a reminder for my future self.

So let's say you used the following commands:

git add foo
git annex add bar
git annex sync
# move to different location with different remotes available
git add quux
git annex add quuux
git annex sync

What I wanted to happen was to simply sync the already committed stuff to the other remotes. What happened instead was git annex sync's automagic commit feature (which you cannot disable, it seems) doing its job: committing what was added earlier and using "git-annex automatic sync" as the commit message.

This is not a problem in and of itself, but as this is my master annex, and as I have managed to maintain clean commit messages for the last few years, I felt the need to clean this mess up.

Changing old commit messages is easy:

git rebase --interactive HEAD~3

pick the r option for "reword" and amend the two commit messages. I did the same on my remote and on all the branches I could find with git branch -a. Problem is, git-annex pulls in changes from refs which are not shown as branches; run git annex sync and back come the old commits, along with a merge commit like an ugly cherry on top. Blegh.

I decided to leave my comfort zone and ended up with the following:

# always back up before poking refs
git clone --mirror repo backup
git reset --hard 1234
git show-ref | grep master
# for every ref returned, do:
git update-ref $ref 1234

Rinse and repeat for every remote, git annex sync, et voilà. And yes, I avoided using an actual loop on purpose; sometimes, doing things slowly and by hand just feels safer.
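For completeness, the loop I avoided would have looked roughly like this (an untested sketch, with 1234 again standing in for the target commit):

# untested sketch of the loop I deliberately did by hand instead
for ref in $(git show-ref | grep master | awk '{print $2}'); do
    git update-ref "$ref" 1234
done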

For good measure, I am running

git fsck && git annex fsck

on all my remotes now, but everything looks good up to now.

Categories: FLOSS Project Planets

Calling all Testers

Planet KDE - Mon, 2014-04-14 17:06

Candidate images for Kubuntu 14.04LTS are up and need you to test them. Go to the ISO tracking site to download and mark your testing status. Check out the milestoned bugs for issues we are aware of and do report any we are not.

Categories: FLOSS Project Planets

wdiff @ Savannah: wdiff 1.2.2 released

GNU Planet! - Mon, 2014-04-14 15:31

I'm happy to announce the release of wdiff 1.2.2.

Over a year after its predecessor, this release updates the build system. One may hope that this will help build wdiff on more recent architectures.

The translations for Vietnamese, Swedish, Estonian, Chinese (traditional), Brazilian Portuguese and Russian were updated as well. Thanks again to our translators!

There were no modifications to the core code of the application.

Categories: FLOSS Project Planets

Forum One: Big on Drupal in the Big Apple – NYC Camp 2014

Planet Drupal - Mon, 2014-04-14 15:27

We’re back from another successful Drupal NYC Camp!

A great event as always (thanks to Forest Mars and all the other volunteers and organizers that made the event possible), this year the event attracted more than 800 attendees and was held at a truly awesome venue: the United Nations.

Forum One’s presence this year was bigger than ever! Five of us attended, four of whom spoke at five different sessions covering a variety of topics:
• Keenan Holloway talked to a packed room with his tongue-twisting, alliteratively-titled session: Paraphrasing Panels, Panelizer and Panopoly;
• Chaz Chumley showed his extensive knowledge of the upcoming Drupal 8 theming system – look for his book on the same topic later this year;
• Michaela Hackner joined forces with Chaz Chumley to highlight some of our recent work with the American Red Cross on the Global Disaster Preparedness Center in a session called Designing for Disasters: An Iterative Progression Towards a More Prepared World;
• and William Hurley (that's me!) was honored to have the opportunity to talk about Building Interactive Web Applications with Drupal, as well as Developing Locally with Virtual Machines at the DevOps summit (this latter one, sadly, wasn't recorded due to some technical difficulties). I was blown away by the attendance at both of my sessions and was honored to be able to share some of our challenges and solutions at each.

These camps aren’t solely about sessions, of course. While not all of us were able to stay the whole weekend, Kalpana Goel stayed through Monday to work on some of the Drupal 8 sprints that were going on.

We love the opportunity to give back to the community in as many ways as possible, in code contributions to Drupal 8, contributed modules and also attending and speaking whenever possible. If you appreciate our expertise and would like us to speak at an event, drop us a line at marketing (at) forumone (dot) com and we’ll be happy to participate!


Categories: FLOSS Project Planets

Drupal Association News: Submit Your Design Proposals for DrupalCon Latin America!

Planet Drupal - Mon, 2014-04-14 14:19

Though DrupalCon Latin America - Bogotá, Colombia is just under a year away, we're already getting the ball rolling on planning and organization, and we need your help!

Categories: FLOSS Project Planets

ImageX Media: An inheritable install profile architecture for Drupal

Planet Drupal - Mon, 2014-04-14 13:55

Drupal core comes with a built-in structure called an installation profile. An install profile is a specific set of features and configurations that get built when the site is installed. Drupal has almost always had some variety of install profile, but with Drupal 7 they became a whole lot easier to create and understand.
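To make that concrete, here is a minimal sketch of what a Drupal 7 profile looks like on disk (a hypothetical profile named "example", not from any actual project), placed at profiles/example/example.info:

; Sketch of profiles/example/example.info - the profile name is hypothetical.
name = Example
description = An illustrative install profile.
core = 7.x
; Modules enabled automatically when a site is installed with this profile:
dependencies[] = block
dependencies[] = node

Alongside the .info file, Drupal 7 expects an example.profile file (it can be nearly empty), and any install-time configuration goes in hook_install() in example.install.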

Categories: FLOSS Project Planets

Monday Report #11 - Go time!

Planet KDE - Mon, 2014-04-14 13:11

In which we talk about the widget theme and community participation, ask for help, show off work by two handsome devs, and mention some future promo work to come!


KDE VDG group member, hard at work!
This time I will focus primarily on the widget theme. Now, as some of you may know, there is a "quiet area" where we keep some of the work that either needs to be secret, or has some issues, or needs to be tested or worked on quietly in a small group.

This isn't the end plan - the idea is that, in the end, everything except sensitive things (where perhaps the dev has asked us not to tell others about them yet) will be done in the open. Now is the time to try that out for reals!

...
Andrew has been hard at work on the widget theme - and unlike before, it doesn't demand that you know C++, just that you can handle QML. Now what does that crazy abbreviation mean? Well, QML is the "Qt Modeling Language", and it's the way we can, among other things, style Qt apps or widgets.
It is comparatively simple to use and learn. I say comparatively because I'm trying to learn it as we speak, and I would be lying if I said it was a dance on roses, BUT it's way simpler than any other method for styling AND it offers a ton of features and possibilities.
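To give you a taste, here is a toy QML snippet of my own (nothing from Andrew's actual theme):

import QtQuick 2.0

// A toy example, not from the widget theme:
// a rounded rectangle with a centered label.
Rectangle {
    width: 200
    height: 60
    color: "steelblue"
    radius: 4

    Text {
        anchors.centerIn: parent
        text: "Hello Plasma"
        color: "white"
    }
}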
Andrew is well under way with it, and the current iteration - as well as a sneak peek at the window theme - can be seen here:


With a sneak peek at window dialogs too.

But he needs the community's help refining it! In his post in the forum there is a short recap of the issues and some instructions, as well as an invitation to EVERYONE, no matter what skill level (or indeed whether the suggestions are done in words or mockups or QML), to participate. I can only suggest that you do! The more we are, the better it will become!

...
Aside from that, some rather fascinating possibilities have opened up. Alex Fiestas and Vishesh Handa, two of what I prefer to call "KDE's finest", have started working on a new video player.
Now many might think that this is a waste of time, as there already are video players out there. Then let me let you in on a little secret: inside this thick skull of mine is a dream of crafting applications made for desktop usage. Where we take a sincere look at what's needed, how it can best be presented, how it should work and flow - without feeling stuck in the hellish "where did my X/Y/Z feature go in X/Y/Z software?" problem. When you remodel something existing you run the risk of ruining it. It's a simple fact. It also ties you down design-wise, because we're nice people (trust me, designers are not only "nice", we are also "people"), and to storm in and tell someone who doesn't want to change their application that you're there to do just that isn't a great experience.

...
Also this week, hopefully, a small promo video will be cut together for a detail or two of Plasma Next. Nothing long or fancy, no wonderful great secrets revealed, BUT something to set the tone.

So this was perhaps not the longest Monday report - but it was hopefully pleasant to read and informative! Until next time! (Oh, and remember my promise of "A year and a day for KDE"? 20% of the time has now passed...)
(EDIT: I forgot to add the second image from Andrew's post, sorry! Fixed now.)

Categories: FLOSS Project Planets

Frederick Giasson: Installing OSF for Drupal using the OSF Installer (Screencast)

Planet Drupal - Mon, 2014-04-14 13:01

The Open Semantic Framework (OSF) for Drupal is a middleware layer that allows structured data (RDF) and associated vocabularies (ontologies) to “drive” tailored tools and data displays within Drupal. The basic OSF for Drupal modules provide two types of capabilities. First, there are a series of connector modules such as OSF Entities, OSF SearchAPI and OSF Field Storage to integrate an OSF instance into Drupal’s core APIs. Second, there is a series of module tools used to administer all of these capabilities.

By using OSF for Drupal, you may create, read, update and delete any kind of content in a OSF instance. You may also search, browse, import and export structured datasets from an OSF instance.

OSF for Drupal connects to the underlying structured (RDF) data via the separately available open-source OSF Web Services. OSF Web Services is a mostly RESTful Web services layer that allows standalone or multiple Drupal installations to share and collaborate structured data with one another via user access rights and privileges to registered datasets. Collaboration networks may be established directly to distributed OSF Web Services servers, also allowing non-Drupal installations to participate in the network.

OSF for Drupal can also act as a linked data platform. With Drupal’s other emerging RDF capabilities, content generated by Drupal can be ingested by the OSF Web Services and managed via the OSF for Drupal tools, including the publication and exposure on the Web of linked data with query and Web service endpoints.

OSF for Drupal has dependencies on OSF Web Services, which means an operational OSF for Drupal website only requires access to a fully operational OSF instance. For instance, you can check the Installing Core OSF (Open Semantic Framework) screencast to see how you can deploy your own OSF Web Services instance.

Installing OSF for Drupal using the OSF Installer

In this screencast, we will cover how to install OSF for Drupal using the OSF Installer command line tool.

Categories: FLOSS Project Planets

Daniel Kahn Gillmor: OTR key replacement (heartbleed)

Planet Debian - Mon, 2014-04-14 12:45
I'm replacing my OTR key for XMPP because of heartbleed (see below).

If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP dkg@jabber.org
===========================================
Date: 2014-04-14

My main XMPP account is dkg@jabber.org. I prefer OTR [0] conversations
when using XMPP for private discussions.

I was using irssi to connect to XMPP servers, and irssi relies on
OpenSSL for the TLS connections. I was using it with versions of
OpenSSL that were vulnerable to the "Heartbleed" attack [1]. It's
possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account.

The new, correct OTR fingerprint for the XMPP account at
dkg@jabber.org is:

  F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints. Secure
communication is important not only to protect yourself, but also to
protect your friends, their friends and so on.

Happy Hacking,

  --dkg (Daniel Kahn Gillmor)

Notes:

[0] OTR: https://otr.cypherpunks.ca/
[1] Heartbleed: http://heartbleed.com/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----
Categories: FLOSS Project Planets

AGLOBALWAY: Mobile First?

Planet Drupal - Mon, 2014-04-14 11:57
Much has been said in the years since the publication of Luke Wroblewski's Mobile First in 2011, part of the A Book Apart series, marketed as "brief books for people who make websites." The series offers valuable tools for designing for and working in the web business, and Luke's contribution is no small one.

And while a few years have come and gone, has anything really changed? I don't think so. But perhaps some clarification of terms is in order.

One of the hallmarks of "mobile first" is asking tough questions about what we actually put on the page. For example, if we determine that something is not necessary for the mobile experience of a website, it can be worth calling into question whether it is valuable for the "full desktop experience" as well.

Given the restrictions of the viewport on mobile devices, it makes perfect sense to limit the things that can take away from a quality experience of your website. Ideally, a user's focus would be on the content, which (hopefully) is the reason to be on your site in the first place. So let's get rid of everything else!

Behold the pendulum swinging, babies thrown out with the bathwater.

While nobody would deny the increase in the use of mobile devices, desktop browsers are still king of the hill when it comes to how people access the internet. Given the numbers (a quick Google search will give you a general idea), it is understandable that people get scared that by eliminating things from the mobile experience of a site, we may be getting rid of too much. And indeed, there have no doubt been many cases of this happening.

Mobile first, not mobile only.

What needs bearing in mind, however, is the idea of designing for mobile first. I'm sure Mr. Wroblewski reflected on the terms carefully, deciding not to title his book Designing for Mobile, as though it were a separate thing - indeed, if it is separate, we now know it ought not be. Thankfully, he had the foresight to craft the right message, even if it fell on a few deaf ears.

More and more, mobile users are demanding that a complete experience be possible for them as well. This was certainly to be expected. Should we really assume that mobile users are necessarily "on the go" and therefore should not expect what they might experience on a desktop? We all know what they say about making assumptions...

There are many, many challenges when it comes to building responsive websites, and I believe that designing for the mobile experience is chief among them. No small part of that is understanding the technical implications of such designs - this is certainly a justification for placing the mobile experience "first" in the design stage. And yet, rather than being limited by screen size in designing for mobile, we actually have an opportunity to take advantage of the power of the device. Perhaps the mobile experience could even be a superior one because of its capabilities.

So should we still be designing for mobile first? Yes - so long as it remains part of a holistic overall design for the user experience. I'm sure Luke would agree.

Tags: Mobile, drupal planet
Categories: FLOSS Project Planets

Mike Driscoll: Miss PyCon 2014? Watch the Replay!

Planet Python - Mon, 2014-04-14 11:45

If you’re like me, you missed PyCon North America 2014 this year. It happened last weekend. While the main conference days are over, the code sprints are still running. Anyway, for those of you who missed PyCon, they have released a bunch of videos on pyvideo! Every year, they seem to get the videos out faster than the last. I think that’s pretty awesome myself. I’m looking forward to watching a few of these so I can see what I missed.

Categories: FLOSS Project Planets

GNU MediaGoblin: Almost there! Campaign ends this Friday, and we’re close!

GNU Planet! - Mon, 2014-04-14 11:45

Whew! We're in the midst of the last week of the MediaGoblin campaign! As you may already know, we already beat our first milestone. This means we've unlocked the most core and exciting things: federation and 1.0 support. But let's face it, some of the most exciting things happen in the second milestone.

So let’s face it… the really exciting stuff happens once we hit 60k. But how do we get there by the end of the week? That doesn’t seem like much time!

Well… good news, everyone! We're a lot closer than we look! You may remember that we have a 10k matching grant which kicks in when we hit 46k… and we're already well over halfway to meeting the matching goal. That means that as soon as we hit 46k, this magic happens: the 10k match lands on top, taking us straight to 56k.

And once we’re at 56k, that’s only 4k away from our goal. So close! So if you haven’t donated yet, now’s a great time to do so!

One more thing. We realize that if we hit 60k right at the end on Friday, that doesn’t give people much time to take advantage of the “premium hosting” reward. Because of that, we’ll be opening up the premium hosting option (but only the premium hosting option) after the campaign ends… more details will be announced later. If we hit 60k by Friday, that is. :)

We can do it, right? Let’s do this!

Categories: FLOSS Project Planets

Fred Parke | The Web Developer: Creating content types and fields using a custom module in Drupal 7

Planet Drupal - Mon, 2014-04-14 11:44

I was writing a custom module recently which used a custom content type or two. I wanted to make the module as reusable as possible, but I also wanted to avoid including a feature inside the module to add these content types.

Categories: FLOSS Project Planets