Planet Debian


Francesca Ciceri: A crafty roundup: projects from the last months

Tue, 2014-03-25 10:25

A quick roundup of things I made during the last months, following tutorials here and there.

1: a soft bunny more or less based on this one

2: skelly man! You can find pattern and instructions on Chez Beeper Bebe's blog

3: bibs: the embroidered one is based on this pattern by Charlotte Lyons on SouleMama's blog; the other re-uses the same pattern but adds a bird appliqué instead.

1, 4: my first try with the beautiful Weekender Bag ended with something else entirely: apparently, where Euclidean geometry applies, cutting a piece of fabric 10 cm smaller will result in a bag 10 cm smaller!
Go figure! Fhtang!

I'm happy anyway with the final result, but it resembles a tote bag more than a weekender. I then also made a successful Weekender, but haven't taken a pic yet. If you try that pattern, you may find this blogpost helpful.

2: This is my try at copycatting this model without a pattern, and it didn't end very well.
However, a couple lessons learned:

  • you don't improvise armholes and sleeves: you have to carefully plan them. I know this now, but at the time I was just a kid with a crazy dream ;)
  • a-line dresses with straight necklines are definitely not meant for my body type. no matter how cute they look on someone else.
  • I really should hack my mannequin so that it matches my actual size. Otherwise I will continue to sew clothes that fit it, and not me.

3: Blouse with knit-stretchy stuff! (You love me when I speak technical, don't you?)
I did that without a pattern, mostly using a similar top I have as an inspiration/guide.

Categories: FLOSS Project Planets

Richard Hartmann: Train them to submit

Tue, 2014-03-25 09:11

And today from our ever-growing collection of what the fuck is wrong with you people?!...

This is wrong on so many levels, I can't even begin to describe it. Sadly, it seems that this will get funded. And if it does not, technology will only become cheaper over time...


Petter Reinholdtsen: Public Trusted Timestamping services for everyone

Tue, 2014-03-25 06:50

Did you ever need to store logs or other files in a way that would allow them to be used as evidence in court, and needed a way to demonstrate without reasonable doubt that the files had not been changed since they were created? Or, did you ever need to document that a given document was received at some point in time, like some archived document or the answer to an exam, and not changed after it was received? The problem in these settings is to remove the need to trust yourself and your computers, while still being able to prove that a file is the same as it was at some given time in the past.

A solution to these problems is to have a trusted third party "stamp" the document and verify that at some given time the document looked a given way. Such notary services have been around for thousands of years, and the digital equivalent is called a trusted timestamping service. The Internet Engineering Task Force standardised how such a service can work a few years ago as RFC 3161. The mechanism is simple: create a hash of the file in question, send it to a trusted third party which adds a timestamp to the hash, signs the result with its private key, and sends back the signed hash + timestamp. Email, FTP and HTTP can all be used to request such a signature, depending on what is provided by the service used. Anyone with the document and the signature can then verify that the document matches the signature by creating their own hash and checking the signature using the trusted third party's public key. There are several commercial services providing such timestamping. A quick search for "rfc 3161 service" pointed me to at least DigiStamp, Quo Vadis, Global Sign and Global Trust Finder. The system works as long as the private key of the trusted third party is not compromised.
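The first step of that exchange, producing the hash locally, can be sketched with stock Debian tools (the file name and contents here are illustrative); only this digest is sent to the timestamping service, never the document itself:

```shell
# Create a document and compute the SHA-256 digest that would be
# submitted to the timestamping authority.
printf 'minutes of the 2014-03-25 meeting\n' > document.txt
sha256sum document.txt | cut -d' ' -f1
```

The 64-hex-digit output is what ends up wrapped in the RFC 3161 query.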

But as far as I can tell, there are very few public trusted timestamp services available for everyone. I've been looking for one for a while now, and yesterday I found one over at Deutsches Forschungsnetz, mentioned in a blog post by David Müller. I then found a good recipe on how to use the service over at the University of Greifswald.

The OpenSSL library contains both a server and tools to use and set up your own signing service. See the ts(1SSL) and tsget(1SSL) manual pages for more details. The following shell script demonstrates how to obtain a signed timestamp for any file on the disk in a Debian environment:

#!/bin/sh
set -e
url=""
caurl=""
reqfile=$(mktemp -t tmp.XXXXXXXXXX.tsq)
resfile=$(mktemp -t tmp.XXXXXXXXXX.tsr)
cafile=chain.txt
if [ ! -f $cafile ] ; then
    wget -O $cafile "$caurl"
fi
openssl ts -query -data "$1" -cert | tee "$reqfile" \
    | /usr/lib/ssl/misc/tsget -h "$url" -o "$resfile"
openssl ts -reply -in "$resfile" -text 1>&2
openssl ts -verify -data "$1" -in "$resfile" -CAfile "$cafile" 1>&2
base64 < "$resfile"
rm "$reqfile" "$resfile"

The argument to the script is the file to timestamp, and the output is a base64 encoded version of the signature to STDOUT and details about the signature to STDERR. Note that due to a bug in the tsget script, you might need to modify the included script and remove the last line. Or just write your own HTTP uploader using curl. :) Now you too can prove and verify that files have not been changed.
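The curl route mentioned above could look roughly like the sketch below. This is an untested stand-in for tsget, written to a file rather than executed since it needs a live endpoint: `$url` must point at a real RFC 3161 HTTP service, and the file names are illustrative.

```shell
# Hypothetical curl-based replacement for tsget: POST the DER-encoded
# query with the RFC 3161 media type and capture the DER reply.
cat > ts-curl.sh <<'EOF'
#!/bin/sh
set -e
file="$1"
url="$2"
openssl ts -query -data "$file" -cert -out request.tsq
curl --silent --header "Content-Type: application/timestamp-query" \
     --data-binary @request.tsq "$url" > response.tsr
openssl ts -reply -in response.tsr -text
EOF
sh -n ts-curl.sh && echo "syntax OK"
```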

But the Internet needs more public trusted timestamp services. Perhaps something for Uninett or my workplace, the University of Oslo, to set up?


Russell Coker: Nexus5 Armourdillo Hybrid Case

Tue, 2014-03-25 00:42

I’ve just been given an Armourdillo Hybrid case for the Nexus 5 [1] to review. The above pictures show the back of the case, the front of the case, the stand, and the front of the case with the screen blank. When I first photographed the case the camera focused on a reflection of the window, I include that picture for amusement and to demonstrate how reflective the phone screen is.

This case is very hard: the green plastic is the soft inner layer, which is still harder than the plastic in a typical “gel case”. The black part is polycarbonate, which is very hard and also a little slippery. The case is designed with lots of bumps for grip (a little like the sole of a running shoe) so it's not likely to slip out of your hand. But the polycarbonate slides easily on plastic surfaces such as the dash of a car. It's fortunate that modern cars have lots of “cup holders” that can be used for holding a phone.

I haven’t dropped the phone since getting the new case, but I expect that the combination of a hard outer shell and a slightly softer inner shell (to cushion the impact) will protect it well. All the edges of the case extend above the screen so dropping the phone face down on a hard flat surface shouldn’t cause any damage.

The black part has a stand for propping the phone on its side to watch a movie. The stand is very solid and is in the ideal position for use on soft surfaces such as a doona or pillow for watching TV in bed.


This case is mostly designed to protect the phone, and the bumps that are used for grip detract from the appearance IMHO. I think that the Ringke Fusion case for my Nexus 4 [2] looks much better; it's a trade-off between appearance and functionality.

My main criteria for this case were good protection (better than a gel case) and small size (not one of the heavy waterproof cases). It was a bonus to get a green case for the Enlightened team in Ingress. NB Armourdillo also offers a blue case for the Resistance team in Ingress as well as other colors.

MobileZap also have a number of other cases for the Nexus 5 [3].


Andrew Pollock: [life] Day 56, Kindergarten, rain, taxes

Mon, 2014-03-24 23:42

There was quite the torrential downpour overnight. It woke me up, and it woke Zoe up too, and at about 1:45am she ended up in bed with me. I think she managed to invent a new baby sleep position, where she was on my pillow, perpendicular to me along the bed head, and had somehow ejected the pillow that was on her side of the bed.

We got up, with a slow start, and the weather was still looking a bit dicey, so Zoe wanted to go to Kindergarten by car. That actually meant we were the first ones there, because I'd been working to a timetable of leaving home by bike.

One of the chickens was starting to hatch (and subsequently hatched around noon) so that was exciting. Funnily enough, Zoe had spent all morning telling me how she didn't want to go to Kindergarten, but by the time we got there, she didn't seem to mind being there all that much.

After I got home from Kindergarten, I biked down to Bulimba to go to the bank to finalise opening my business bank accounts. I've since discovered that one can't do much without an ABN; I can't even get a cheque book, so I've sicked my accountant on that one for me.

I got stuck into my US taxes some more today, and made a very satisfactory amount of progress on them. I think I should be able to finish them off in the next session I get to work on them.

I felt like getting out of the house after that. It was looking quite like rain again, making picking up Zoe by bike out, so I ran a few errands in the car before getting to Kindergarten quite early.

Zoe was fast asleep, but I let her have a long, slow wake up, and that made our departure much easier. She got to have a little hold of one of the baby chicks before we left. Today I learned that baby chicks smell absolutely divine.

We got home, and Zoe did some self-directed craft for a bit, and then wanted to play hide and seek, so we did that and finally got around to looking at all of the Woolworths DreamWorks Heroes cards she's been collecting. I was disappointed to discover there's only a single card per pack, so that's going to mean I have to spend at least $840 on groceries, excluding duplicates, before we get all of them. I'm glad the checkout operators aren't sticking to the rules and are handing them out a little more generously than that.

After that, we did a bit more rough and tumble play, and then it was time to start making dinner, so Zoe watched a DVD.

We got dinner out of the way relatively early, so I practiced plaiting her hair (I've surprised myself with the half-decent job I can do) and then did bath time and bed time.

Bed time was a little protracted (she didn't like her bedroom and wanted to sleep in my bed) but otherwise seems to have gone smoothly.


Sylvain Le Gall: Release of OASIS 0.4.3

Mon, 2014-03-24 20:21

I am happy to announce the release of OASIS v0.4.3.

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal, which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Here is a quick summary of the important changes:

  • Added -remove switch to the setup-clean subcommand designed to remove unaltered generated files completely, rather than simply emptying their OASIS section.
  • Translate path of ocamlfind on Windows to be bash/win32 friendly.
  • Description is now parsed in a more structured text (para/verbatim).


  • stdfiles_markdown (alpha): set the default extension of StdFiles (AUTHORS, INSTALL, README) to be '.md'. Use markdown syntax for standard files. Use comments that hide the OASIS section and digest. This feature should help direct publishing on GitHub.
  • disable_oasis_section (alpha): it allows DisableOASISSection to be specified in the package with a list of expandable filenames given. Any generated file specified in this list doesn't get an OASIS section digest or comment headers and footers and is therefore regenerated each time `oasis setup` is run (and any changes made are lost). This feature is mainly intended for use with StdFiles so that, for example, INSTALL.txt and AUTHORS.txt (which often won't be modified) can have the extra comment lines removed.
  • compiled_setup_ml (alpha): allow precompiling setup.ml to speed it up.

This new version closes 4 bugs, mostly related to parsing of _oasis. It also includes a lot of refactoring to improve the overall quality of OASIS code base.

The big project for the next release will be to set up a Windows host for regular builds and tests on this platform. I plan to use WODI for this setup.

I would like to thank again the contributors for this release: David Allsopp, Martin Keegan and Jacques-Pascal Deplaix. Their help is greatly appreciated.


Andrew Shadura: Tired of autotools? Try this: mk-configure

Mon, 2014-03-24 17:30

mk-configure is a project which tries to be autotools done right. Instead of supporting an exceedingly large number of platforms, modern and ancient, at the cost of generating unreadable multi-kilobyte shell scripts, mk-configure aims at better support for fewer platforms, namely those really in use today. One of the main differences of this project is that it avoids code generation as much as possible. The author of mk-configure, Aleksey Cheusov, a NetBSD hacker from Belarus, uses NetBSD make (bmake) and shell script snippets instead of monstrous libraries written in m4 interleaved with shell scripts. As a result, there's no need for a separate package configuration step or for bootstrapping a configure script; everything is done by just running bmake, or a convenience wrapper for it, mkcmake, which prepends the proper library path to bmake's arguments so you don't have to specify it yourself.

Today, mk-configure is already powerful enough to replace autotools for most projects, and what is missing from it can easily be worked around by hacking the Makefile, which would otherwise be quite simple.

Try it for your project, you may really like it. I already did. And report bugs.


Steinar H. Gunderson: Detecting Windows XP in dhcpd.conf

Mon, 2014-03-24 17:23

With Windows XP going out of security support on April 8th, it can be useful to detect XP clients on your local networks so you can shut them down or upgrade them to something (more) secure. A natural point would be the DHCP server (e.g. so you can quarantine the machines to a separate network), but it turns out that it isn't immediately obvious how to detect them.

DHCP has a “vendor class identifier” field in the request packet that's meant for exactly things like this, but unfortunately, Microsoft has used “MSFT 5.0” in every version of Windows from 2000 onwards (thanks, Microsoft), so that's not enough to distinguish XP from e.g. Windows 7. However, from Vista onwards, the client started to ask for a new DHCP option (a standards-conforming option for static routes that replaced an earlier Microsoft-specific one), namely number 121, and that means we can write an ISC DHCPD matching statement:

class "WinXP" {
    match if (option vendor-class-identifier="MSFT 5.0")
        and not (concat(",", binary-to-ascii(10, 8, ",", option dhcp-parameter-request-list), ",") ~~ ",121,");
    log(info, concat("Found Windows XP client: ", binary-to-ascii(16, 8, ":", hardware)));
}

Once you have the class, you can do whatever; look at the DHCP logs manually, use the execute functionality to trigger an alarm, refuse to hand out addresses, and so on. Good luck on your road to XP freedom :-)
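For the "look at the DHCP logs manually" route, extracting the offending MAC addresses is a one-liner. Here it runs against a toy log excerpt (the real messages land wherever your syslog puts dhcpd output, and the MAC addresses are invented):

```shell
# Toy excerpt standing in for the real dhcpd syslog output:
cat > dhcpd.log <<'EOF'
Mar 24 17:23:01 gw dhcpd: Found Windows XP client: 0:11:22:33:44:55
Mar 24 17:24:09 gw dhcpd: DHCPACK on 10.0.0.7 to 0:11:22:33:44:66
Mar 24 18:02:44 gw dhcpd: Found Windows XP client: 0:11:22:33:44:55
EOF
# List each detected XP client once:
grep 'Found Windows XP client' dhcpd.log | awk '{print $NF}' | sort -u
```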


Michael Prokop: kamailio-deb-jenkins: Open Source Jenkins setup for Debian Packaging

Mon, 2014-03-24 17:13

Kamailio is an Open Source SIP Server. Since beginning of March 2014 a new setup for Kamailio‘s Debian packages is available. Development of this setup is sponsored by Sipwise and I am responsible for its infrastructure part (Jenkins, EC2, jenkins-debian-glue).

The setup includes support for building Debian packages for Debian 5 (lenny), 6 (squeeze), 7 (wheezy) and 8 (jessie) as well as Ubuntu 10.04 (lucid) and 12.04 (precise), all of them architectures amd64 and i386.

My work is fully open sourced. Deployment instructions, scripts and configuration are available at kamailio-deb-jenkins, so if you’re interested in setting up your own infrastructure for Continuous Integration with Debian/Ubuntu packages that’s a very decent starting point.

NOTE: I’ll be giving a talk about Continuous Integration with Debian/Ubuntu packages at Linuxdays Graz/Austria on 5th of April. Besides kamailio-deb-jenkins I’ll also cover best practices, Debian packaging, EC2 autoscaling,…


Michael Prokop: Building Debian+Ubuntu packages on EC2

Mon, 2014-03-24 17:13

In a project I recently worked on we wanted to provide a jenkins-debian-glue based setup on Amazon's EC2 for building Debian and Ubuntu packages. The idea is to keep a not-so-strongly powered Jenkins master up and running 24×7, while stronger machines serving as Jenkins slaves are launched only as needed. The project setup in question is fully open sourced (more on that in a separate blog post); here I am documenting the EC2 setup in use.

Jenkins master vs. slave

Debian source packages are generated on Jenkins master where a checkout of the Git repository resides. The Jenkins slaves do the actual workload by building the binary packages and executing piuparts (.deb package installation, upgrading, and removal testing tool) on the resulting binary packages. The Debian packages (source+binaries) are then provided back to Jenkins master and put into a reprepro powered Debian repository for public usage.


The starting point was one of the official Debian AMIs (x86_64, paravirtual on EBS). We automatically deployed jenkins-debian-glue on the system which is used as Jenkins master (we chose a m1.small instance for our needs).

We started another instance, slightly adjusted it to already include jenkins-debian-glue related stuff out-of-the-box (more details in section “Reduce build time” below) and created an AMI out of it. This new AMI ID can be configured for usage inside Jenkins by using the Amazon EC2 Plugin (see screenshot below).

IAM policy

Before configuring EC2 in Jenkins, though, start by adding a new user (or group) in AWS's IAM (Identity and Access Management) with a custom policy. This ensures that your EC2 user in Jenkins doesn't have more permissions than really needed. The following policy should give you a starting point (we restrict the account to allow actions only in the EC2 region eu-west-1, YMMV):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "ec2:CreateTags",
        "ec2:DescribeInstances",
        "ec2:DescribeImages",
        "ec2:DescribeKeyPairs",
        "ec2:GetConsoleOutput",
        "ec2:RunInstances",
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Effect": "Allow",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "ec2:Region": "eu-west-1"
        }
      }
    }
  ]
}

Jenkins configuration

Configure EC2 access with “Access Key ID”, “Secret Access Key”, “Region” and “EC2 Key Pair’s Private Key” (for SSH login) inside Jenkins in the Cloud section on $YOUR_JENKINS_SERVER/configure. Finally add an AMI in the AMIs Amazon EC2 configuration section (adjust security-group as needed, SSH access is enough):

As you can see the configuration also includes a launch script. This script ensures that slaves are set up as needed (provide all the packages and scripts that are required for building) and always get the latest configuration and scripts before starting to serve as Jenkins slave.

Now your setup should be ready for launching Jenkins slaves as needed:

NOTE: you can use the “Instance Cap” setting in the advanced Amazon EC2 Jenkins configuration section to place an upper limit on the number of EC2 instances that Jenkins may launch. This can be useful for avoiding surprises in your AWS invoices. Notice though that the cap is calculated across all your running EC2 instances, so if you have further machines running under your account you might want to e.g. further restrict your IAM policy.

Reduce build time

Using a plain Debian AMI and automatically installing jenkins-debian-glue and further jenkins-debian-glue-buildenv* packages on each slave startup would work, but it takes time. That's why we created our own AMI, which is nothing else than an official Debian AMI with the script (referred to in the screenshot above) already executed. All the necessary packages are pre-installed and all the cowbuilder environments are already present. From time to time we start the instance again to apply (security) updates and execute the bootstrap script with its --update option to also bring all the cowbuilder systems up to date. Creating a new AMI is a no-brainer, and we can then use the updated system for our Jenkins slaves; if something should break for whatever reason we can still fall back to an older known-to-be-good AMI.

Final words

How to set up your Jenkins jobs for optimal master/slave usage, multi-distribution support (Debian/Ubuntu) and further details about this setup are part of another blog post.

Thanks to Andreas Granig, Victor Seva and Bernhard Miklautz for reading drafts of this.


Steve Kemp: New GPG-key

Mon, 2014-03-24 14:26

I've now generated a new GPG-key for myself:

$ gpg --fingerprint 229A4066
pub   4096R/0C626242 2014-03-24
      Key fingerprint = D516 C42B 1D0E 3F85 4CAB  9723 1909 D408 0C62 6242
uid                  Steve Kemp (Edinburgh, Scotland) <>
sub   4096R/229A4066 2014-03-24

The key can be found online via: 0x1909D4080C626242

This has been signed with my old key:

pub   1024D/CD4C0D9D 2002-05-29
      Key fingerprint = DB1F F3FB 1D08 FC01 ED22  2243 C0CF C6B3 CD4C 0D9D
uid                  Steve Kemp <>
sub   2048g/AC995563 2002-05-29

If there is anybody who has signed my old key who wishes to sign my new one then please feel free to get in touch to arrange it.


Elena 'valhalla' Grandi: Moving away from google services, step N out of M

Mon, 2014-03-24 10:11
For years, I've used a gmail account to subscribe to public mailing lists because it was convenient: it had excellent spam filtering, and since the lists were already available on google-indexed public archives I felt that there was no real privacy issue.
I already have a local copy of every message in the account, in case I lose access to it for some random reason, and my real contacts already know and use my other email address(es), so I'm not really worried about continuing to use gmail for this kind of traffic.

When I started contributing to Debian I also used that email address, because I was already subscribed to a few mailing lists under it, and it was going to end up being used on the BTS and other spam-attracting public places like that, but I was less happy with the choice.

Lately, however, gmail's spam filters have started to behave oddly, with lots of false positives including a number of messages from addresses, and this basically removes the main reason why I was using gmail in the first place, so I'm seriously considering dropping it for publicly indexed communications as well.

My first step, right before leaving for Barcelona was to create a couple other email addresses, one for mailing lists and the other for debian-related work, and add them to my gpg/OpenPGP key, so that any new signature on them from the conf would also include the new UIDs.

If you've received a badly signed message (thanks heirloom mailix for mangling stuff :) ) followed by a hopefully correctly signed one where I was asking you to sign new UIDs, and you were wondering, this is the reason.

Now that the new UIDs have been signed by a few DDs I'm going to start using the new address in Debian: I believe I have changed it everywhere I needed to, except for the maintainer/uploader field for my packages, where it will change on their next uploads.

As for mailing lists, I still don't know how to proceed: I could do it in a systematic way, but I'm lazy and will probably just change a few main subscriptions, use the new address for new subscriptions and then migrate the rest a few at a time, unless some other event happens and I have to hurriedly migrate everything or risk losing important messages. Even in the latter case, I don't need any information that is only stored on google's servers, so I won't panic (well, maybe just a bit :)

Now, where did I put that documentation about xmpp servers?

Michal Čihař: GSoC 2014 applications for phpMyAdmin

Mon, 2014-03-24 06:00

As usual, I look at the application stats for phpMyAdmin just after student application period of Google Summer of Code is over.

First of all, we got more proposals than in past years; this time there are way more students from India, and discussions on the mentors' lists show this is quite similar for other projects. Maybe it's just different timing which works better for students there, but there might be other reasons as well. There is also quite a low number of spam or bogus proposals.

Same as in past years, people leave the submission to the last moment, even though we encourage them to submit early so that they can adjust the application based on our feedback.

Anyway, we're now working on the evaluation and will finalize it in the upcoming days. Of course you will know the results from Google on April 21st.



Steve Kemp: So I failed at writing some clustered code in Perl

Mon, 2014-03-24 04:41

Until this time next month I'll be posting code-based discussions only.

Recently I've been wanting to explore creating clustered services, because clusters are definitely things I use professionally.

My initial attempt was to write an auto-clustering version of memcached, because that's a useful tool. Writing the core of the service took an hour or so:

  • Simple implementation.
  • Give it the obvious methods get, set, delete.
  • Make it more interesting by creating a read-only append-log.
  • The logfile will be replayed for clustering.

At the point I was done the following code worked:

use KeyVal;

# Create an object, and set some values
my $obj = KeyVal->new( logfile => "/tmp/foo.log" );
$obj->incr( "steve" );
$obj->incr( "steve" );
print $obj->get( "steve" );   # prints 2.

# Now replay the append-only log
my $replay = KeyVal->new( logfile => "/tmp/foo.log" );
$replay->replay();
print $replay->get( "steve" );   # prints 2.

In the first case we used the primitives to increment a value twice, and then fetch it. In the second case we used the logfile the first object created to replay all prior transactions, then output the value.

Neat. The next step was to make it work over a network. Trivial.

Finally I wanted to autodetect peers, and deploy replication. Each host would send out regular messages along the lines of "Do you have updates made since $time?". Any that did would replay the logfile from the given unixtime offset.

However here I ran into problems. Peer discovery was supposed to be basic, and I figured I'd write something that did leader election by magic. Unfortunately Perl's threading code is .. unpleasant:

  • I wanted to store all known-peers in a singleton.
  • Then I wanted to create threads that would announce and receive updates.

This failed. Majorly. Because you cannot launch the implementation of a class-method as a thread. Equally, you cannot share a "complex" variable across threads.

I wrote some demo code which works without packages and a shared singleton.

The Ruby version, by contrast, is much more OO and neater. Meh.

I've now shelved the project.

My next, big, task was to make the network service utterly memcached compatible. That would have been fiddly, but not impossible. Right now I just use a simple line-based network protocol.

I suspect I could have got what I wanted using EventMachine, or similar, but that's a path I've not yet explored, and I'm happy enough with that decision.


Andrew Pollock: [life] Day 55, Kindergarten, run, Debian

Sun, 2014-03-23 23:42

I got up this morning with the intent of knocking out a 10km run. I managed to last 8km today, so it's an improvement, but I don't know what's up with my running fitness at the moment.

After that, I pretty much did Debian stuff all day. I managed an upload of dstat and found a potential security bug in another of my packages when I was trying to update it, so I raised that issue with the package's upstream.

I also mostly sorted out opening a bank account for my company. I just have to visit the branch in person tomorrow.

Sarah had indicated to me that Zoe had slept poorly last night, on top of a big weekend, and that I should probably pick her up in the car, so I drove to Kindergarten expecting to find her fast asleep and not take too kindly to being woken. Instead, she was wide awake, not having napped at all.

The highlight of her day was they had some baby chickens at Kindergarten. They had four day-old hatchlings, with more eggs in an incubator.

Megan wanted Zoe to have a coffee with her, so we stopped at the local coffee shop, with her Dad and little sister, for a babyccino on the way home.

I had a pretty big weekend away, and didn't feel up to doing the grocery shopping yesterday afternoon when I got home, so we went to the supermarket on the way home to do the weekly grocery shop. After we got home, I got stuck into making dinner while Zoe watched TV.

My girlfriend came around after work and joined us for dinner, and the three of us had a nice dinner together.

Zoe started showing signs of being particularly tired during dinner, and was a bit uncooperative around bath time, but we got through it all, and I managed to get her down to bed a little bit early, and she fell asleep without too much trouble. It's a fairly warm night. Hopefully she'll sleep well.


Matthew Garrett: What free software means to me

Sun, 2014-03-23 23:28
I was awarded the Free Software Foundation Award for the Advancement of Free Software this weekend[1]. I'd been given some forewarning, and I spent a bunch of that time thinking about how free software had influenced my life. It turns out that it's a lot.

I spent most of the 90s growing up in an environment that was rather more interested in cattle than in computers, and had very little internet access during that time. My entire knowledge of the wider free software community came from a couple of CDs that contained a copy of the jargon file, the source code to the entire GNU project and an early copy of the m68k Linux kernel.

But that was enough. Before I'd even got to university, I knew what free software was. I'd had the opportunity to teach myself how an operating system actually worked. I'd seen the benefits of being able to modify software and share those modifications with others. I met other people with the same interests. I ended up with a job writing free software and collaborating with others on integrating it with upstream code. And, from there, I became more and more involved with a wider range of free software communities, finding an increasing number of opportunities to help make changes that benefited both me and others.

Without free software I'd have started years later. I'd have lost the opportunity to collaborate with people spread over the entire world. My first job would have looked very different, as would my entire career since then. Without free software, almost everything I've achieved in my adult life would have been impossible.

To me, free software means I've lived a significantly better life than would otherwise have been the case. But more than that, it means doing what I can to make sure that other people have the same opportunities. I am here because of the work of others. The most rewarding part of my continued involvement is the knowledge that I am part of a countless number of people working to make sure that others can tell the same story in future.

[1] I'd link to the actual press release, but it contains possibly the worst photograph of me in the entire history of the universe


Russ Allbery: Term::ANSIColor 4.03

Sun, 2014-03-23 20:49

This is a fairly small Perl module that provides a more convenient interface to the ANSI color escape sequences.
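Under the hood these are standard ECMA-48 SGR ("Select Graphic Rendition") sequences. A minimal shell sketch of the raw escapes the module wraps (the codes shown are the standard ones; the mapping to the module's colored() function is my reading of its documentation, not quoted from it):

```shell
# ESC [ <codes> m enables attributes (1=bold, 31=red); ESC [ 0 m resets.
# Roughly what Term::ANSIColor's colored('bold red text', 'bold red') expands to:
printf '\033[1;31mbold red text\033[0m back to plain\n'
```

The module's value is that you write attribute names instead of remembering these numeric codes.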

The primary change in this release is interesting for me but not so much for anyone else. It's the first of my core Perl modules that I've converted to Module::Build and to the new Perl test infrastructure that's now maintained in rra-c-util. (Yes, I know that Module::Build is apparently going to be dropped from Perl core, but the package also generates a Makefile.PL for backward compatibility.)

Starting with this release, all my subsequent package releases will start using the Lancaster Consensus environment variables to control whether to run non-default tests (namely AUTOMATED_TESTING, RELEASE_TESTING, and AUTHOR_TESTING). Hopefully this won't cause me too many problems. I'm currently setting AUTHOR_TESTING unconditionally, since I really want to see the results of those tests for all my code, but it's possible that will cause me too many problems with other people's code. (It would have been nice if the spec for AUTHOR_TESTING would let you set the value of the variable to the identity of the author whose tests you want run.)
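These variables are plain environment toggles that test code inspects at run time. A hedged shell sketch of the gating pattern (real Perl test files would use Test::More's skip_all for this; the messages here are invented):

```shell
# Run or skip a test tier depending on its Lancaster Consensus toggle.
run_tier() {
    tier="$1" toggle="$2"
    if [ -n "$toggle" ]; then
        echo "running $tier tests"
    else
        echo "skip: $tier tests not requested"
    fi
}

run_tier author    "$AUTHOR_TESTING"
run_tier release   "$RELEASE_TESTING"
run_tier automated "$AUTOMATED_TESTING"
```

Because the toggles are just environment variables, a CI system or a release script can opt in to any tier without patching the test suite.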

I like having all my release tests run under automated testing so that I can spot problems with the code that conditionally skips them, which is why I enable all the release tests when AUTOMATED_TESTING is set. This is probably peculiar to me.

The other changes in this release are all documentation and test suite fixes. There are no code changes in this release. Thanks to Olivier Mengué and David Steinbrunner for various bug reports.

You can get the latest release from the Term::ANSIColor distribution page.


Chris Lamb: Fingerspitzengefühl

Sun, 2014-03-23 18:53

Loanwords can often appear more insightful than they really are. How prescient of another culture to codify such a concept into a single word! They surely must have lofty and perceptive discussions if it was necessary to coin and codify one.

But whilst there is always the danger of over-inflating the currency of the loanword—especially the compound one—there must be a few that are worth the trouble.

One such term is fingerspitzengefühl. Literally meaning "finger-tips feeling" in German, it attempts to capture the idea of an intuitive sophistication, flair or instinct. Someone exhibiting fingerspitzengefühl would be able to respond appropriately, delicately and tactfully to certain things or situations.

Oliver Reichenstein clarifies the distinction from a personal taste:

Whether I like pink or not, sugar in my coffee, red or white wine, these things are a matter of personal taste. These are personal preferences, and both designers and non-designers have them. This is the taste we shouldn't bother discussing.

Whether I set a text’s line height to 100% or 150% is not a matter of taste, it is a matter of knowing the principles of typography.

However, whether I set a text’s line height at 150% or 145% is a matter of Fingerspitzengefühl; wisdom in craft, or sophistication.

Fingerspitzengefühl is therefore not innate and is probably refined over the years via subconscious—rather than conscious—study and reflection. However, it always flows naturally in the moment, not dissimilar to Castiglione's sprezzatura.

Personally, I am particularly enamoured of how this concept of a "trained taste" appears to blur the line between an objective and subjective aesthetic, putting me somewhat at odds with those who baldly assert that taste is "obviously" entirely individual.


Gregor Herrmann: RC bugs 2013/49 - 2014/12

Sun, 2014-03-23 16:42

since people keep talking to me about my RC bug fixing activities, I thought it might be time again for a short report. to be honest, I mostly stopped my almost daily work at some point in december, partly because the overall number of RC bugs affecting both testing & unstable is quite low (& therefore the number of easy-to-fix bugs), due to the auto-removal policy of the release team (kudos!). – but I still kept track about RC bugs I worked on, & here's the list; as you can see, mostly pkg-perl bugs …

  • #711430 – src:libcgi-cookie-splitter-perl: "libcgi-cookie-splitter-perl: FTBFS with perl 5.18: test failures"
    add patch from CPAN RT (pkg-perl)
  • #711436 – src:libdbix-abstract-perl: "libdbix-abstract-perl: FTBFS with perl 5.18: test failures"
    upload new upstream release (pkg-perl)
  • #711444 – src:libgraph-readwrite-perl: "libgraph-readwrite-perl: FTBFS with perl 5.18: test failure"
    upload new upstream release (pkg-perl)
  • #711620 – src:libthread-queue-any-perl: "libthread-queue-any-perl: FTBFS with perl 5.18: test failures"
    upload new upstream release (pkg-perl)
  • #724137 – src:libschedule-cron-perl: "libschedule-cron-perl: FTBFS: Tests failures"
    upload new upstream release
  • #725570 – src:pgfincore: "build postgresql-9.3 extension only"
    send improved debdiff to the bug, thanks to Martin Pitt
  • #730906 – src:libredis-perl: "libredis-perl: FTBFS: Failed tests"
    upload new upstream release (pkg-perl)
  • #730925 – src:libgeo-ip-perl: "libgeo-ip-perl: FTBFS: Failed tests"
    upload new upstream release (pkg-perl)
  • #731794 – libanyevent-perl: "FBFTS: AnyEvent fails t/81_hosts.t on system with OpenDNS"
    upload new upstream release prepared by Xavier Guimard (pkg-perl)
  • #733364 – src:libjxp-java: "libjxp-java: FTBFS: [javac] /«PKGBUILDDIR»/src/java/org/onemind/jxp/servlet/ error: package javax.servlet does not exist"
    add missing build dependency (pkg-java)
  • #733379 – src:libonemind-commons-java-java: "libonemind-commons-java-java: FTBFS: [javac] /«PKGBUILDDIR»/src/java/org/onemind/commons/java/util/ error: package javax.servlet does not exist"
    add missing build dependency (pkg-java)
  • #733429 – src:libcatalyst-modules-perl: "libcatalyst-modules-perl: FTBFS: Tests failed"
    upload new upstream release (pkg-perl)
  • #735054 – src:libpoet-perl: "libpoet-perl: FTBFS: test failures: t/App.t"
    add missing (build) dependency (pkg-perl)
  • #735367 – bti: "bti: stopped working after SSL/TLS traffic restriction, due to obsolete URL"
    upload new upstream release
  • #735823 – src:libgtk3-perl: "libgtk3-perl: FTBFS: Tests failures"
    set HOME for tests (pkg-perl)
  • #736275 – libmarc-xml-perl: "libmarc-xml-perl: CVE-2014-1626: XML External Entity privilege escalation"
    upload new upstream release (pkg-perl)
  • #736718 – libalien-sdl-perl: "libalien-sdl-perl: depends on libtiff4 which is going away"
    fix dependencies (pkg-perl)
  • #736820 – libnet-frame-perl: "libnet-frame-perl: package seems to depend on things that are not packaged"
    upload new upstream release prepared by Daniel Lintott (pkg-perl)
  • #736866 – src:libjpfcodegen-java: "[src:libjpfcodegen-java] Sourceless file (minified)"
    drop sourceless minified .js files (pkg-java)
  • #737739 – src:mumble: "mumble: CVE-2014-0044 CVE-2014-0045"
    sponsor security upload for Chris Knadle
  • #738419 – src:libsvn-hooks-perl: "libsvn-hooks-perl: FTBFS: Tests failures"
    fix (build) dependencies (pkg-perl)
  • #738431 – src:latex-make: "latex-make: FTBFS: Font T1/cmr/m/n/10=ecrm1000 at 10.0pt not loadable: Metric (TFM) file not found"
    propose patch, then fixed by maintainer
  • #739144 – src:librpc-xml-perl: "librpc-xml-perl: FTBFS: Test failure"
    upload new upstream release (pkg-perl)

ps: the how-can-i-help package is a nice tool for finding RC bugs in packages you care about. install it if you haven't so far!


Dominique Dumont: Easier Lcdproc package upgrade with automatic configuration merge

Sun, 2014-03-23 13:42


This post explains how the next lcdproc package will provide an easier upgrade with automatic configuration merging.

Here’s the current situation: lcdproc ships several configuration files, including /etc/LCDd.conf. This file is modified upstream at every lcdproc release to bring in configuration for new lcdproc drivers. On the other hand, it is always customized to suit the specific hardware of the user’s system. So upgrading the package always leads to a conflict, and the user is required to choose between the currently installed version and the upstream version.

The next version of libconfig-model-lcdproc-perl will ask the user whether to perform an automatic merge of the configuration: upstream changes are taken into account while the user’s changes are preserved.

The configuration upgrade shown here is based on Config::Model and can be applied to other packages.

Current lcdproc situation

To function properly, the lcdproc configuration must always be adapted to suit the user’s hardware. Since the upstream configuration is often updated as well, the user is often shown this question on the next upgrade:

Configuration file '/etc/LCDd.conf'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
   What would you like to do about it ?  Your options are:
    Y or I  : install the package maintainer's version
    N or O  : keep your currently-installed version
      D     : show the differences between the versions
      Z     : start a shell to examine the situation
 The default action is to keep your current version.
*** LCDd.conf (Y/I/N/O/D/Z) [default=N] ?

This question is asked in the middle of an upgrade and can be puzzling for an average user.

Next package with automatic merge

Starting from lcdproc 0.5.6, the configuration merge is handled automatically by the packaging script with the help of Config::Model::Lcdproc.

When lcdproc is upgraded to 0.5.6, the following changes are visible:
* lcdproc depends on libconfig-model-lcdproc-perl
* the user is asked once, via debconf, whether to use automatic configuration upgrades or not.
* no further questions are asked (no ucf-style questions).

For instance, here’s an upgrade from lcdproc_0.5.5 to lcdproc_0.5.6:

$ sudo dpkg -i lcdproc_0.5.6-1_amd64.deb
(Reading database ... 322757 files and directories currently installed.)
Preparing to unpack lcdproc_0.5.6-1_amd64.deb ...
Stopping LCDd: LCDd.
Unpacking lcdproc (0.5.6-1) over (0.5.5-3) ...
Setting up lcdproc (0.5.6-1) ...
Changes applied to lcdproc configuration:
- server ReportToSyslog: '' -> '1' # use standard value
update-rc.d: warning: start and stop actions are no longer supported; falling back to defaults
Starting LCDd: LCDd.
Processing triggers for man-db (2.6.6-1) ...

Note: the automatic upgrade currently applies only to LCDd.conf. The other configuration files of lcdproc are handled the usual way.

Other benefits

Users will also be able to:
* check the lcdproc configuration with sudo cme check lcdproc
* edit the configuration with a GUI (see Managing Lcdproc configuration with cme for more details)

Here’s a screenshot of the GUI:

More information

* libconfig-model-lcdproc-perl package page. This package provides a configuration model for lcdproc.
* This blog explains how this model is generated from upstream LCDd.conf.
* How to adapt a package to perform configuration upgrade with Config::Model

Next steps

Automatic configuration merge can be applied to other packages, but my free time is already taken by the maintenance of Config::Model and the existing models, so there’s no room for me to take over another package.

On the other hand, I will definitely help people who want to provide automatic configuration merge on their packages. Feel free to contact me on:
* config-model-user mailing list
* debian-perl mailing list (where Config::Model is often used to maintain Debian packaging files with cme)
* #debian-perl IRC channel

All the best

Tagged: config-model, configuration, debian, Perl, upgrade