Planet Debian

Planet Debian - http://planet.debian.org/

Craig Small: IPv6 and bridges

Wed, 2014-10-01 07:54

I’ve reported a bug on bridge-utils, but perhaps someone has already seen this and has a fix. My virtual IPv6 machines often lose connectivity from time to time. Tracking this down, it seems that the router sends Neighbor Solicitations (IPv6 ARPs basically). The physical interface of the bridge group receives it, but the vnet0 one does not.

Using tshark I can see the pings on vnet0, but on br0 and eth1 I see both the ping requests and the NS packets; the NS packets never appear on vnet0. So there is something odd going on with the bridge interface.
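
For anyone reproducing this, capturing ICMPv6 on each interface in turn makes the difference visible (a sketch; the capture filter is standard pcap syntax and needs root):

$ tshark -i eth1 -f 'icmp6'
$ tshark -i br0 -f 'icmp6'
$ tshark -i vnet0 -f 'icmp6'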

If I remove and add the vnet0 interface from the bridge group, the connectivity comes back.
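
The manual workaround amounts to something like this sketch (bridge and interface names as above):

$ sudo brctl delif br0 vnet0
$ sudo brctl addif br0 vnet0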


Holger Levsen: 20141001-lts-september-2014

Wed, 2014-10-01 04:04
My LTS September

In the beginning of September I spent quite some time fixing bugs in the Debian Security Tracker, which now, thanks to the awesome CSS from Ulrike, looks really good and professional! There are still some bugs to fix and features I'd like to add, e.g. the ability to in- and exclude (old)oldstable/lts/backports/nodsa/EOL everywhere. It was fun to squash #742382 #642987 #742855 #762214 #479727 #610220 #611163 and #755800!

And then I also discovered dgit, as in "I've used it for the first time". It was so great, I immediately did a backport of it and uploaded it to wheezy-backports.

So during the last month I made these uploads to squeeze-lts:

  • DLA 56-1 for wordpress, fixing CVE-2014-2053 CVE-2014-5204 CVE-2014-5205 CVE-2014-5240 CVE-2014-5265 CVE-2014-5266
  • DLA 57-1 for libstruts1.2-java, fixing CVE-2014-0114
  • DLA 60-1 for icinga, fixing CVE-2013-7108 and CVE-2014-1878
  • DLA 61-1 for libplack-perl, fixing CVE-2014-5269
  • DLA 62-1 for nss, fixing CVE-2014-1568
  • DLA 66-1 for apache2, fixing CVE-2013-6438 CVE-2014-0118 CVE-2014-0226 CVE-2014-0231

Plus I filed #762715, asking the devscripts maintainers to 'add an --lts option to dch' and #763339 against lintian: please 'recognize "squeeze-lts" as suite'.

Here are three things you could do to contribute to Debian LTS:

Thanks to everybody supporting LTS already!


Keith Packard: chromium-dri3

Wed, 2014-10-01 01:51
Chromium (the browser) and DRI3

I got a note on IRC a week ago that Chromium was crashing with DRI3.

The Google team working on Chromium eventually sent me a link to the bug report. That's secret Google stuff, so you won't be able to follow the link, even though it's a bug in a free software application when running on free software drivers.

There's a bug report in the freedesktop bugzilla which looks the same to me.

In both cases, the recommended “fix” was to switch from DRI3 back to DRI2. That's not exactly a great plan, given that DRI3 offers better security between GPU-using applications, which seems like a pretty nice thing to have when you're running random GL applications from the web.

Chromium Sandboxing

I'm not entirely sure how it works, but Chromium creates a process separate from the main browser engine to talk to the GPU. That process has very limited access to the operating system via some fancy library adventures. Presumably, the hope is that security bugs in the GL driver would be harder to leverage into a remote system exploit.

Debugging in this environment is a bit tricky as you can't simply run chromium under gdb and expect to be able to set breakpoints in the GL driver. Instead, you have to run chromium with a magic flag which causes the GPU process to pause before loading the driver so that you can connect to it with gdb and debug from there, along with a flag that lets you see crashes within the GPU process, and the usual flag that causes chromium to ignore the GPU blacklist (which seems to always include the Intel driver for one reason or another):

$ chromium --gpu-startup-dialog --disable-gpu-watchdog --ignore-gpu-blacklist

Once Chromium starts up, it will print out a message telling you to attach gdb to the GPU process and send that process a SIGUSR1 to continue it. Now you can happily debug and get a stack trace when the crash occurs.
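
The attach sequence looks something like the following sketch; the PID (12345 here) stands in for whatever Chromium prints, and the handle line tells gdb to pass the SIGUSR1 through to the process:

$ gdb -p 12345
(gdb) handle SIGUSR1 nostop noprint pass
(gdb) continue
# then, from another terminal, release the paused GPU process:
$ kill -USR1 12345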

Locating the Bug

The bug manifested with a segfault at the first access to a DRI3-allocated buffer within the application. We've seen this problem in the past; whenever buffer allocation fails for some reason, the driver ignores the problem and attempts to de-reference through the (NULL) buffer pointer, causing a segfault. In this case, Chromium called glClear, which tried (and failed) to allocate a back buffer causing the i965 driver to subsequently segfault.

We should probably go fix the i965 driver to not segfault when buffer allocation fails, but that wouldn't provide a lot of additional information. What I have done is add some error messages in the DRI3 buffer allocation path which at least tell you why the buffer allocation failed. That patch has been merged to Mesa master, and should also get merged to the Mesa stable branch for the next stable release.

Once I had added the error messages, it was pretty easy to see what happened:

$ chromium --ignore-gpu-blacklist
[10618:10643:0930/200525:ERROR:nss_util.cc(856)] After loading Root Certs, loaded==false: NSS error code: -8018
libGL: pci id for fd 12: 8086:0a16, driver i965
libGL: OpenDriver: trying /local-miki/src/mesa/mesa/lib/i965_dri.so
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL: Can't open configuration file /home/keithp/.drirc: Operation not permitted.
libGL error: DRI3 Fence object allocation failure Operation not permitted

The first two errors were just the sandbox preventing Mesa from using my GL configuration file. I'm not sure how that's a security problem, but it shouldn't harm the driver much.

The last error is where the problem lies. In Mesa, the DRI3 implementation uses a chunk of shared memory to hold a fence object that lets Mesa know when buffers are idle without using the X connection. That shared memory segment is allocated by creating a temporary file using the O_TMPFILE flag:

fd = open("/dev/shm", O_TMPFILE|O_RDWR|O_CLOEXEC|O_EXCL, 0666);

This call “cannot fail” as /dev/shm is used by glibc for shared memory objects, and must therefore be world writable on any glibc system. However, with the Chromium sandbox enabled, it returns EPERM.

Running Without a Sandbox

Now that the bug appears to be in the sandboxing code, we can re-test with the GPU sandbox disabled:

$ chromium --ignore-gpu-blacklist --disable-gpu-sandbox

And, indeed, without the sandbox getting in the way of allocating a shared memory segment, Chromium appears happy to use the Intel driver with DRI3.

Final Thoughts

I looked briefly at the Chromium sandbox code. It looks like it needs to know intimate details of the OpenGL implementation for every possible driver it runs on; it seems to contain a fixed list of all possible files and modes that the driver will pass to open(2). That seems incredibly fragile to me, especially when used in a general Linux desktop environment. Minor changes in how the GL driver operates can easily cause the browser to stop working.


Vincent Sanders: It is a bad plan that admits of no modification

Tue, 2014-09-30 20:05
I find it somewhat interesting that, thousands of years later, our society still uses Publilius Syrus' sententiae, though I imagine the tendency to leave well enough alone means such phrases stay in usage.

One weekend Steve McIntyre asked me if I could find a source of some 40mm fans for some systems with pretty strict requirements. They needed to be long-life and shift a lot of air to combat a persistent overheating issue.

I sat with him and went through Farnell's utterly hateful parametric web interface and eventually came up with a couple of options, which were very expensive. Only then did I stop and ask what the actual problem was.

Steve showed me one of the Debian ARM buildd boxes, which are Marvell development machines. These systems are powerful quad-core machines housed in compact steel enclosures.

There is a single 40mm fan trying to provide cooling for the entire enclosure. When the units are placed horizontally and used intermittently this proves adequate. Unfortunately, when the systems are arranged vertically in a rack and run at full load continuously, they often overheat and have to be restarted. In addition, the small high-speed fans need replacing frequently as their bearings wear out quickly.

This was obviously causing some issues for the ARM Debian ports which Steve wanted to rectify. After talking the problem through for a while we came to the conclusion we could use much larger 60mm fans to blow air directly through the top of the case onto the cpu heatsink.

Larger fans can be run much more slowly to move a similar volume of air to the smaller 40mm fans which gives a much longer service life.

Steve proceeded to order enough parts to allow us to modify all the Debian systems; this worked out cheaper than a single "special" 40mm high-volume fan.

I acquired a rather large steel hole punch. I chose this tool because it produces a much superior finish to a hole cutter, and this project demanded a high level of finish (not to mention I loved having a valid excuse to own and use a huge allen key!).

If we had simply been modifying a single case I would have measured and marked up by hand. With the prospect of altering at least eight I laser cut a template from plywood which Andy Simpkins took great glee in excessively annotating.

We also used the opportunity to add bolt holes to securely attach the 2.5 inch SATA drives instead of using sticky pads.

Steve and I modified a single system to begin with, both to check our alignment and the efficacy of the change. We were pleasantly surprised to discover that hoiby could now repeatedly do kernel compiles with all four cores flat out, which was not possible before. The measured CPU temperature, which had previously been around 90°C, did not rise above 40°C.

Steve, Andy and I then arranged a day where we took all the remaining units out of the rack at ARM, modified and returned them. We used the facilities at the Cambridge Makespace where I am a member to do the modifications.

I broke two 3mm drill bits and dulled a 4mm bit drilling all the holes. Roger Smith was good enough to loan us the use of his "Christmas tree bit" to ream the fan hole out to 16mm so we could thread the hole punch and cut the 60mm fan aperture out.

We managed to get quite an assembly line going and, in my opinion, the results look pretty professional.

It has been several months since we did this work and these systems continue to run without issue. To complete the story we can see some graphs courtesy of the DSA munin instance.

You can clearly see the huge drop in temperature at the end of Week 25 despite the continuously high CPU load. Also there is only a single gap in the data after the changes (these indicate crashes where data was not recorded) where before there were frequent and extensive times where the systems were simply unusable.

One reason I continue to enjoy Debian so much is the wide variety of ways in which I can contribute, not only by maintaining my packages. Sometimes this kind of work does not receive the credit it deserves, and hopefully this post highlights a small part of the frantic paddling that goes on under the serene surface of the Debian project to keep things "just working".

Junichi Uekawa: Start of fourth quarter this year.

Tue, 2014-09-30 20:04
Start of fourth quarter this year. How is everything going?


Lisandro Damián Nicanor Pérez Meyer: Qt5 in Jessie: we will release with 5.3.2

Tue, 2014-09-30 12:08
Qt 5.3.2 entered testing a few hours ago. This will be the version of Qt we release with Debian Jessie, and it happens to be a nice coincidence, because upstream focused on stability for the 5.3 branch.

I'll now focus on fixing as many bugs as possible and on backporting Qt5 to Wheezy.

Let me warn you: if you are an upstream for a Qt4-based project, be sure to be ready to switch to Qt5. If you are a maintainer of a Qt4-based project, you had better start asking your upstream to be ready for it :)

Gunnar Wolf: Diego Gómez: Imprisoned for sharing

Tue, 2014-09-30 09:01

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... Not only what I do day to day, but what I promote and believe to be right: Sharing academic articles.

Diego is a Colombian working towards his Master's degree on conservation and biodiversity in Costa Rica. He is now facing up to eight years' imprisonment for... Sharing a scholarly article he did not author on Scribd.

Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and as the saying goes, they should benefit the public. And the public is everybody, is all of us.

And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early last year... It is exactly the same thing. Thankfully (although, sadly, only after the fact), thousands of people strongly stood on Aaron's side of that demand. Please sign the EFF petition to help Diego, share this, and try to spread the word on the real-world need for Open Access mandates for academics!

Some links with further information:


Russell Coker: Links September 2014

Tue, 2014-09-30 08:55

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about "good people with guns" are lies; the NRA are evil people with guns.

Related posts:

  1. Links July 2014 Dave Johnson wrote an interesting article for Salon about companies...
  2. Links May 2014 Charmian Gooch gave an interesting TED talk about her efforts...
  3. Links September 2013 Matt Palmer wrote an insightful post about the use of...

Raphaël Hertzog: My Debian LTS report for September

Tue, 2014-09-30 08:24

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said that it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security-wise. It looks like we're going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple java packages.
  • I reviewed many CVEs on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don't estimate that a security update is needed, usually because the problem is minor and/or fixing it has more chances of introducing a regression than of helping).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVEs concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian, in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on cve.mitre.org and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.
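
To illustrate that last step: assuming deb-src entries for oldstable are configured, the check can be as simple as fetching the source and grepping for the code touched by the upstream fix (the package name and search pattern below are placeholders):

$ apt-get source mypackage/oldstable
$ cd mypackage-*/
$ grep -rn 'name_of_patched_function' .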

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.



Mario Lang: A simple C++11 concurrent workqueue

Tue, 2014-09-30 07:20

For a little toy project of mine (a wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: in the past, whenever I toyed with threaded code, I ended up having a lot of deadlocks and generally being confused. It appears that I have finally understood enough of this craziness to be able to come up with the small helper class below.

The problem

We want to spread work amongst all available CPU cores. There are no dependencies between items in our work queue. So every thread can just pick up and process an item as soon as it is ready.

The solution

This simple implementation makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: you provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining items are processed and all background threads are joined.

The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9; earlier tests had shown that std::thread::hardware_concurrency() always returned 1. I don't know exactly when GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9 it is usable. A prerequisite on Linux is a mounted /proc.

The number of maximum items per thread in the queue defaults to 1. If the queue is full, calls to the function operator will block.

So the most basic usage example is probably something like:

int main()
{
  typedef std::string item_type;

  distributor<item_type> process([](item_type &item) {
    // do work
  });

  while (/* input */)
    process(std::move(/* item */));

  return 0;
}

That is about as simple as it can get, IMHO.

The code can be found in the GitHub project mentioned above. However, since the class template is relatively short, here it is.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template <typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor &operator=(distributor &&) = delete;

  ~distributor()
  {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
      notify_all();
    }
    for (auto &&thread: threads) thread.join();
  }

  void operator()(Type &&value)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_one();
  }

private:
  template <typename Function>
  void consume(Function process)
  {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        notify_one();
        lock.unlock();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};
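
To try the class out, a build can be as simple as the following sketch, assuming the template is saved as distributor.hh and the usage example as wordcount.cc (hypothetical file names); the -pthread flag is required for the C++11 threading primitives:

$ g++ -std=c++11 -pthread -o wordcount wordcount.cc
$ ./wordcount < dump.xml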

If you have any comments regarding the implementation, please drop me a mail.


Francois Marier: Encrypted mailing list on Debian and Ubuntu

Tue, 2014-09-30 00:30

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.

Requirements

What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed using the private key of a subscriber.

What the list then does is decrypt the email and encrypt it individually for each subscriber. This protects the emails while in transit, but is vulnerable to the list server itself being compromised, since every list email transits through there at some point in plain text.

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/main.cf:

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/main.cf:

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist list@example.org and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to LISTNAME-sendkey@example.org. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
     - email: francois@fmarier.org
       key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Dirk Eddelbuettel: Rcpp 0.11.3

Mon, 2014-09-29 20:39

A new release 0.11.3 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package has been uploaded too.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 273 packages on CRAN depend on Rcpp for making analyses go faster and further.

This release brings a fairly large number of continued enhancements, fixes and polishing to Rcpp. These were provided by a total of seven different contributors---which is a new record as well.

See below for a detailed list of changes extracted from the NEWS file, but some highlights included in this release are

  • Several API cleanups, polishes and a pre-announced code removal
  • New InternalFunction interface, and new Timer functionality.
  • More robust functionality of Rcpp Attributes as well as a new dryRun option.
  • The Rcpp FAQ was updated, as was the main Description: in the DESCRIPTION file.
  • Rcpp.package.skeleton() can now deploy functionality from pkgKitten to create Rcpp packages that purr.

One sore point, however, is that we missed that packages using Rcpp Modules appear to require a rebuild. We are sorry for the inconvenience; this has highlighted a shortcoming in our fairly robust and extensive tests. While we test our packages against all known CRAN dependents, such tests check for the ability to compile and run freshly and not whether previously built packages still run. We intend to augment our testing in this direction to avoid a repeat occurrence of such a misfeature.

Changes in Rcpp version 0.11.3 (2014-09-27)
  • Changes in Rcpp API:

    • The deprecation of RCPP_FUNCTION_* which was announced with release 0.10.5 last year is proceeding as planned, and the file macros/preprocessor_generated.h has been removed.

    • Timer no longer records time between steps, but times from the origin. It also gains a get_timers(int) method that creates a vector of Timers that have the same origin. This is modelled on the Rcpp11 implementation and is more useful for situations where we use timers in several threads. Timer also gains a constructor taking a nanotime_t to use as its origin, and an origin method. This can be useful for situations where the number of threads is not known in advance but we still want to track what goes on in each thread.

    • A cast to bool was removed in the vector proxy code as inconsistent behaviour between clang and g++ compilations was noticed.

    • A missing update(SEXP) method was added thanks to pull request by Omar Andres Zapata Mesa.

    • A proxy for DimNames was added.

    • A no_init option was added for Matrices and Vectors.

    • The InternalFunction class was updated to work with std::function (provided a suitable C++11 compiler is available) via a pull request by Christian Authmann.

    • A new_env() function was added to Environment.h.

    • The return value of range eraser for Vectors was fixed in a pull request by Yixuan Qiu.

  • Changes in Rcpp Sugar:

    • In ifelse(), the returned NA type was corrected for operator[].

  • Changes in Rcpp Attributes:

    • Include LinkingTo in DESCRIPTION fields scanned to confirm that C++ dependencies are referenced by package.

    • Add dryRun parameter to sourceCpp.

    • Corrected issue with relative path and R chunk use for sourceCpp.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was updated with respect to OS X issues.

    • A new entry in the Rcpp-FAQ clarifies the use of licenses.

    • Vignette build results are no longer copied to /tmp to please CRAN.

    • The Description in DESCRIPTION has been shortened.

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function will now use pkgKitten package, if available, to create a package which passes R CMD check without warnings. A new Suggests: has been added for pkgKitten.

    • The modules=TRUE case for Rcpp.package.skeleton() has been improved and now runs without complaints from R CMD check as well.

  • Changes in Rcpp unit test functions:

    • Functions from the RUnit package are now prefixed with RUnit::

    • The testRcppModule and testRcppClass sample packages now pass R CMD check --as-cran cleanly with no NOTES or WARNINGS.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Marco d'Itri: CVE-2014-6271 fix for Debian woody, sarge, etch and lenny

Mon, 2014-09-29 03:51

Very old Debian releases like woody (3.0), sarge (3.1), etch (4.0) and lenny (5.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug) and Florian Weimer's patch which restricts the parsing of shell functions to specially named variables:

http://ftp.linux.it/pub/People/md/bash/
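
As an aside, the widely circulated one-liner below (not from the original post) checks whether a given bash binary still has the CVE-2014-6271 problem; a patched shell prints only "test":

$ env x='() { :;}; echo vulnerable' bash -c 'echo test'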

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.


Jonathan Dowland: Letter to Starburst magazine

Mon, 2014-09-29 03:51

I recently read a few issues of Starburst magazine, which is good fun, but a brief mention of the Man Booker prize in issue 404 stoked the fires of the age-old SF-versus-mainstream argument, so I wrote the following:

Dear Starburst,

I found it perplexing that, in "Brave New Words", issue 404, whilst covering the Man Booker shortlist, Ed Fortune tried to simultaneously argue that genre readers "read broadly" yet only Howard Jacobson's novel would be of passable interest. Aside from the obvious logical contradiction, he is sadly overlooking David Mitchell's critically lauded and indisputably SF&F novel "The Bone Clocks", which it turned out was also overlooked by the short-listers. Still, Jacobson's novel made it, meaning SF&F represents 16% of the shortlist. Not too bad, I'd say.

All the best & keep up the good work!

As it happens I'm currently struggling through "J". I'm at around the half-way mark.


Ean Schuessler: RoboJuggy at JavaOne

Sun, 2014-09-28 18:14

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward-looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and David's friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results.

Bruno is an avid puppeteer, a global organizer of java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden’s RSMedia and various versions of David’s hobby servo based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered, and the new Raspberry Pi B+ seemed like the perfect low-power brain to make that happen. I like the Debian-based Raspbian distributions but had lately started using the "netinst" Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don't need. I'd recommend anyone interested in duplicating our work to start their journey there:

Raspbian UA Net Installer

Robots seem like an embedded application but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions:

Setting up ROS Hydro on the Raspberry Pi

Building from source means that all of your install ends up being "isolated" (in ROS speak) and your file locations and build instructions end up being subtly different. As explained in the linked article, this process is also very time-consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way, if you make a mess of things in later steps, you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:

sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz

Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi runs the ARM CPU so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this but since this was for JavaOne I also wanted JDK8, which isn’t available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM.

Java 8 JDK for ARM

At this point you are ready to start installing the ROS Java packages. I'm pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for Raspberry Pi. I started by following these directions for ROS Java, but with a few exceptions (you have to click the "install from source" link in the page to see the right stuff):

Installing ROS Java on Hydro

Now these instructions are good but this is a Pi running Debian and not an Ubuntu install. You won’t run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently “chain” workspaces in ROS but I didn’t understand this well enough to get it working so what I did was this:

> mkdir -p ~/rosjava
> wstool init -j4 ~/rosjava/src https://raw.github.com/rosjava/rosjava/hydro/rosjava.rosinstall
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava
# Make sure we've got all rosdeps and msg packages.
> rosdep update
> rosdep install --from-paths src -i -y

and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.

> catkin_make_isolated --install

Like the main ROS install this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same “build from source” installs there. This will let you try your steps out on a much faster system before you try them out in the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally showed up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch out for that mistake. Here is a picture of the RSMedia when it first came out of the box:

Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs, and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan-and-tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan-and-tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of eBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy-to-use, high-level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board, but that would be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won't regret buying it.

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately, a lot of great information about the RSMedia has floated away since its heyday 5 years ago, but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge-based website at http://rsmediadevkit.sourceforge.net.

That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you go, because they don't take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note: if all you want to do is solder onto the control board and not replace the head, then it's feasible to solder the board in place without completely disassembling the unit. Here are some photos of the disassembly:

Now I also had to start adjusting the puppet head, building an armature for the motors to control it, and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have learned about cardboard is that if you get something going with it and you need it to be a little more production-strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough, because the resin soaks through the fibers of the cardboard and hardens around them. You will want to do this in a well-ventilated area, but it's a great way to build super-tough prototypes.

Another prototyping trick I can suggest is using a combination of Velcro and zipties to hook things together. The result is surprisingly strong and still easy to take apart if things aren't working out. Velcro self-adhesive pads stick to rubber like magic, and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I'm telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.

As I was eating my chicken hearts I was also connecting the pan and tilt armature onto the puppet’s jaw and eye assembly. It took me most of the evening to get all this going but by about one in the morning things were starting to look good!

I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not "built in". I tried to follow the provided instructions but have not (then or since) been able to get that working.

Using “unofficial” messages with ROS Java

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there are a set of pins that provide a serial interface to the ARM based embedded Linux computer that controls the robot. To do that I followed these excellent instructions:

Connecting to the RSMedia Linux Console Port

After some sweaty time bent over a magnifying glass I had success:

I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek, I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner.
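
With that adapter in hand, the console session can be opened with something along these lines (the device node /dev/ttyUSB0 is an assumption, since it depends on how the adapter enumerates; hardware flow control is best switched off in minicom's serial port setup, Ctrl-A O):

$ minicom -D /dev/ttyUSB0 -b 115200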

It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn't get it to respond to commands, but I quickly realized I had flow control turned on. Once it was turned off, I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource, which let me get all kinds of body movements going:

A collection of useful commands for the RSMedia

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry's latest and greatest robotics framework that could run on the RSMedia without being tethered to power, and I had most of a connection to Java going. Now I just had to get all those pieces working together. The only problem was that time was running out: I only had a couple of days until my talk and still had to pack and square things away at work.

The last day was spent doing things that I wouldn't be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature. He used a mix of oil paint and rubber cement, which stuck to the mask beautifully.

I bought battery packs for the USB Pi power and the 6v motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily during all this my beautiful and ever so supportive girlfriend Becca had helped me get packed or I probably wouldn’t have made it out the door.

Welcome to San Francisco

THIS ARTICLE IS STILL BEING WRITTEN

 


Jonathan Dowland: Puppet and filesystem mounts

Sun, 2014-09-28 13:50

Well, not long after writing my last post I've found some time to write up some of my puppet adventures, sooner than I imagined...

Outside work, I sys-admin a VPS instance that is shared by a few friends. We recently embarked in a project to migrate to a different VPS instance and I took the opportunity to revisit how we managed home directories.

I've got all the disk space allocated to the VM set up as LVM physical volumes. This has proven very useful for later expansion: we can do it all live. Each user on the VM may have one or more UNIX accounts that they use. Therefore, in the old scheme, for the jon user, we mounted an allocation of disk space at /home/jons, put the account home directories under it at e.g. /home/jons/jon, symlinked /home/jon -> /home/jons/jon for brevity, and set that as the home field in the passwd entry. This worked surprisingly well, but I was always uncomfortable with having a symlink in the home path (and some software was, too.)

For the new machine, I decided to try bind mounts. Short story: they just work. However, the mtab (and df output) can look a little cluttered, and mount order becomes quite important. To manage the set-up, I wrote a few puppet snippets. First, a convenience definition to make the actual bind-mounts a little less verbose.

define bindmount($device) {
  mount { $name:
    device  => $device,
    ensure  => mounted,
    fstype  => 'none',
    options => 'bind',
    dump    => 0,
    pass    => 2,
    require => File[$device],
  }
}

Once that was in place, we then needed to ensure that the directory on which the LV was to be mounted, and the one where the user's home would be bind-mounted, actually exist; we also need to mount the underlying LV and set up the bind mount. The dependency chain is actually a graph, but the majority of dependencies are quite linear:

define bindmounthome() {
  file { ["/home/${name}s", "/home/${name}"]:
    ensure => directory,
  } -> # depended upon by
  mount { "/home/${name}s":
    device  => "LABEL=${name}",
    ensure  => mounted,
    fstype  => 'ext4',
    options => 'defaults',
    dump    => 0,
    pass    => 2,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${name}s/${name}",
  }
  file { "/home/${name}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${name}s"]],
  }
}

That covers the underlying mounts and the "primary" accounts. However, the point of this exercise was to support the secondary accounts for each user. There's a bit of repetition here, and with some refactoring both this and the preceding bindmounthome definition could be a bit shorter, but I'm not sure whether that would be at the expense of legibility:

define seconduser($parent) {
  file { "/home/${name}":
    ensure => directory,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${parent}s/${name}",
  }
  file { "/home/${parent}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${parent}s"]],
  }
}

I had to re-read the above a couple of times just now to convince myself that I hadn't missed the dependencies between the mount invocations towards the bottom, but they're there: so, puppet will always run the mount for /home/jons before /home/jons/jon. Since puppet is writing to the fstab, this means that the ordering is correct and a sequential start-up will work.
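
A quick sanity check on the running system is to inspect the mount table; for the jon/jons example above, the LV mount should appear before the bind mount. Output along these lines is what to expect (the device name is illustrative):

$ mount | grep '/home/jon'
/dev/mapper/vg0-jons on /home/jons type ext4 (rw)
/home/jons/jon on /home/jon type none (rw,bind)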

If you want anything cleverer than serialised, one-at-a-time mounting at boot, I think one would have to use something other than trusty-old fstab for the job. I'm planning to look at Systemd's mount unit type, but there's no rush as this particular host is still running sysvinit for the time being.


Clint Adams: Banana Pi is a real thing

Sun, 2014-09-28 13:13

Now that I've almost caught up with life after an extended stint on the West Coast, it's time to play.

Like Gunnar, I acquired a Banana Pi courtesy of LeMaker.

My GuruPlug (courtesy me) and my Excito B3 (courtesy the lovely people at Tor) are giving me a bit of trouble in different ways, so my intent is to decommission and give away the GuruPlug and Excito B3, leaving my DreamPlug and the Banana Pi to provide the services currently performed by the GuruPlug, Excito B3, and DreamPlug.

The Banana Pi is presently running Bananian on a 32G SDHC (Class 10) card. This is close to wheezy, and appears to have a mostly-sane default configuration, but I am not going to trust some random software downloaded off the Internet on my home network, so I need to be able to run Debian on it instead.

My preliminary belief is that the two main obstacles are Linux and U-Boot. Bananian 14.09 comes with Linux 3.4.90+ #1 SMP PREEMPT Fri Sep 12 18:13:45 CEST 2014 armv7l GNU/Linux, whatever that is, and U-Boot SPL 2014.04-10694-g2ae8b32 (Sep 03 2014 - 20:53:14). I don't yet know what the status of mainline/Debian support is.

Someone gave me a wooden cigar box to use as a case, which is not working out quite as hoped. I also found that my hack to power a 3.5" SATA drive does not work, so I'll either need to hammer on that some more or resolve to use a 2.5" drive instead.

memory:

             total       used       free     shared    buffers     cached
Mem:        993700      36632     957068          0       2248      11136
-/+ buffers/cache:      23248     970452
Swap:       524284       1336     522948

cpu:

Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1192.96
processor       : 1
BogoMIPS        : 1197.05
Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4
Hardware        : sun7i
Revision        : 0000
Serial          : 03c32de75055484880485278165166c9

Jonathan Dowland: What have I been up to?

Sun, 2014-09-28 12:59

It's been a little while since I've written about what I've been up to. The truth is I've been busy with moving house - and I'll write a bit more about that at another time. But aside from that there have been some bits and bobs.

I use a little tool called archivemail to tidy up old listmail (my policy is to retain 30 days of listmail for most lists). If I unsubscribe from a list, then eventually I end up with an empty mail folder corresponding to that list. I decided it would be nice to extend archivemail to delete mailboxes if, after the archiving has taken place, the mailbox is empty. Doing this properly means adding delete routines to Python's "mailbox" library, which is part of the Python standard library. I've therefore started work on a patch for Python.
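
For context, the pruning itself is a one-liner per folder; this sketch (mailbox path hypothetical) uses archivemail's --days and --delete options to remove, rather than archive, anything older than 30 days:

$ archivemail --days=30 --delete ~/Mail/lists/debian-user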

Since this is an enhancement, Python would only accept a patch for Python 3. Therefore, eventually, I would also have to port archivemail from Python 2 to 3. "archivemail" is basically abandonware at the moment, and the principal Debian maintainer is MIA. There was a release critical bug filed against it, so I joined the Debian Python team to co-maintain archivemail in Debian. I've worked around the RC bug but a proper fix is still to come.

In other Debian news, I've been mostly quiet. A small patch for squishyball to get it to build on Hurd, and a temporary fix patch for lhasa to get it to build on the build daemons for all architectures (problems with the test suite). All three of lhasa, squishyball and archivemail need a little bit of love to get them into shape before the jessie freeze.

I've had plans to write up some of the more interesting technical things I've been up to at work, but with the huge successes of the School we've been so busy I haven't had time. Hopefully you can soon look forward to some of our further adventures with puppet, including evaluating Shibboleth modules, some stuff about handling user directories, bind mounts and LVM volumes and actually publishing some of our more useful internal modules; I hope we will also (soon) have some useful data to go with our experiments with Linux LXC containers versus KVM-powered virtual machines in some of our use-cases. I've also got a few bits and pieces on Systemd to write up.


Benjamin Mako Hill: Community Data Science Workshops Post-Mortem

Sun, 2014-09-28 00:02

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer "mentors" who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. Interest was overwhelming, and we were ultimately constrained by the number of mentors who volunteered. Unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criterion, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online.

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo, with sponsorship from the Python Software Foundation, using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

Categories: FLOSS Project Planets

DebConf team: Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

Sat, 2014-09-27 14:40

The annual Debian developer meeting took place in Portland, Oregon, 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured a lot of the main talks and discussions for streaming for interactive attendees and for the Debian video archive.

Beyond the videos, presentations, and handouts, attendees provided coverage in blog posts and project updates. We’ve gathered a few articles for your reading pleasure:

Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive.

Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered.

Noah Meyerhans’ adventure to DebConf by train (Inter)netted some interesting IPv6 data for future road and rail warriors.

Hideki Yamane sent a gentle reminder for English speakers to speak more slowly.

Daniel Pocock posted about GSoC talks at DebConf14; highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator.

Thomas Goirand gives us some insight into his working task list of accomplishments and projects at DebConf14, from the OpenStack discussion to tasksel talks, including the completion of some things started last year at DebConf13.

Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also captures the atmosphere of being able to interact directly with peers once a year.

Stefano Zacchiroli blogged about a talk he did on debsources which now has its own HACKING file.

Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor.

Elizabeth Krumbach Joseph’s in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance.

Lucas Nussbaum, by way of a blog post, releases the very first version of Debian Trivia, modelled after the TCP/IP Drinking Game.

François Marier shares additional information and further discussion on Outsourcing your webapp maintenance to Debian.

Joachim Breitner gave a talk on Haskell and Debian and created a new tool for scheduling binNMUs for Haskell packages, which runs via cron job. The output is available for Haskell and for OCaml, and he still found a little time to go dancing.

Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint on what he saw.

Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. Topics included defining a canonical path where packages must be built, and a BoF discussion on reproducible builds from which the conversation moved on to both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were submitted to the BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project; interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on irc.debian.org.

Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. The current jessie d-i needs help with testing on less common architectures and languages, and release scheduling could be improved. Future plans include switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes, and new architecture support. On the debian-cd side, things are working well. Improvement discussions covered selecting which images to make (e.g. netinst, DVD, et al.), in-progress HTTP download support for debian-cd, and regular live test builds. Other discussions and questions revolved around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help, as the team needs assistance with testing, bug-handling, and translations.

Holger Levsen reports on the feedback from his LTS talk at DebConf14. LTS has been well received, fills a demand, and people expect it to continue; however, this is not without a few issues, as Holger explains in greater detail: gatekeeper mechanisms are lacking, and contributions are needed in everything from finance to uploads. In other news, the security-tracker is now fixed to know about oldstable. Time is short for that fix, as once jessie is released the tracker will need to support stable, oldstable (which will be wheezy), and oldoldstable.

Jonathan McDowell’s summary of DebConf14 includes a fair perspective on the host city and the benefits of planning a good DebConf location. He also talks about the need for face time in the Debian project, as it improves everyone’s ability to work together. DebConf14 also provided the chance to set a hard time frame for removing older 1024-bit keys from Debian keyrings.

Steve McIntyre posted a Summary from the “State of the ARM” BoF at DebConf14 with updates on the 3 current ports: armel, armhf and arm64. armel, which targets the soft-float ARM EABI on ARMv4t processors, may eventually be going away, while armhf, which targets the hard-float ARM EABI on ARMv7, is doing well as the cross-distro standard. Debian has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive, and it is hoped it will release with jessie, with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned, with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM’s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on irc.debian.org and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK.

Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf.

Joseph Bisch posted about meeting his GSoC mentors, attending and contributing to a keysigning event, and doing some work on debmetrics, which powers metrics.debian.net. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian.

Harlan Lieberman-Berg’s DebConf Retrospective shared the feel of DebConf and detailed some of his work: debugging a build failure, a few uploads with the pkg-perl team, and a JavaScript slowdown issue in codeeditor.

Ana Guerrero López reflected on Ten years contributing to Debian.

Categories: FLOSS Project Planets