Planet Debian

Planet Debian - http://planet.debian.org/

Andrew Cater

Fri, 2014-11-07 07:09
At mini-Debconf Cambridge:

Much unintentional chaos and hilarity and world class problem solving yesterday.

A routine upgrade from Wheezy to Jessie died horribly on my laptop when the UEFI variable space filled up, leaving "No Operating System" on screen.

Cue much running around: Chris Boot, Colin Walters and Steve dug around, booted the system using a rescue CD and so on. Lots more digging, including helpful posts by mjg59: a BIOS update may solve the problem.

Flashing the BIOS did clear the variables and variable space, and it all worked perfectly thereafter. This had the potential to turn the laptop into a brick under UEFI (though it would still have worked under legacy boot).

As it is, it all worked perfectly - but where else would you get _the_ Grub maintainer, two UEFI experts and a broken laptop all in the same room?

If it hadn't happened yesterday, it would have happened at home and I'd have been left with nothing. As it is, we all learnt/remembered stuff and had a useful time fixing it.
Categories: FLOSS Project Planets

Andrew Cater

Fri, 2014-11-07 06:55
Here at the Debian mini-conf in Cambridge at ARM.

16 developers sat in near-total silence, the only noise coming from keyboards and a server sitting next to me.

Some of them I've seen only on video presentations: one I first knew 20 years ago; another wrote all the HOWTOs I knew a couple of years before that - and was the second Linux user ever.

The release team is in another room behind me: Jessie froze the night before last - whatever else will be said/done, we're on the path to release.

It feels very strange and comforting to see Debian in the round: to be able to talk to someone you might argue with in email and see a real person.

And HUGE thanks to all at ARM for time, effort and chasing around, and to Sledge and Jo - Jo most of all for being stuck on a front desk waiting for people :)


Categories: FLOSS Project Planets

Steinar H. Gunderson: xkcd

Fri, 2014-11-07 06:04

Achievement unlocked: xkcd today has a comic about a product where I was part of the initial launch team. I suppose my work here is done.

Categories: FLOSS Project Planets

Riku Voipio: Adventures in setting up local lava service

Fri, 2014-11-07 04:03
Linaro uses LAVA as a tool to test a variety of devices. So far I had not installed it myself, mostly because I assumed it to be enormously complex to set up. But thanks to Neil Williams' work on packaging, installation has become a lot easier. Follow the Official Install Doc and the Official Install to Debian Doc; the process looks roughly like this:

1. Install Jessie into kvm


kvm -m 2048 -drive file=lava2.img,if=virtio -cdrom debian-testing-amd64-netinst.iso
2. Install lava-server
apt-get update; apt-get install -y postgresql nfs-kernel-server apache2
apt-get install lava-server
# answer debconf questions
a2dissite 000-default && a2ensite lava-server.conf
service apache2 reload
lava-server manage createsuperuser --username default --email=foo.bar@example.com
$EDITOR /etc/lava-dispatcher/lava-dispatcher.conf # make sure LAVA_SERVER_IP is right
That's the generic setup. Now you can point your browser to the IP address of the kvm machine, and log in with the default user and the password you made.

3 ... 1000 Each LAVA instance is customized for the site's boards, network, serial ports, etc. In this example, I now add a single arndale board.


cp /usr/lib/python2.7/dist-packages/lava_dispatcher/default-config/lava-dispatcher/device-types/arndale.conf /etc/lava-dispatcher/device-types/
sudo /usr/share/lava-server/add_device.py -s arndale arndale-01 -t 7001
This generates an almost usable config for the arndale. The site-specific part is that I use USB-to-serial adapters. Outside the kvm, I provide access to the serial ports using the following ser2net config:
7001:telnet:0:/dev/ttyUSB0:115200 8DATABITS NONE 1STOPBIT
7002:telnet:0:/dev/ttyUSB1:115200 8DATABITS NONE 1STOPBIT
TODO: make ser2net not run as root and ensure usb2serial devices always get same name..
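The stable-name half of that TODO is commonly handled with a udev rule. A sketch under assumed values (the idVendor/idProduct/serial attributes below are placeholders; query the real ones with `udevadm info -a -n /dev/ttyUSB0`), written here to a local file but normally installed as /etc/udev/rules.d/99-usb-serial.rules:

```shell
# Placeholder udev rules: pin each USB-serial adapter to a stable symlink
# by its serial number, so ser2net can point at /dev/ttyUSB-arndale-01.
# The attribute values are assumptions; substitute your adapters' real ones.
cat > 99-usb-serial.rules <<'EOF'
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="AR0001", SYMLINK+="ttyUSB-arndale-01"
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="AR0002", SYMLINK+="ttyUSB-arndale-02"
EOF
```

After installing the file, `udevadm control --reload` and re-plugging the adapters should make the stable names appear.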

For automatic power reset, I wanted something cheap, yet something that wouldn't require much soldering (I'm not a real embedded engineer... I prefer the software side ;) ). I discussed this with Hector, who hinted at prebuilt relay boxes. I chose one from eBay, a kmtronic 8-port USB relay. So now I have this cute boxed nonsense hack.

The USB relay is driven with a short script, hard-reset-1


#!/bin/bash
# Pulse relay 1 on the kmtronic board: 9600 baud, frame 0xFF <relay> <state>
stty -F /dev/ttyACM0 9600
echo -e '\xFF\x01\x00' > /dev/ttyACM0  # relay 1 off
sleep 1
echo -e '\xFF\x01\x01' > /dev/ttyACM0  # relay 1 on
Sidenote: if you don't have, or don't want, an automated power relay for LAVA, you can always replace this script with something along the lines of "mpg123 puny_human_press_the_power_button_now.mp3".
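For reference, the magic bytes in hard-reset-1 follow a simple 0xFF <relay> <state> frame. This little sketch (my addition, not part of the original setup) builds the frame for an arbitrary relay number and dumps it with od instead of writing to the serial port:

```shell
# Build the kmtronic 3-byte frame for relay $relay and state $1,
# and show the bytes instead of sending them to /dev/ttyACM0.
relay=1
frame() { printf '\377%b%b' "$(printf '\\%03o' "$relay")" "$(printf '\\%03o' "$1")"; }
frame 0 | od -An -tx1 | xargs   # prints: ff 01 00  (relay 1 off)
frame 1 | od -An -tx1 | xargs   # prints: ff 01 01  (relay 1 on)
```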

Both the serial port and the reset script live on a server with the DNS name aimless. So we take the /etc/lava-dispatcher/devices/arndale-01.conf that add_device.py created and make it look like:


device_type = arndale
hostname = arndale-01
connection_command = telnet aimless 7001
hard_reset_command = slogin lava@aimless -i /etc/lava-dispatcher/id_rsa /home/lava/hard-reset-1
Since in my case I'm only going to test with tftp/nfs boot, the arndale board only needs to be set up with a u-boot bootloader ready on power-on.

Now everything is ready for a test job. I have a locally built kernel and device tree, and I export the directory using the httpd available by default in Debian... Python!


cd out/
python -m SimpleHTTPServer
Go to the LAVA web server, select API -> Tokens and create a new token. Next we add the token and use it to submit a job:
$ sudo apt-get install lava-tool
$ lava-tool auth-add http://default@lava-server/RPC2/
$ lava-tool submit-job http://default@lava-server/RPC2/ lava_test.json
submitted as job id: 1
$
The first job should now be visible in the LAVA web frontend, under Scheduler -> Jobs. If everything goes fine, the relay will click in a moment and the job will finish in a few minutes.
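The contents of lava_test.json aren't shown above. Purely as an illustration, a minimal v1-style job might look like the sketch below; the action names, URLs and fields here are my assumptions, so check the LAVA job schema documentation for the real format:

```shell
# Hypothetical minimal LAVA job definition (all field values are assumptions).
cat > lava_test.json <<'EOF'
{
    "job_name": "arndale-tftp-nfs-boot",
    "device_type": "arndale",
    "timeout": 900,
    "actions": [
        {
            "command": "deploy_linaro_kernel",
            "parameters": {
                "kernel": "http://aimless:8000/zImage",
                "dtb": "http://aimless:8000/exynos5250-arndale.dtb",
                "nfsrootfs": "http://aimless:8000/rootfs.tar.gz"
            }
        },
        { "command": "boot_linaro_image" }
    ]
}
EOF
```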
Categories: FLOSS Project Planets

Johannes Schauer: automatically suspending cpu hungry applications

Fri, 2014-11-07 03:51

TLDR: using the awesome window manager, how to automatically send SIGSTOP and SIGCONT to application windows when they get unfocused or focused, respectively, so that the application doesn't waste CPU cycles when not in use.

I don't require any fancy looking GUI, so my desktop runs no full-blown desktop environment like Gnome or KDE but instead only awesome as a light-weight window manager. Usually, the only application windows I have open are rxvt-unicode as my terminal emulator and firefox/iceweasel with the pentadactyl extension as my browser. Thus, I would expect the CPU usage of my idle system to be pretty much zero, but instead firefox decides to constantly eat 10-15%. Probably to update some GIF animations or JavaScript (or nowadays even HTML5 video animations). But I don't need it to do that when I'm not currently looking at my browser window. Disabling all JavaScript is not an option because some websites that I need for uni or work are just completely broken without JavaScript, so I have to enable it for those websites.

Solution: send SIGSTOP when my firefox window loses focus and send SIGCONT once it gains focus again.
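The mechanism is easy to sanity-check by hand in a shell before wiring it into the window manager (a standalone demo, not part of the awesome config):

```shell
# Freeze a process, observe its state flip to 'T' (stopped), then thaw it.
sleep 300 &                                 # stand-in for a CPU-hungry app
pid=$!
kill -STOP "$pid"
sleep 1                                     # give the state change a moment
state=$(ps -o state= -p "$pid" | tr -d ' ')
echo "state after SIGSTOP: $state"          # prints: state after SIGSTOP: T
kill -CONT "$pid"
kill "$pid"                                 # clean up the helper process
```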

The following addition to my /etc/xdg/awesome/rc.lua does the trick:

local capi = { timer = timer }

client.add_signal("focus", function(c)
    if c.class == "Iceweasel" then
        awful.util.spawn("kill -CONT " .. c.pid)
    end
end)

client.add_signal("unfocus", function(c)
    if c.class == "Iceweasel" then
        local timer_stop = capi.timer { timeout = 10 }
        local send_sigstop = function ()
            timer_stop:stop()
            if client.focus.pid ~= c.pid then
                awful.util.spawn("kill -STOP " .. c.pid)
            end
        end
        timer_stop:add_signal("timeout", send_sigstop)
        timer_stop:start()
    end
end)

Since I'm running Debian, the class is "Iceweasel" and not "Firefox". When the window gains focus, a SIGCONT is sent immediately. I'm executing kill because I don't know how to send UNIX signals from lua directly.

When the window loses focus, the SIGSTOP signal is only sent after a 10 second timeout. This is done for several reasons:

  • I don't want firefox to stop when I'm just quickly switching back and forth between it and other application windows.
  • When firefox starts, it doesn't have a window for a short time. Without a timeout, the process would start but immediately be stopped, as there is no window to hold focus.
  • When using the X paste buffer, the application behind the source window must not be stopped while pasting content from it. I assume I will not spend more than 10 seconds between marking a string in firefox and pasting it into another window.

With this change, when I now open htop, the process consuming most CPU resources is htop itself. Success!

Another cool advantage is that firefox can now be moved completely into swap space when I run otherwise memory-hungry applications, without requiring any memory from swap until I really use it again.

I haven't encountered any disadvantages of this setup yet. If 10 seconds prove too short for copy and paste, I can easily extend the delay. Even clicking on links in my terminal works flawlessly - the new tab will just load once firefox gains focus again.

EDIT: thanks to Helmut Grohne for suggesting to compare the pid instead of the raw client instance to prevent misbehaviour when firefox opens additional windows like the preferences dialog.

Categories: FLOSS Project Planets

Russ Allbery: On tolerating personal abuse

Fri, 2014-11-07 00:04

While I don't consider myself part of the science fiction community directly (my con-going days are probably over), I do follow it across a wide variety of blogs. There are a lot of hard conversations and considerable soul-searching going on right now concerning an on-line commentator in that community who had been nasty and vicious to people, but originally for reasons that many people thought were good causes. (I had been one of those people. It's always very, very tempting to appreciate a good vitriolic rant from someone who shares your world view. And very easy to lose track of the people those rants are aimed at, or the excesses to which those rants go.)

I'm not going to go into the details of the SF community issues here, since I have no context other than what I've read, and it's something to work out within that community. But I've been taking it as a useful reminder that abusive behavior is not acceptable, even if it comes from people who are arguably on your side.

Anger is important. Anger is often how the world changes. But anger and abuse are not the same thing.

I wrote something a little bit ago in a different context. Given that reminder, and given some of the arguments that are going on in the free software community as well, it seemed like a good idea to post a somewhat edited version of it in a more public place:

None of us should be willing to continue to participate in a project in which we're expected to tolerate being abused and attacked, and all consequences of that abuse are our problem to deal with. It is simply not fun, and not motivating, and not interesting, and does not lead to us doing good work.

I say this from lots of hard-won personal experience. I was deeply involved in Usenet governance for many years. I have made all the same arguments that I see today in favor of "blunt" speech, vitriol, and attacks. I have been a passionate advocate for the "free speech" approach. I have told other people to just filter and use killfiles. I have said that words aren't worth getting worked up about, and it's easy to ignore people.

I was wrong.

It took a long time for me to figure out that I was wrong, not just for other people, but for myself as well. It took me much longer to walk away from Usenet governance for good because the environment was too toxic. It remains one of the best decisions I have ever made in my life. It was the best bit of self-care that I ever did.

I learned from that experience, and earlier this year, I walked away from a job for related reasons. I enjoyed the work, the job was much easier than the job that I have now, and I had a lot of time to work on free software and on Debian, but the emotional environment was toxic. (It was not as openly abusive, but it was an environment of disrespect, hierarchical dominance games, fear, blame, and emotional blackmail.) As a result, I've had to shift priorities considerably, but I'm a much happier person. It's worth having less time for things that I was previously enjoying to not to have to deal with an emotionally negative and confrontational environment. Life is too short, and I have the luxury of having choices.

Both of those incidents taught me that it's very easy for me to leave that sort of situation to fester for too long, and that I underestimate how much of an improvement it is for my quality of life to walk away from abusive and negative emotional situations. I am belatedly learning how to be more ready and willing to do this.

I very much understand the people who are concerned with ensuring there is space for strongly-worded opinions and heartfelt anger.

But we have to draw a line, and that line needs to rule out emotional abuse of people in our community even in the name of passionate polemics about something that's important. We have to enforce that line, and if that means ejecting people from our community, that's what we have to do. Because, if we don't, we're also ejecting people from our community: the quiet people, the people who are just trying to get work done, or the people who have had past experience with abusive environments and understand the need to bail when an environment starts going in that direction.

I'm not going to put up with the sort of environment I put up with when doing Big Eight newsgroup creation. I'm not saying this as some sort of threat -- I'm saying this to try to be very clear that not standing up for the members of our community and not supporting each other against abuse and emotional attacks also has consequences, and will destroy that community for a lot of us. I'm saying that I am not interested in living in an environment of fear and blame. And one should never underestimate the human power of giving people space and community in which they can be comfortable, relaxed, and truly happy.

Walking the line between this stance and the "tone argument," in which people who are being abused or disenfranchised are attacked for being angry, is very difficult. There are some helpful rules of thumb, such as distinguishing between punching up and punching down, but those rules of thumb can fail or be distorted, as the SF community is learning. It's important that people be able to express anger. It's also important that people be able to name names and identify specific behaviors that they believe are worthy of that anger. But when that anger escalates into attacks, there is a real danger that passionate righteousness turns into passionate abusiveness, a danger of losing the sense of community in our own sense of righteousness. And that's not something we can or should accept.

It's going to take a lot of hard work, a lot of open conversation, and a lot of empathy and care to find that line. There are many things in the world right now that should provoke anger, and with that anger comes power for good. But with that anger can also come a destructive blindness. The anger I want is the anger that drives us to change the world together, the anger that leads to confronting others with a reflection of their own better natures and challenging them to become better, more compassionate people. The anger that leads a man to feed the homeless in the true meaning of civil disobedience. Not the anger that crushes our enemies.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RcppRedis 0.1.2

Thu, 2014-11-06 21:34

A new release of RcppRedis is now on CRAN. It contains additional commands for hashes and sets, all contributed by John Laing and Whit Armstrong.

Changes in version 0.1.2 (2014-11-06)
  • New commands execv, hset, hget, sadd, srem, and smembers contributed by John Laing and Whit Armstrong over several pull requests.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Daniel Kahn Gillmor: GnuPG 2.1.0 in debian experimental

Thu, 2014-11-06 18:27
Today, i uploaded GnuPG 2.1.0 into debian's experimental suite. It's built for amd64 and i386 and powerpc already. You can monitor its progress on the buildds to see when it's available for your architecture.

Changes

GnuPG 2.1 offers many new and interesting features, but one of the most important changes is the introduction of elliptic curve crypto (ECC). While GnuPG 2.1 discourages the creation of ECC keys by default, it's important that we have the ability to verify ECC signatures and to encrypt to ECC keys if other people are using this tech. It seems likely, for example, that Google's End-To-End Chrome OpenPGP extension will use ECC. GnuPG users who don't have this capability available won't be able to communicate with End-To-End users.

There are many other architectural changes, including a move to more daemonized interactions with the outside world, including using dirmngr to talk to the keyservers, and relying more heavily on gpg-agent for secret key access. The gpg-agent change is a welcome one -- the agent now holds the secret key material entirely and never releases it -- as of 2.1 gpg2 never has any asymmetric secret key material in its process space at all.

One other nice change for those of us with large keyrings is the new keybox format for public key material. This provides much faster indexed access to the public keyring.

I've been using GnuPG 2.1.0 betas regularly for the last month, and i think that for the most part, they're ready for regular use.

Timing for debian

The timing between the debian freeze and the GnuPG upstream is unfortunate, but i don't think i'm prepared to push for this as a jessie transition yet, without more backup. I'm talking to other members of the GnuPG packaging team to see if they think this is worth even bringing to the attention of the release team, but i'm not pursuing it at the moment.

If you really want to see this in debian jessie, please install the experimental package and let me know how it works for you.

Long term migration concerns

GnuPG upstream is now maintaining three branches concurrently: modern (2.1.x), stable (2.0.x), and classic (1.4.x). I think this stretches the GnuPG upstream development team too thin, and we should do what we can to help them transition to supporting fewer releases concurrently.

In the long term, I'd ultimately like to see gnupg 2.1.x replace all use of gpg 1.4.x and gpg 2.0.x in debian, but that's unlikely to happen right now.

In particular, the following two bugs make it impossible to use my current, common monkeysphere workflow:

And GnuPG 2.1.0 drops support for the older, known-weak OpenPGPv3 key formats. This is an important step for simplification, but there are a few people who probably still need to use v3 keys for obscure/janky reasons, or have data encrypted to a v3 key that they need to be able to decrypt. Those people will want to have GnuPG 1.4 around.

Call for testing

Anyway, if you use debian testing or unstable, and you are interested in these features, i invite you to install `gnupg2` and its friends from experimental. If you want to be sensibly conservative, i recommend backing up `~/.gnupg` before trying to use it:

cp -aT .gnupg .gnupg.bak
sudo apt install -t experimental gnupg2 gnupg-agent dirmngr gpgsm gpgv2 scdaemon

If you find issues, please file them via the debian BTS as usual. I (or other members of the pkg-gnupg team) will help you triage them to upstream as needed.

Tags: ecc, experimental, gnupg

Categories: FLOSS Project Planets

Junichi Uekawa: I was looking for a way to lock and reboot from gnome.

Thu, 2014-11-06 16:43
I was looking for a way to lock and reboot from gnome, and found out that those services are provided by GDM, so they didn't work under lightdm.

Categories: FLOSS Project Planets

Steve Kemp: Planning how to configure my next desktop

Thu, 2014-11-06 16:26

I recently setup a bunch of IPv6-only accessible hosts, which I mentioned in my previous blog post.

In the end I got them talking to the IPv4/legacy world via the installation of an OpenVPN server - they connect over IPv6, get a private 10.0.0.0/24 IP address, and that is masqueraded via the OpenVPN gateway.

But the other thing I've been planning recently is how to configure my next desktop system. I generally do all development, surfing, etc, on one desktop system. I use virtual desktops to organize things, and I have a simple scripting utility to juggle windows around into the correct virtual-desktop as they're launched.

Planning a replacement desktop means installing a fresh desktop, then getting all the software working again. These days I'd probably use docker images to do development within, along with a few virtual machines (such as the pbuilder host I used to release all my Debian packages).

But there are still niggles. I'd like to keep the base system lean, with few packages, but you can't run xine remotely, similarly I need mpd/sonata for listening to music, emacs for local stuff, etc, etc.

In short there is always the tendency to install yet-another package, service, or application on the desktop, which makes migration a pain.

I'm not sure I could easily avoid that, but it is worth thinking about. I guess I could configure a puppet/slaughter/cfengine host and use that to install the desktop - but I've always done desktops "manually" and servers "magically" so it's a bit of a change in thinking.

Categories: FLOSS Project Planets

Michal Čihař: Weblate 2.0

Thu, 2014-11-06 08:00

Weblate 2.0 has been released today. It comes with a lot of improvements in the backend and a completely new user interface.

Full list of changes for 2.0:

  • New responsive UI using Bootstrap.
  • Rewritten VCS backend.
  • Documentation improvements.
  • Added whiteboard for site wide messages.
  • Configurable strings priority.
  • Added support for JSON file format.
  • Fixed generating mo files in certain cases.
  • Added support for GitLab notifications.
  • Added support for disabling translation suggestions.
  • Django 1.7 support.
  • ACL projects now have user management.
  • Extended search possibilities.
  • Give more hints to translators about plurals.
  • Fixed Git repository locking.
  • Compatibility with older Git versions.
  • Improved ACL support.
  • Added buttons for per language quotes and other special chars.
  • Support for exporting stats as JSONP.

You can find more information about Weblate on http://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Ready-to-run appliances will soon be available in the SUSE Studio Gallery.

Weblate is also being used at https://hosted.weblate.org/ as the official translating service for phpMyAdmin, Gammu, Weblate itself and others.

If you are a free software project which would like to use Weblate, I'm happy to help you with the setup or even host Weblate for you.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far!

Filed under: English phpMyAdmin SUSE Weblate

Categories: FLOSS Project Planets

Matthias Klumpp: The state of AppStream/GNOME-Software in Debian Jessie

Thu, 2014-11-06 05:34

… or “Why do I not see any update notifications on my brand-new Debian Jessie installation??”

This is a short explanation of the status quo, and also explains the “no update notifications” issue in a slightly more detailed way, since I am already getting bug reports for that.

As you might know, GNOME provides GNOME-Software for installation of applications via PackageKit. In order to work properly, GNOME-Software needs AppStream metadata, which is not yet available in Debian. There was a GSoC student working on the necessary code for that, but the code is not ready and doesn't produce good results yet. Therefore, I postponed AppStream integration to Jessie+1, with an option to include some metadata for GNOME and KDE to use via a normal .deb package.

Then, GNOME was updated to 3.14. GNOME 3.14 moved lots of stuff into GNOME-Software, including the support for update-notifications (which have been in g-s-d before). GNOME-Software is also the only thing which can edit the application groups in GNOME-Shell, at least currently.

So obviously, there was now a much stronger motivation to support GNOME-Software in Jessie. The appstream-glib library, which GNOME-Software uses exclusively to read AppStream metadata, for a while didn't support the DEP-11 metadata format which Debian uses in place of the original AppStream XML, but does so in its current development branch. So that component had to be packaged first. Later, GNOME-Software was uploaded to the archive as well, but still lacked the required metadata. That data was provided by me as a .deb package later, locally generated using the current code by my SoC student (the data isn't great, but better than nothing). So far, the good news.

But there are multiple issues at this time. First of all, the appstream-data package hasn't passed NEW so far, due to its complex copyright situation (nothing we can't resolve, since app-install-data, which appstream-data would replace, is in Debian as well). Also, GNOME-Software is exclusively using offline updates (more information also on [1] and [2]) at the moment. This isn't always working yet, since I haven't had the time to test it properly, and I didn't expect it to be used in Debian Jessie anyway[3].

Furthermore, the offline-updates feature requires systemd (which isn’t an issue in itself, I am quite fine with that, but people not using it will get unexpected results, unless someone does the work to implement offline-updates with sysvinit).

Since we are in freeze at the moment, and obviously this stuff is not ready yet, GNOME is currently without update notifications and without a way to change the shell application groups.

So, how can we fix this? One way would of course be to patch notification support back into g-s-d, if the new layout there allows doing that. But that would not give us the other features GNOME-Software provides, like application-group-editing.

Implementing that differently and patching it to work would take more effort than, or at least as much as, making GNOME-Software run properly. I therefore prefer getting GNOME-Software to run, at least with basic functionality. That would likely mean hiding things like the offline-update functionality, and using online updates with GNOME-PackageKit instead.

Obviously, this approach has its own issues, like doing most of the work post-freeze, which kind of defeats the purpose of the freeze and would need some close coordination with the release team.

So, this is the status quo at the moment. It is kind of unfortunate that GNOME moved crucial functionality into a new component which requires additional integration work by distributors so quickly, but that's not worth dwelling on. We need a way forward to bring update notifications back, and there is currently work going on to do that. For all Debian users: please be patient while we resolve the situation, and sorry for the inconvenience. For all developers: if you would like to help, please contact me or Laurent Bigonville; there are some tasks which could use some help.

As a small remark: if you are using KDE, you are lucky - Apper provides the notification support like it always did, and thanks to improvements in aptcc and PackageKit, it is even a bit faster now. For the Xfce and <other_desktop> people: check whether your desktop provides integration with PackageKit for update checking. At least Xfce doesn't, but after GNOME-PackageKit removed support for it (which was moved to gnome-settings-daemon and now to GNOME-Software), nobody has stepped up to implement it yet (so if you want to do it - it's not super-complicated, but knowledge of C and GTK+ is needed).

----

[3]: It looks like dpkg tries to ask a debconf question for some reason, or an external tool like apt-listchanges is interfering with the process, which must run completely unsupervised. There is some debugging needed to resolve these Debian-specific issues.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: A Software Carpentry workshop at Northwestern

Wed, 2014-11-05 21:44

On Friday October 31, 2014, and Saturday November 1, 2014, around thirty-five graduate students and faculty members attended a Software Carpentry workshop. Attendees came primarily from the Economics department and the Kellogg School of Management, which was also the host and sponsor providing an excellent venue with the Allen Center on the (main) Evanston campus of Northwestern University.

The focus of the two-day workshop was an introduction, and practical initiation, to working effectively at the shell, getting introduced and familiar with the git revision control system, as well as a thorough introduction to working and programming in R---from the basics all the way to advanced graphing as well as creating reproducible research documents.

The idea of the workshop had come out of a discussion during our R/Finance 2014 conference. Bob McDonald of Northwestern, one of this year's keynote speakers, and I were discussing various topics related to open source and good programming practice --- as well as the lack of a thorough introduction to either for graduate students and researchers. And once I mentioned and explained Software Carpentry, Bob was rather intrigued. And just a few months later we were hosting a workshop (along with outstanding support from Jackie Milhans from Research Computing at Northwestern).

We were extremely fortunate in that Karthik Ram and Ramnath Vaidyanathan were able to come to Chicago and act as lead instructors, giving me an opportunity to get my feet wet. The workshop began with a session on shell and automation, which was followed by three sessions focusing on R: a core introduction, a session focused on functions, and, to end the day, a session on the split-apply-combine approach to data transformation and analysis.

The second day started with two concentrated sessions on git and the git workflow. In the afternoon, a session on visualization with R as well as a capstone-like session on reproducible research rounded out the second day.

Things that worked

The experience of the instructors showed, as the material was presented in an effective manner. The selection of topics, as well as the pace, was seen by most students as appropriate and just right. Karthik and Ramnath both did an outstanding job.

No students experienced any real difficulties installing software, or using the supplied information. Participants were roughly split between Windows and OS X laptops, and had generally no problem with bash, git, or R via RStudio.

The overall Software Carpentry setup, the lesson layout, the focus on hands-on exercises along with instruction, the use of the electronic noteboard provided by etherpad and, of course, the tried-and-tested material worked very well.

Things that could have worked better

Even more breaks for exercises could have been added. Students had difficulty staying on pace in some of the exercises: once someone fell behind, even from a small error (maybe a typo), it was sometimes hard to catch up. That is a general problem for hands-on classes. I feel I could have done better with the scope of my two sessions.

Even more cohesion among the topics could have been achieved via a single continually used example dataset and analysis.

Acknowledgments

Robert McDonald from Kellogg and Jackie Milhans from Research Computing IT were superb hosts and organizers. Their help in preparing for the workshop was tremendous, and the choice of venue was excellent and allowed for a stress-free two days of classes. We could not have done this without Karthik and Ramnath, so a very big Thank You to both of them. Last but not least, the Software Carpentry 'head office' was always ready to help Bob, Jackie and myself during the earlier planning stages, so another big Thank You! to Greg and Arliss.

Categories: FLOSS Project Planets

Dirk Eddelbuettel: RPushbullet 0.1.1

Wed, 2014-11-05 21:17

A minor bugfix release 0.1.1 of the RPushbullet package (interfacing the neat Pushbullet service) landed on CRAN yesterday morning.

It cleans up a small issue with transferring files between devices via the Pushbullet service: the ability to select a (non-default) target device has been restored.

With that, allow me to borrow one excellent use case of RPushbullet from the blog of Karl Broman: how to use RPushbullet for error notifications from R:

options(error = function() {
    library(RPushbullet)
    pbPost("note", "Error", geterrmessage())
})

This is very clever: should an error occur, you get an immediate notification in your browser or on your phone. Left as an exercise for the reader is to combine this with the equally excellent rfoaas package (github|cran) to get appropriately colourful error messages...

More details about the package are at the RPushbullet webpage and the RPushbullet GitHub repo.

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Categories: FLOSS Project Planets

Lisandro Damián Nicanor Pérez Meyer: Early announce: Qt4 removal in Jessie+1

Wed, 2014-11-05 20:56
We, the Debian Qt/KDE Team, want to announce early [maintainer warning] our decision to remove Qt4 from Jessie+1. This warning is mostly targeted at upstreams.

Qt4 has been deprecated since Qt5's first release on December 19th 2012, almost two years ago!

So far there have been bugfix-only releases, but upstream has announced that this support will end in August 2015. That already means a special effort on our part for Jessie from that point on in case RC bugs appear, so keeping Qt4 in Jessie+1 is simply a no-go.

Some of us were involved in various Qt4-to-Qt5 migrations [0] and we know for sure that porting stuff from Qt4 to Qt5 is much, much easier and less painful than it was from Qt3 to Qt4.

We also understand that there is still a lot of software using Qt4. To ease the transition we have provided Wheezy backports of Qt5.

Don't forget to take a look at the C++ API change page [1] whenever you start porting your application.

[0] http://perezmeyer.blogspot.com.ar/2014/03/porting-qt-4-apps-to-qt-5-example-with.html
[1] http://qt-project.org/doc/qt-5.0/qtdoc/sourcebreaks.html

[maintainer warning] **Remember the freeze** and do not upload packages ported to Qt5 to unstable. The best thing you can do now is to ask your upstream if the code can be compiled against Qt5 and, why not, try it yourself.

Our first priority now is to release Jessie, and this is why this is an early announcement.
Categories: FLOSS Project Planets

Raphaël Hertzog: My Free Software Activities in October 2014

Wed, 2014-11-05 10:19

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donators (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Packaging work

With the Jessie freeze approaching, I took care of packaging some new upstream releases that I wanted to get in. I started with zim 0.62; I had skipped 0.61 due to some annoying regressions. Since I had two bugs to forward, I took the opportunity to reach out to the upstream author to see if he had some important fixes to get into Jessie. This resulted in me pushing another update with 3 commits cherry-picked from the upstream VCS. I also sponsored a wheezy-backports upload of the new version.

I pushed two new bugfix releases of Publican (4.2.3 and 4.2.6) but I had to include a work-around for a bug that I reported earlier on docbook-xml (#763598: the XML catalog doesn’t allow libxml2/xmllint to identify the local copy of some entities files) and that is unlikely to be fixed in time for Jessie.

Last but not least, I pushed the first point release of Django 1.7, aka version 1.7.1 to unstable and asked release managers to ensure it migrates to testing before the real freeze. This is important because the closer we are to upstream, the easier it is to apply security patches during the lifetime of Jessie (which will hopefully be 5 years, thanks to Debian LTS!). I also released a backport of python-django 1.7 to wheezy-backports.

I sponsored galette 0.7.8+dfsg-1 fixing an RC bug so that it can get back to testing (it got removed from testing due to the bug).

Debian LTS

See my dedicated report for the paid work I did in that area. Apart from that, I took some time to get in touch with all the Debian consultants to see if they knew some companies to reach out to. There are a few new sponsors in the pipe thanks to this, but given the large set of people it represents, I was expecting more. I used this opportunity to report all bogus entries (i.e. bouncing emails, broken URLs) to the maintainer of the said webpage.

Distro Tracker

Only 30 commits this month, with almost no external contributions. I’m a bit saddened by this situation because it’s not very difficult to contribute to this project and we have plenty of easy bugs to get you started.

That said I’m still happy with the work done. Most of the changes have been made for Kali but will be useful for all derivatives: it’s now possible to add external repositories in the tracker and not display them in the list of available versions, and not generate automatic news about those repositories. There’s a new “derivative” application which is only in its infancy but can already provide a useful comparison of a derivative with its parent. See it in action on the Kali Package Tracker: http://pkg.kali.org/derivative/ Thanks to Offensive Security which is sponsoring this work!

Since I have pushed Django 1.7 to wheezy-backports, all distro tracker instances that I manage are now running that version of Django, and I opted to make that version mandatory. This made it possible to add initial Django migrations and rely on this new feature for future database schema upgrades (I have voluntarily avoided schema changes up to now to avoid problems migrating from South to Django migrations).

Thanks

See you next month for a new summary of my activities.


Categories: FLOSS Project Planets

Carl Chenet: Send the same short message on Twitter, Pump.io, Diaspora*… and a lot more

Wed, 2014-11-05 09:58


This is a feedback about installing a self hosted instance of Friendica on a Debian server (Jessie). If you’re not interested in why I use Friendica, just go to « Prerequisite for Friendica » section below.

Frustration about social networks

Being a huge user of short messages, I was quite frustrated to spend so much time on my Twitter account. To be quite honest, there is not much I like about this social network, except the huge population of people potentially interested in what I write.

I have also been using Identi.ca (now powered by Pump.io) for a while. But managing both networks, Pump.io and Twitter, by hand was quite painful. And something was telling me that another social network would appear from nowhere one of these days and it would be just horrible to try to keep up this way.

So I was looking for a "scalable" solution that would not require too much personal investment. When I subscribed to Diaspora* some days ago on the Framasphere pod, tintouli told me to try Friendica.

Hmmm, what’s Friendica?

Friendica is a content manager you can plug on almost anything: social networks (Facebook, Twitter, Pump.io, Diaspora*,…), but also WordPress, XMPP, emails… I’m in fact just discovering the power of this tool but to plug it on my different social network accounts was quite a good use case for me. And I guess if you’re still reading, for you too.

I tried some shared public servers but I was not quite happy with the result: either one connector was missing or the public servers were really unstable. So I’m at last self-hosting my Friendica. Here is how.

Prerequisite for Friendica

You need to install the following packages:

# apt-get install apache2 libapache2-mod-php5 php5 php5-curl php5-gd php5-mysql mysql-server git

Having an already self-modified /etc/php5/apache2/php.ini, I encountered a small issue with libCurl and had to manually add the following line to the php.ini:

extension=curl.so

Setting up MySQL

Connect to MySQL and create an empty database with a dedicated user:

# mysql -u root -pV3rYS3cr3t -e "create database friendica; GRANT ALL PRIVILEGES ON friendica.* TO friendica@localhost IDENTIFIED BY 'R3AlLyH4rdT0Gu3ss'"

Setting up Apache

My server hosts several services, so I use a subdomain, friendica.mydomain.com. If you use a subdomain, check that you have declared it in your DNS zone.

I use SSL encryption with a wildcard certificate for all my subdomains. My Friendica data are stored in /var/www/friendica. Here is my virtual host configuration for Friendica, stored in the file /etc/apache2/sites-available/friendicassl.conf:

<VirtualHost *:443>
ServerName friendica.mydomain.com
DocumentRoot /var/www/friendica/
DirectoryIndex index.php index.html
ErrorLog /var/log/apache2/friendica-error-ssl.log
TransferLog /var/log/apache2/friendica-access-ssl.log

SSLEngine on
SSLCertificateFile /etc/ssl/certs/mydomain/mydomain.com.crt
SSLCertificateKeyFile /etc/ssl/private/mydomain.com.key
SSLVerifyClient None

<Directory /var/www/friendica/>
AllowOverride All
Options FollowSymLinks
Order allow,deny
Allow from all
</Directory>
</VirtualHost>

After writing the configuration file, just launch the following commands and it should be good for the Apache configuration:

# a2ensite friendicassl && /etc/init.d/apache2 reload

Setting up Friendica

Get the master zip file of Friendica, copy it to your server and decompress it. Something like:

# cd /var/www/ && wget https://github.com/friendica/friendica/archive/master.zip && unzip master.zip && mv friendica-master friendica

You need to give www-data (the Apache user) the right to write in /var/www/friendica/view/smarty3/:

# chown -R www-data:www-data /var/www/friendica/view/smarty3 && chmod -R ug+w /var/www/friendica/view/smarty3

Ok, I guess we’re all set, let’s launch the installation process! Using your web browser, connect to friendica.mydomain.com. As a first step you’ll see the installation window, which checks the prerequisites before installing. Install anything that is missing.

First window of the Friendica installation process

The second step asks for the host/user/password of the database; fill them in and the installation process starts. Hopefully all goes just fine.

Next you’ll have to create /var/www/friendica/.htconfig.php with the content that the last page of the installation process provides. Just copy/paste, check the rights of this file, and now you can connect again to see the register page of Friendica at the URL https://friendica.mydomain.com/register . Pretty cool!

Register a user

That’s a fairly easy step. Just check beforehand that your server is able to send emails, because the password is going to be sent to you by email. If that is ok, you can now log in on the welcome page of Friendica and access your account. That’s a huge step towards broadcasting your short messages everywhere, but a few steps remain before you can send them to all the social networks you need.

A small break: create an app for Twitter on apps.twitter.com

To send your short messages to Twitter, you need to create an app on apps.twitter.com. Just check that you’re logged in to Twitter and connect to apps.twitter.com. Create an app with a (apparently) unique name, then go to the Keys and Access Tokens page and note the consumer key and the consumer secret. You’ll need the name of the app, the consumer key and the consumer secret later.

Install and configure the addons

Friendica uses an addon system in order to plug into the different third parties it needs. We are going to configure the Twitter, Pump.io and Diaspora* plugins. Let’s go back to our server and launch some commands:

# cd /tmp && git clone https://github.com/friendica/friendica-addons.git && cd friendica-addons

# tar xvf twitter.tgz -C /var/www/friendica/addon

# tar xvf pumpio.tgz -C /var/www/friendica/addon

# cp -a diaspora /var/www/friendica/addon

You need to modify your /var/www/friendica/.htconfig.php file and add the following content at the end:

// names of your addons, separated by a comma
$a->config['system']['addon'] = 'pumpio, twitter, diaspora';

// your Twitter consumer key
$a->config['twitter']['consumerkey'] = 'P4Jl2Pe4j7Lj91eIn0AR8vIl2';

// your Twitter consumer secret
$a->config['twitter']['consumersecret'] = '1DnVkllPik9Ua8jW4fncxwtXZJbs9iFfI5epFzmeI8VxM9pqP1';

// your Twitter app name
$a->config['twitter']['application_name'] = "whatever-twitter";

Connect again to Friendica. Go to Settings => Social Networks; you will see the options for Twitter, Pump.io and Diaspora*. Complete the requested information for each of them. Important options you should not forget to check are:

Pump.io

  • Enable pump.io Post Plugin
  • Post to pump.io by default
  • Should posts be public?

Twitter

  • Authorize publication on Twitter
  • Post to Twitter by default

Diaspora*

  • Post to Diaspora by default

Done? Now it’s time to send your first broadcasted short message. Yay!

Send a short message to your different social networks

Connect to Friendica, click on the Network page, and write your short message in the "Share" box. Click on the lock and you’ll see the following setup:

It means your short messages will be broadcast to the three networks. Or more, it’s up to you! That’s my setup, feel free to modify it. Now close the lock window and send your message. For me it takes some time to appear on Twitter and Diaspora*, while it appears immediately on Identi.ca.

Last words

Friendica lets you take back control of your data by broadcasting content to different media from a single source. By self-hosting, you keep your data whatever happens and are not subject to companies losing it, as Twitpic did recently. Moreover, the philosophy behind Friendica pushed me to dig in and test the solution.

What about you? How do you broadcast your short messages? Does Friendica offer a good solution in your opinion? Are you interested in the philosophy behind this project? Feel free to share your thoughts in the comments.

LAST MINUTE: hey, this article is on Hacker News, don’t hesitate to vote for it if you liked it!


Categories: FLOSS Project Planets

Peter Eisentraut: Checking whitespace with Git

Tue, 2014-11-04 20:00

Whitespace matters.

Git has support for checking whitespace in patches. git apply and git am have the option --whitespace, which can be used to warn or error about whitespace errors in the patches about to be applied. git diff has the option --check to check a change for whitespace errors.

But all this assumes that your existing code is cool, and only new changes are candidates for problems. Curiously, it is a bit hard to use those same tools for going back and checking whether an existing tree satisfies the whitespace rules applied to new patches.

The core of the whitespace checking is in git diff-tree. With the --check option, you can check the whitespace in the diff between two objects.

But how do you check the whitespace of a tree rather than a diff? Basically, you want

git diff-tree --check EMPTY HEAD

except there is no EMPTY. But you can compute the hash of an empty Git tree:

git hash-object -t tree /dev/null

So the full command is

git diff-tree --check $(git hash-object -t tree /dev/null) HEAD
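A handy sanity check: under git's default SHA-1 object format, the empty tree always hashes to the same well-known value, so you can verify what the command substitution expands to:

```shell
# The empty tree hashes to a constant, well-known object ID (SHA-1 repositories):
EMPTY=$(git hash-object -t tree /dev/null)
echo "$EMPTY"
# 4b825dc642cb6eb9a060e54bf8d69288fbee4904
```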

I have this as an alias in my ~/.gitconfig:

[alias]
    check-whitespace = !git diff-tree --check $(git hash-object -t tree /dev/null) HEAD

Then running

git check-whitespace

can be as easy as running make or git commit.
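To guard new commits the same way, a pre-commit hook only needs git diff --cached --check, which inspects exactly what is staged and exits non-zero on whitespace errors. A minimal sketch in a throwaway scratch repository (all paths and identities below are made up for the demonstration):

```shell
# Demonstration in a scratch repository (paths and identity are throwaway):
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name you
git commit -q --allow-empty -m init        # so there is a HEAD to diff against
printf 'trailing space \n' > file.txt      # note the trailing blank
git add file.txt
# --check exits non-zero when the staged changes contain whitespace errors:
git diff --cached --check && echo clean || echo whitespace-error
```

Putting "exec git diff --cached --check" into an executable .git/hooks/pre-commit file then rejects such commits before they happen.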

Categories: FLOSS Project Planets

Marco d'Itri: My position on the "init system coupling" General Resolution

Tue, 2014-11-04 13:36

I first want to clarify for the people not intimately involved with Debian that the GR-2014-003 vote is not about choosing the default init system or deciding if sysvinit should still be supported: its outcome will not stop systemd from being Debian's default init system and will not prevent any interested developers from supporting sysvinit.

Some non-developers have recently threatened to "fork Debian" if this GR does not pass, apparently without understanding the concept well: Debian welcomes forks, and I think that having more users working on free software would be great, no matter which init system they favour.

The goal of Ian Jackson's proposal is to force the maintainers who want to use the superior features of systemd in their packages to spend their time on making them work with sysvinit as well. This is antisocial and also hard to reconcile with the Debian Constitution, which states:

2.1.1 Nothing in this constitution imposes an obligation on anyone to do work for the Project. A person who does not want to do a task which has been delegated or assigned to them does not need to do it. [...]

As it has been patiently explained by many other people, this proposal is unrealistic: if the maintainers of some packages were not interested in working on support for sysvinit and nobody else submitted patches then we would probably still have to release them as is even if formally declared unsuitable for a release. On the other hand, if somebody is interested in working on sysvinit support then there is no need for a GR forcing them to do it.

The most elegant outcome of this GR would be a victory of choice 4 ("please do not waste everybody's time with pointless general resolutions"), but Ian Jackson has been clear enough in explaining how he sees the future of this debate:

If my GR passes we will only have to have this conversation if those who are outvoted do not respect the project's collective decision.

If my GR fails I expect a series of bitter rearguard battles over individual systemd dependencies.

There are no significant practical differences between choices 2 "support alternative init systems as much as possible" and 3 "packages may require specific init systems if maintainers decide", but the second option is more explicit in supporting the technical decisions of maintainers and upstream developers.

This is why I think that we need a stronger outcome to prevent discussing this over and over, no matter how each one of us feels about working personally on sysvinit support in the future. I will vote for choices 3, 2, 4, 1.

Categories: FLOSS Project Planets

Junichi Uekawa: emacs-gdb interfaces.

Tue, 2014-11-04 03:10
emacs-gdb interfaces. Emacs and gdb interfaces have evolved. I think the -fullname option is the old interface used by gud-gdb. I think there's also --annotate=3, which was used in emacs23. emacs24's M-x gdb uses -i=mi. Some things in M-x gdb annoy me but I'm not finding good documentation on this matter. Hmm..

Categories: FLOSS Project Planets