Now you can track it via the tracker; thanks to the release team for filing it.
In the past, we have had multiple heated discussions involving systemd. We (the pkg-systemd-maintainers team) would like to better understand why some people dislike systemd.
Therefore, we have created a survey, which you can find at http://survey.zekjur.net/index.php/391182
Please only submit your feedback via the survey and not in this thread; we are not particularly interested in yet another systemd discussion at this point.
The deadline for participating in that survey is 7 days from now, that is 2013-05-26 23:59:00 UTC.
Please participate only if you consider yourself an active member of the Debian community (for example participating in the debian-devel mailing list, maintaining packages, etc.).
Of course, we will publish the results after the survey ends.
the Debian systemd maintainers
Apparently (hi Zhenech, found on Plänet Debian), a man must not only fork a child, plant a tree, etc. in their life but also write a DynDNS service. Perfect for opening a new tag in the wlog called archæology (pagetable.com – Some Assembly Required is also a nice example of these).
Once upon a time, I used SixXS’ heartbeat protocol client for updating the Legacy IP (formerly known as “IPv4”) endpoint address of my tunnel at home (my ISP offers static v4 for some payment now, luckily). Their client sucked, so I wrote one in ksh, naturally.
And because mksh(1) is such a nice language to program in (although I only really began becoming proficient in Korn Shell around 2005-2006, so please take those scripts with a grain of salt; I’d do them much differently nowadays), I also wrote a heartbeat server implementation. In Shell.
The heartbeat server supports different backends (per client), and to date I’ve run backends providing DynDNS (automatically disabling the RR if the client goes offline), an IP (IPv6) tunnel of my own (basically the same setup SixXS has, without knowing theirs), rdate(8)-based time offset monitoring for ntpd(8), and an eMail forwarding service (as one must not run an MTA on a dynamic IP) with it; some of these even in parallel.
Not all of it is documented, but I’ve written up most things in CVS. There also were some issues (mostly to do with killing sleep(1)ing subprocesses not working right), so it occasionally hung, but very rarely. Running it under the supervision of DJB’s dæmontools was nice, as I was already using djbdns, since I do not understand the BIND zone file format and do not consider MySQL a database (and did not even like databases at all, back then).

For DynDNS, the heartbeat server’s backend simply updated the zone file (by adding, updating or deleting the line for the client), then ran tinydns-data, then rsync’d the result to the primary and secondary djbdns servers, then ran zonenotify so the BIND secondaries got a NOTIFY to update their zones (so I never had to bother much with the SOA values, only allow AXFR). That’s a really KISS setup ☺
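For illustration, that backend’s update step could look roughly like this (a minimal sketch with made-up paths and hostnames; the zonenotify invocation is from memory, and the real scripts live in CVS):

#!/bin/mksh
# sketch of the DynDNS backend flow described above (illustrative paths)
client=$1 addr=$2
zonedir=/service/tinydns/root

# add or replace the client's A record ("+fqdn:ip:ttl" in tinydns-data format)
grep -v "^+$client.dyn.example.org:" "$zonedir/data" >"$zonedir/data.tmp"
print "+$client.dyn.example.org:$addr:300" >>"$zonedir/data.tmp"
mv "$zonedir/data.tmp" "$zonedir/data"

# compile data.cdb and push it to the djbdns primary and secondaries
(cd "$zonedir" && tinydns-data)
for srv in primary.example.org secondary.example.org; do
	rsync -a "$zonedir/data.cdb" "$srv:$zonedir/"
done

# poke the BIND secondaries so they transfer the updated zone
zonenotify dyn.example.org bind-secondary.example.org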
Anyway. This is archæology. The scripts are there, feel free to use them, hack on them, take them as examples… even submit back patches if you want. I’ll even answer questions, to some degree, in IRC. But that’s it. I urge people to go use a decent ISP, even if the bandwidth is smaller. To paraphrase a coworker after he cancelled his cable based internet access (I think at Un*tym*dia) before the 2-week trial period was even over: rather have slow but reliable internet at Netc*logne than “that”. People, vote with your purse!
I usually try to avoid administering printers whenever possible. As a result, I end up flailing around the CUPS web interface before I figure out how to re-enable a printer. And, when I get a call to help debug a printer, I can't easily tell people what to do.
When I try to do what I need via the command line, I end up spending at least 10 or 15 minutes re-reading man pages before I piece together the steps.
Here's my attempt to document the steps so I don't have to re-read man pages.

Setup
In these examples, the printer in question is named stability; it is a network printer, and local DNS properly resolves the hostname stability to an IP address.
The cups commands in these examples can be run as a non-root user if that user is in the lpadmin group.
Run the groups command to see if lpadmin is listed. If not:
sudo adduser <your-user-name> lpadmin
Then, to gain access to the new group without logging out and logging in again:
newgrp lpadmin

Network access
First, try to ping the printer:
ping stability
If this fails, restart the printer and/or check network cables. No point in doing anything else until it responds to pings.

Can't submit new jobs to the printer
Next, if the problem is that the printer is greyed out when you try to print a document or your application tells you that the printer is rejecting jobs, confirm this status with:
lpstat -a stability
It will either output:
stability accepting requests since Mon 20 May 2013 10:28:57 AM EDT
Or:
stability not accepting requests since Mon 20 May 2013 10:28:57 AM EDT - Rejecting Jobs
If it is rejecting jobs, try:
/usr/sbin/cupsaccept stability

Accepts new jobs, but just doesn't print
On the other hand, if the printer is accepting jobs, but the jobs are not printing, find out if the printer is enabled with:
lpstat -p stability
You should get either:
printer stability is idle. enabled since Mon 20 May 2013 10:28:57 AM EDT
Or:
printer stability disabled since Mon 20 May 2013 10:35:10 AM EDT - Paused
If it is disabled, you should first see what queued jobs there are:
lpq
If you have a list of duplicate pending jobs, be sure to delete the duplicates to avoid having your print job come out multiple times.
To delete a queued job, type the following (n should be the number in the Job column of the lpq output):
cancel <n>
After you have deleted duplicate jobs, try "enabling" it:
/usr/sbin/cupsenable stability
Then, re-run the lpq command and see if it's now "ready." At this point, the jobs should start printing.

Review of concepts
For review... a few important concepts:
- cupsaccept/cupsreject: controls whether a printer will accept or reject new jobs, regardless of whether the printer is enabled or disabled.
- cupsenable/cupsdisable: controls whether a printer will print existing jobs, regardless of whether the printer is accepting or rejecting new jobs.
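Putting the steps together, a quick triage script could look like this (a sketch assuming the printer name used above and membership in the lpadmin group):

#!/bin/sh
# quick CUPS triage for one printer, following the steps above
printer=stability

ping -c1 "$printer" || exit 1   # no network: fix that first
# accept new jobs again if the queue is rejecting them
lpstat -a "$printer" | grep -q 'not accepting' && /usr/sbin/cupsaccept "$printer"
# re-enable printing of existing jobs if the printer is disabled
lpstat -p "$printer" | grep -q 'disabled' && /usr/sbin/cupsenable "$printer"
lpq -P "$printer"               # review the queue; cancel <n> removes duplicates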
gpg gets it absolutely right by not asking users this question by default. People should not be enabling this option.
Some background: gpg's --ask-cert-level option allows the user who is making an OpenPGP identity certification to indicate just how sure they are of the identity they are certifying. The user's choice is then mapped to one of four levels of OpenPGP certification over a User ID and Public-Key packet, which I'll refer to by their signature type identifiers in the OpenPGP spec:
- 0x10: Generic certification
- The issuer of this certification does not make any particular assertion as to how well the certifier has checked that the owner of the key is in fact the person described by the User ID.
- 0x11: Persona certification
- The issuer of this certification has not done any verification of the claim that the owner of this key is the User ID specified.
- 0x12: Casual certification
- The issuer of this certification has done some casual verification of the claim of identity.
- 0x13: Positive certification
- The issuer of this certification has done substantial verification of the claim of identity.
Most OpenPGP implementations make their "key signatures" as 0x10 certifications. Some implementations can issue 0x11-0x13 certifications, but few differentiate between the types.
By default (if --ask-cert-level is not supplied), gpg issues certificates ("signs keys") using 0x10 (generic) certifications, with the exception of self-sigs, which are made as type 0x13 (positive).
When interpreting certifications, gpg does distinguish between different certifications in one particular way: 0x11 (persona) certifications are ignored; other certifications are not. (users can change this cutoff with the --min-cert-level option, but it's not clear why they would want to do so).
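You can see these levels when listing certifications: gpg prints the level as a digit after "sig" (blank for 0x10 generic, 1 through 3 for persona, casual and positive). A made-up example (the key IDs and user IDs are invented, and the annotations on the right are mine):

$ gpg --list-sigs alice@example.org
pub   2048R/0A1B2C3D 2013-01-01
uid                  Alice Example <alice@example.org>
sig 3        0A1B2C3D 2013-01-01  Alice Example <alice@example.org>    <- 0x13 self-sig
sig          4E5F6A7B 2013-02-14  Bob Example <bob@example.org>        <- 0x10 generic
sig 2        8C9D0E1F 2013-03-02  Carol Example <carol@example.org>    <- 0x12 casual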
So there is no functional gain in declaring the difference between a "normal" certification and a "positive" one, even if there were a well-defined standard by which to assess the difference between the "generic" and "casual" or "positive" levels; and if you're going to make a "persona" certification, you might as well not make one at all.
And it gets worse: the problem is not just that such an indication is functionally useless; encouraging people to make these kinds of assertions actively leaks a more detailed social graph than the default blanket 0x13-for-self-sigs, 0x10-for-everyone-else policy would.
A richer public social graph means more data that can feed the ravenous and growing appetite of the advertising-and-surveillance regimes. I find these regimes troubling. I admit that people often leak much more information than this indication of "how well do you know X" via tools like Facebook, but that's no excuse to encourage them to leak still more, or to acclimatize people to the idea that the details of their personal relationships should by default be public knowledge.
Lastly, the more we keep the OpenPGP network of identity certifications (a.k.a. the "web of trust") simple, the easier it is to make sensible and comprehensible and predictable inferences from the network about whether a key really does belong to a given user. Minimizing the complexity and difficulty of deciding to make a certification helps people streamline their signing processes and reduces the amount of cognitive overhead people spend just building the network in the first place.
You may not know this, but I am a huge PowerDNS fan. This may be because it is so simple to use, supports different databases as backends, or maybe just because I do not like BIND; pick one.
I also happen to live in Germany, where ISPs usually do not give static IP-addresses to private customers unless you pay extra or limit yourself to a bunch of providers that offer good service but rely on old (DSL) technology, limiting you to some 16MBit/s down and 1MBit/s up. Luckily my ISP does not force the IP-address change, but it does happen from time to time (usually once in a couple of months). To access the machine(s) at home while on a non-IPv6-capable connection, I have been using my old (old, old, old) DynDNS.com account and pointing a CNAME from under die-welt.net to it.
Some time ago, DynDNS.com started supporting AAAA records in their zones and I was happy: no need to type hostname.ipv6.kerker.die-welt.net to connect via v6 — just let the application decide. Well, yes, almost. It’s just that DynDNS.com resets the AAAA record when you update the A record with ddclient, and there is currently no IPv6 support in any of the DynDNS.com clients for Linux. So I end up with no AAAA record and am not as happy as I should be.
Last Friday I got a mail from DynDNS:
Starting now, if you would like to maintain your free Dyn account, you must now log into your account once a month. Failure to do so will result in expiration and loss of your hostname. Note that using an update client will no longer suffice for this monthly login. You will still continue to get email alerts every 30 days if your email address is current.
Yes, thank you very much…
Given that I have enough nameservers under my control and love hacking, I started writing my own dynamic DNS service. Actually, you cannot call it a service. Or dynamic. But it’s my own, and it does DNS: powerdyn. It is really just a script that can update DNS records in SQL (from which PowerDNS serves the zones).
When you design such a “service”, you first think about user authentication and proper information transport. The machine that runs my PowerDNS database is reachable via SSH, so let’s use SSH for that. You do not only get user authentication, server authentication and properly encrypted data transport; you also do not have to try hard to find out the IP-address you want to point the hostname to: just use $SSH_CLIENT from your environment.
If you expected further explanation of what has to be done next: sorry, we’re done. We have the user (or hostname) by looking at the SSH credentials, and we have the IP-address to update it to if the data in the database is outdated. The only thing missing is some execution daemon or … cron(8). :)
The machine at home has the following cron entry now:
*/5 * * * * ssh -4 -T -i /home/evgeni/.ssh/powerdyn_rsa email@example.com
This connects to the machine with the database via v4 (my IPv6 address does not change) and that’s all.
As an alternative, one can add the ssh call in /etc/network/if-up.d/, /etc/ppp/ip-up.d/ or /etc/ppp/ipv6-up.d (depending on your setup) to be executed every time the connection goes up.
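A minimal sketch of such a hook (the interface name is an assumption; if-up.d scripts must be executable and must not have a file extension):

#!/bin/sh
# /etc/network/if-up.d/powerdyn: update DNS whenever the uplink comes up
[ "$IFACE" = "ppp0" ] || exit 0
exec ssh -4 -T -i /home/evgeni/.ssh/powerdyn_rsa email@example.com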
The machine with the database has the following authorized_keys entry for the powerdyn user:
no-agent-forwarding,no-port-forwarding,no-pty,no-X11-forwarding,no-user-rc,\
command="/home/powerdyn/powerdyn/powerdyn dorei.kerker.die-welt.net" ssh-rsa AAAA... evgeni@dorei
By forcing the command, the user has no way to get at the database credentials the script uses to write to the database, and cannot update a different host either. That seems secure enough for me. It won’t scale to a setup like DynDNS.com’s, and the user management sucks (you even have to create the entries in the database first; the script can only update them), but it works fine for me and I bet it would for others too :)
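For the curious, the server-side update boils down to something like this (a minimal sketch with hypothetical table and column names loosely following PowerDNS’s generic SQL schema; the real script is powerdyn itself):

#!/bin/sh
# called as a forced SSH command with the allowed hostname as argument
host="$1"
ip="${SSH_CLIENT%% *}"   # SSH_CLIENT is "client-ip client-port server-port"

# update the A record only if it actually changed (names are illustrative)
psql -d pdns -c "UPDATE records SET content = '$ip', change_date = $(date +%s) \
    WHERE name = '$host' AND type = 'A' AND content <> '$ip';"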
Update: included suggestions by XX and Helmut from the comments.
Or rather, hello Planet!
Here’s a somewhat traditional introductory post.
I’m Nicolas Dandrimont, I’m French, and I’m a sysadmin at a grande école, where I’m mostly in charge of the GNU/Linux workstations and servers.
In Debian, I’m a DM, currently in the NM queue, so I might become a DD soon-ish. I am (rather inactively) co-maintaining a few packages. In my Debian “career”, I have been involved in OCaml packaging and Python packaging, although lately most of my time has been spent on Google Summer of Code (mentor for two mentors.debian.net projects in 2012, org admin for Debian in 2013), and on mentors.debian.net.
In other free-software-related projects, I own a RepRap 3D printer, and I have grown some interest in the related software, e.g. Slic3r and printrun. There has been a lot of action in Fedora around packaging 3D-printing-related software, and it’d be great to get a team together to work on that in Debian during the jessie release cycle. Consider this a call for interested parties!
Hopefully I’ll be able to make regular updates on the work I do in Debian and free software, so stay tuned!
Besides working on the preparation of the Perl 5.18 transition, I also looked into some RC bugs:
- #542564 – xmlroff: "xmlroff: uses libgnomeprint which is scheduled for removal"
drop build dependency and disable in ./configure, upload to DELAYED/2
- #665506 – src:ario: "ario: Including individual glib headers no longer supported"
apply patch from Michael Biebl, upload to DELAYED/2, overridden by a faster upload from another bug-squashing DD
- #665530 – src:getstream: "getstream: Including individual glib headers no longer supported"
add patch from Michael Biebl, upload to DELAYED/2
- #665555 – src:gxine: "gxine: Including individual glib headers no longer supported"
add info about next build failure to bug report
- #665573 – src:librcc: "librcc: Including individual glib headers no longer supported"
include patch from Colin Watson, upload to DELAYED/2
- #665579 – src:meanwhile: "meanwhile: Including individual glib headers no longer supported"
apply patch from Michael Biebl, upload to DELAYED/2
- #665609 – src:sagasu: "sagasu: Including individual glib headers no longer supported"
apply patch from Michael Biebl, upload to DELAYED/2
- #665628 – src:xmlroff: "xmlroff: Including individual glib headers no longer supported"
apply patch from Michael Biebl, upload to DELAYED/2
- #707686 – dhelp: "dhelp: FTBFS and uninstallable in sid: needs ruby-gettext"
upload last week's patch to DELAYED/2
- #708598 – src:libgeo-ip-perl: "libgeo-ip-perl: FTBFS: CAPI must be at least 1.4.8 - Please update"
upload new upstream release (pkg-perl)
- #708730 – libanyevent-perl: "libanyevent-perl: architecture specific constants in an arch:all package (again)"
switch back to arch:any (pkg-perl)
- #708766 – libimager-qrcode-perl: "libimager-qrcode-perl: Update for newer libimager-perl needed"
file a bug with patch (update for newer libimager-perl)
Today, I played at TC Cantincrode in Mortsel, Belgium, in the first round. This is the first year I'm playing tennis competitively, so I was expecting to lose by a pretty wide margin. Now while I didn't win, the margin wasn't as wide as I'd expected; 6/4 - 6/3 isn't too bad for the non-ranked beginner that I am. For comparison: I lost my previous match with 6/2 - 6/0, and I was not unhappy about that.
Part of this was due to my opponent (by his own admission) not playing his best; but still, I'm quite happy about my result here.
My next match probably won't be as good. Oh well.
I use RSS feeds to keep up with academic journals. Because of an undocumented and unexpected feature (bug?) in my (otherwise wonderful) free software newsreader NewsBlur, many articles published over the last year were marked as having been read before I saw them.
Over the last week, I caught up. I spent hours going through abstracts and downloading papers that looked interesting or relevant to my research. Because I did this for hundreds of articles, it gave me an unusual opportunity to reflect on my journal reading practices in a systematic way.
On a number of occasions, there were potentially interesting articles in non-open access journals that neither MIT nor Harvard subscribes to and that were otherwise not accessible to me. In several cases where the research was obviously important to my work, I made an interlibrary request, emailed the papers’ authors for copies, or tracked down a colleague at an institution with access.
Of course, articles that look potentially interesting from the title and abstract often end up being less relevant or well executed on closer inspection. I tend to cast a wide net, skim many articles, and put them aside when it’s clear that the study is not for me. This week, I downloaded many of these possibly relevant papers to, at least, give a skim. But only if I could download them easily. On three or four occasions, I found inaccessible articles at this margin of relevance. In these cases, I did not bother trying to track down the articles.
Of course, what appear to be marginally relevant articles sometimes end up being a great match for my research, and I will end up citing and building on the work. I found several surprisingly interesting papers last week. The articles that were locked up have no chance at this.
When people suggest that open access hinders the spread of scholarship, a common retort is that the people who need the work have or can finagle access. For the papers we know we need, this might be true. As someone with access to two of the most well endowed libraries in academia who routinely requests otherwise inaccessible articles through several channels, I would have told you, a week ago, that locked-down journals were unlikely to keep me from citing anybody.
So it was interesting watching myself do a personal cost calculation in a way that sidelined published scholarship — and that open access publishing would have prevented. At the margin of relevance to one's research, open access may make a big difference.
I've been trying for three weeks to live-stream the picture from the camera onto the local network. I have tried crtmpserver and vlc, read several dozens of how-tos, but so far I have not been able to get a streaming setup working, no matter what I tried.
Hence my plea to the lazy web: does anyone have such a setup running on top of Debian? Would you please let me know how you did it?
Thanks a lot!
NP: Eels: End Times
Support is included in 1.0.0 for building Debian packages using sbuild in response to subversion commits or changes in firstname.lastname@example.org (by using apt as a version control handler) for any architecture and build environment which sbuild can support. There is also an example git commit template. Pybit has been designed to be fully extensible, so support for RPM or other package formats can be added, as well as other version control handlers, build environments and architectures. Pybit is also scalable: when one type of client is struggling with the workload, another machine of the same architecture can be added to the pool to share the load. Pybit can also build a package for any number of architectures and build environments at the same time. The Pybit web interface provides an at-a-glance summary of all current builds, as well as options to blacklist certain combinations, cancel and retry specific jobs, and add and monitor each pybit client. Current use cases include:
- Rapidly changing VCS - one or more subversion repositories with lots of Debian packages, built automatically for any number of build environments and architectures every time the debian/changelog is modified (see the hook sketch after this list). Clean chroot builds provide continuous integration testing of every package.
- Rebuilding the archive with different compilers or flags - a dedicated email account subscribed to email@example.com feeding messages through procmail to the changes-debian hook, passing build requests to the apt handler to rebuild each package in your own sbuild chroots, using whatever environments, suites and build options can be configured within those chroots.
- something else we haven't thought of yet ... there is scope for a lot more hooks, package formats, chroot tools and handler plugins.
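As an illustration of the first use case, a Subversion post-commit hook could filter for changelog edits before handing off to pybit (a sketch; the hook path and its interface are assumptions, so check the scripts shipped with pybit-client):

#!/bin/sh
# post-commit: trigger a pybit build only when debian/changelog changed
REPOS="$1"
REV="$2"
if svnlook changed -r "$REV" "$REPOS" | grep -q 'debian/changelog$'; then
    /usr/share/pybit/svn-hook "$REPOS" "$REV"   # hypothetical hook location
fi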
Most UNIX users have heard of the nice utility, used to run a command with a lower priority to make sure that it only runs when nothing more important is trying to get a hold of the CPU:
nice long_running_script.sh
That only deals with part of the problem though, because the CPU is not all there is. A low-priority command could still interfere with other tasks by stealing valuable I/O cycles (e.g. accessing the hard drive).

Prioritizing I/O
Another Linux command, ionice, allows users to set a process's I/O priority to be lower than that of all other processes.
Here's how to make sure that a script doesn't get to do any I/O unless the resource it wants to use is idle:
sudo ionice -c3 hammer_disk.sh
The above only works as root, but the following is a pretty good approximation that works for non-root users as well:
ionice -n7 hammer_disk.sh
You may think that running a command with both nice and ionice would have absolutely no impact on other tasks running on the same machine, but there is one more aspect to consider, at least on machines with limited memory: the disk cache.

Polluting the disk cache
If you run a command (for example, a program that goes through the entire file system checking various things), you will find that the kernel will start pulling more files into its cache and expunging cache entries used by other processes. This can have a very significant impact on a system as useful portions of memory are swapped out.
For example, on my laptop, the nightly debsums, rkhunter and tiger cron jobs essentially clear my disk cache of useful entries and force the system to slowly page everything back into memory as I unlock my screen saver in the morning.
Thankfully, there is now a solution for this in Debian: the nocache package.
This is what my long-running cron jobs now look like:
nocache ionice -c3 nice long_running.sh

Turning off disk syncs
Another relatively unknown tool, which I would certainly not recommend for all cron jobs but is nevertheless related to I/O, is eatmydata.
If you wrap it around a command, it will run without bothering to periodically make sure that it flushes any changes to disk. This can speed things up significantly but it should obviously not be used for anything that has important side effects or that cannot be re-run in case of failure.
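For example, creating a throwaway build chroot is a good fit, since a crash simply means re-running the command (the suite, target path and mirror below are illustrative):

eatmydata debootstrap sid /tmp/sid-chroot http://ftp.debian.org/debian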
After all, its name is very appropriate. It will eat your data!
Thanks to DoctorMo for the hilarious photo. It’s just so good.
We’ve got Classes working, the usual fixes from the ‘crew, and native macros. Huzzah!
I’ve had to take the site down for now (well, stop updating it) because of a vulnerability I introduced (macros allow arbitrary code to run), which means, if anyone’s keen, they should add the sandboxing code to the Hy Site as well!
More coming soon!
If I did everything right, this post will not appear on any RSS feed yet still make it to my blog to maintain history.
The UDD bugs interface currently knows about the following release critical bugs:
- In Total:
- Affecting Jessie: 214 That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 183 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 43 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 15 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
    - 125 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
  - Affecting Jessie only: 31 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze release cycle?

Week  Squeeze        Wheezy         Diff
43    284 (213+71)   468 (332+136)  +184 (+119/+65)
44    261 (201+60)   408 (265+143)  +147 (+64/+83)
45    261 (205+56)   425 (291+134)  +164 (+86/+78)
46    271 (200+71)   401 (258+143)  +130 (+58/+72)
47    283 (209+74)   366 (221+145)  +83 (+12/+71)
48    256 (177+79)   378 (230+148)  +122 (+53/+69)
49    256 (180+76)   360 (216+155)  +104 (+36/+79)
50    204 (148+56)   339 (195+144)  +135 (+47/+90)
51    178 (124+54)   323 (190+133)  +145 (+66/+79)
52    115 (78+37)    289 (190+99)   +174 (+112/+62)
1     93 (60+33)     287 (171+116)  +194 (+111/+83)
2     82 (46+36)     271 (162+109)  +189 (+116/+73)
3     25 (15+10)     249 (165+84)   +224 (+150/+74)
4     14 (8+6)       244 (176+68)   +230 (+168/+62)
5     2 (0+2)        224 (132+92)   +222 (+132/+90)
6     release!       212 (129+83)   +212 (+129/+83)
7     release+1      194 (128+66)   +194 (+128/+66)
8     release+2      206 (144+62)   +206 (+144/+62)
9     release+3      174 (105+69)   +174 (+105/+69)
10    release+4      120 (72+48)    +120 (+72/+48)
11    release+5      115 (74+41)    +115 (+74/+41)
12    release+6      93 (47+46)     +93 (+47/+46)
13    release+7      50 (24+26)     +50 (+24/+26)
14    release+8      51 (32+19)     +51 (+32/+19)
15    release+9      39 (32+7)      +39 (+32/+7)
16    release+10     20 (12+8)      +20 (+12/+8)
17    release+11     24 (19+5)      +24 (+19/+5)
18    release+12     2 (2+0)        +2 (+2/+0)
Graphical overview of bug stats thanks to azhag:
Howdy! I’m sure most people are aware of the recent release of Moblin 2.0, a user experience for netbooks. I’m going to write a few blog posts about how the Moblin user experience is built on the awesome technologies in the GNOME platform.
So, first up, let’s look at the Myzone. We’re starting here since this is the first thing I really worked on in the Moblin UX, and I’ve been able to see it through from early ideas to the 2.0 and 2.1 releases.
So, deep breath. The idea behind the Myzone is to provide a springboard to the things that matter to you most: your recent files and the web pages you’ve visited, your upcoming events and things you need to do, what’s happening on social web services, and your favourite applications.
Now then, that’s the theory; how does it work?
- Recent files: Recent file information is pulled from the GtkRecentManager, and the thumbnails are pulled from the XDG thumbnail specification directory. Metadata for the file comes courtesy of gio, which I presume comes from shared-mime-info. Yay. By using the GtkRecentManager for all our recent activity metadata across the platform, we’re allowing legacy GNOME applications to just work. Sweet.
- Events and tasks: These are pulled from EDS using libjana, a calendaring library primarily developed by Chris Lord (of Dates fame). A couple of months back (well, uh, March) I enhanced libjana to support tasks, and thus we are able to reuse the existing Tasks/Dates apps for interacting with the calendar.
- Favourite apps: Here I let the side down. I use a quite crazy custom format for doing this, which frankly stinks. I’m going to try to sit down with the GNOME Shell guys to see if we can come up with a better way of dealing with user-originated application metadata.
- Social networking/web service integration: This comes courtesy of Mojito and librest, two projects that I and the esteemed Ross Burton have been working on. Mojito is a project that pulls content from a variety of web services into a centralised place, abstracting away some of the complexity and making it trivial to query. librest is a library meant to keep developers happy even though they’re having to deal with web services; it does this by making requests and parsing the results simple.
An important aspect of the Moblin user experience is communicating with others, and this panel provides quick access to do so. The core of the content is provided by an abstraction, simplification and aggregation library called Anerley. This provides a “feed” of “items” (an addressbook of people) that aggregates across the system addressbook, powered by EDS, and your IM roster, powered by Telepathy. You have a small set of actions you can perform on these people, such as starting an IM conversation, sending an email, or editing them with Contacts. The core of our IM experience is supplied by the awesome Empathy. We’ve been working with the upstream maintainers to accommodate some of the needs of Moblin in the upstream source. This included the improvements to the accounts dialog and wizard that landed for GNOME 2.28.
One of the biggest problems with the IM experience in Moblin 2.0 was that it was easy to miss when somebody was talking to you. If you were looking away when the notification popped up, whoops, it’s gone. With our switch to Mission Control 5, I was able to integrate a Telepathy Observer into Anerley and the People Panel. An Observer is informed of channels that are requested on the system. This allows us to show ongoing conversations in the panel and, by exploiting channel requests and window presentation, to let the user switch between them. This wouldn’t have been possible without the assistance of the nice folks in #telepathy and at Collabora: Sjoerd, Will, Jonny and countless others.