FLOSS Project Planets

Toshio Kuratomi: Pattern or Antipattern? Splitting up initialization with asyncio

Planet Python - 3 hours 54 min ago

“O brave new world, That has such people in’t!” – William Shakespeare, The Tempest

Edit: Jean-Paul Calderone (exarkun) has a very good response to this detailing why it should be considered an antipattern. He has some great thoughts on the implicit contract that a programmer is signing when they write an __init__() method and the maintenance cost that is incurred if a programmer breaks those expectations. Definitely worth reading!

Instead of spending the Thanksgiving weekend fighting crowds of shoppers I indulged my inner geek by staying at home on my computer. And not to shop online either — I was taking a look at Python-3.4’s asyncio library to see whether it would be useful in general, run of the mill code. After quite a bit of experimenting I do think every programmer will have a legitimate use for it from time to time. It’s also quite sexy. I think I’ll be a bit prone to overusing it for a little while ;-)

Something I discovered, though — there’s a great deal of good documentation and blog posts about the underlying theory of asyncio and how to implement some broader concepts using asyncio’s API. There are quite a few tutorials that skim the surface of what you can theoretically do with the library without going into much depth. And there’s a definite lack of examples showing how people are taking asyncio’s API and applying it to real-world problems.

That lack is both exciting and hazardous. Exciting because it means there’s plenty of neat new ways to use the API that no one’s made into a wide-spread and oft-repeated pattern yet. Hazardous because there’s plenty of neat new ways to abuse the API that no one’s thought to write a post explaining why not to do things that way before. My joke about overusing it earlier has a large kernel of truth in it… there’s not a lot of information saying whether a particular means of using asyncio is good or bad.

So let me mention one way of using it that I thought about this weekend — maybe some more experienced tulip or twisted programmers will pop up and tell me whether this is a good use or bad use of the APIs.

Let’s say you’re writing some code that talks to a microblogging service. You have one class that handles both posting to the service and reading from it. As you write the code you realize that there’s some time consuming tasks (for instance, setting up an on-disk cache for posts) that you have to do in order to read from the service that you do not have to wait for if your first actions are going to be making new posts. After a bit of thought, you realize you can split up your initialization into two steps. Initialization needed for posting will be done immediately in the class’s constructor and initialization needed for reading will be setup in a future so that reading code will know when it can begin to process. Here’s a rough sketch of what an implementation might look like:

import os
import sqlite3
import sys

import asyncio
import aiohttp

class Microblog:
    def __init__(self, url, username, token, cachedir):
        self.token = token
        self.username = username
        self.url = url
        loop = asyncio.get_event_loop()
        self.init_future = loop.run_in_executor(None, self._reading_init, cachedir)

    def _reading_init(self, cachedir):
        # Mainly setup our cache
        self.cachedir = cachedir
        os.makedirs(cachedir, mode=0o755, exist_ok=True)
        self.db = sqlite3.connect(os.path.join(cachedir, 'cache.sqlite'))
        # Create tables, fill in some initial data, you get the picture
        [....]

    @asyncio.coroutine
    def post(self, msg):
        data = dict(payload=msg)
        headers = dict(Authorization=self.token)
        reply = yield from aiohttp.request('post', self.url, data=data, headers=headers)
        # Manipulate reply a bit
        [...]
        return reply

    @asyncio.coroutine
    def sync_latest(self):
        # Synchronize with the initialization we need before we can read
        yield from self.init_future
        data = dict(per_page=100, page=1)
        headers = dict(Authorization=self.token)
        reply = yield from aiohttp.request('get', self.url, data=data, headers=headers)
        # Stuff the reply in our cache

if __name__ == '__main__':
    chirpchirp = Microblog('http://chirpchirp.com', 'a.badger', TOKEN, '/home/badger/cache/')
    loop = asyncio.get_event_loop()
    # Contrived -- real code would probably have a coroutine to take user input
    # and then submit that while interleaving with displaying new posts
    asyncio.async(chirpchirp.post(' '.join(sys.argv[1:])))
    loop.run_until_complete(chirpchirp.sync_latest())

Some of this code is just there to give an idea of how this could be used. The real questions revolve around splitting up initialization into two steps:

  • Is yield from the proper way for sync_latest() to signal that it needs self.init_future to finish before it can continue?
  • Is it good form to potentially start using the object for one task before __init__ has finished all tasks?
  • Would it be better style to setup posting and reading separately? Maybe a reading class and a posting class or the old standby of invoking _reading_init() the first time sync_latest() is called?
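As a rough sketch of that last alternative (my own hypothetical code, not from the post, and written with modern async/await syntax rather than the Python 3.4 yield from style used above), lazy initialization on first read might look like:

```python
import asyncio

class Microblog:
    """Hypothetical sketch: the read-side setup is deferred until first use."""

    def __init__(self, url, username, token, cachedir):
        self.url = url
        self.username = username
        self.token = token
        self.cachedir = cachedir
        self.init_future = None   # created on first read, not in __init__
        self.cache_ready = False

    def _reading_init(self):
        # The expensive cache/database setup would go here; this stub
        # just records that it ran.
        self.cache_ready = True

    async def sync_latest(self):
        # Start the read-side setup only the first time anyone reads.
        if self.init_future is None:
            loop = asyncio.get_running_loop()
            self.init_future = loop.run_in_executor(None, self._reading_init)
        await self.init_future
        # ... fetch new posts and stuff them into the now-ready cache ...
```

With this shape, code that only ever posts never pays for the cache setup at all, at the cost of a check at the top of every reading method.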

Categories: FLOSS Project Planets

Craig Small: Mudlet 3 beta

Planet Debian - Sun, 2014-12-21 23:18


A break from wordpress: I was trying to get the beta version of mudlet 3.0 compiling. On the surface the program looks a lot like the existing v2.0 that is currently within Debian. The developers have switched from qt4 to qt5, which means a lot of dependency fun for me, but I got there in the end.

As it is only a beta and not their final release, the package is located within the Debian experimental release. Once 3.0 hits a final release, I’ll switch it to sid.  If you do use the current mudlet, give 3.0 a try. I’d be interested to know what you think.

For people that have not heard of mudlet before, it is a mud client. mud stands for Multi User Dungeon, which is a multiplayer text-only game. While you can use something as simple as telnet to connect to a mud server, most people use some sort of specialised client. The advantages are that you can display extra information (such as health stats) in a different window, as well as aliases (macros or special commands, basically the same idea as a bash alias) and triggers (commands that are run depending on what the mud server sends you). The triggers take away some of the repeated things you may have to do, such as sipping a health tonic when your health level drops below some limit.

One thing you might notice in the experimental 3.0 beta is that the icon is missing. lintian picked up that the license was wrong; it was by-nc which meant it can’t go into main. I’ve removed the icon until upstream sorts it out.

Categories: FLOSS Project Planets


Planet Apache - Sun, 2014-12-21 22:08

I asked Jim Verhey at ReinCARnation to stop working on my bus in mid October. I didn't have a client lined up for November and couldn't afford to keep paying for it.

Today, I journeyed to Colorado Springs to talk with Jim. I hoped to convince him to give me a fixed bid to finish the project. When I got there, he surprised me with a finished paint job! You can imagine the look on my face when he opened the door and I saw this beauty!! OMG - I LOVE IT SO MUCH!! The colors are perfect and the paint job is exquisite!!

Jim said he felt bad for all I’ve been through with this project and finishing it was my Christmas Present. BEST CHRISTMAS PRESENT EVER!!

Album on Flickr →

There's still more work to be done before it's street legal. However, Jim did give me a fixed-bid price to finish it. If I can afford it, the bus will be done by April 1, 2015. Then it's off to the stereo shop (1 week) and the interior shop (2 weeks). That means I could be driving it in May! YIPPEEE!! Thanks Jim - you are an awesome human being.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-12-21

Planet Apache - Sun, 2014-12-21 18:58
Categories: FLOSS Project Planets

3C Web Services: An introduction to Drupal Hooks

Planet Drupal - Sun, 2014-12-21 16:48
Drupal Hooks are extremely powerful and are one of the main attractions for many people to Drupal. These hooks allow you to view, change and work with various data at specific points of time in Drupal’s processes.
Categories: FLOSS Project Planets

Gregor Herrmann: GDAC 2014/21

Planet Debian - Sun, 2014-12-21 16:32

today I got two private mails from fellow DDs; both were personal messages & a pleasure to read. but what I also realized is that both emails were encrypted. which reminded me how hard it is to communicate securely with most people, & how easy it is with debian people. – & this is something we can be proud of!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: FLOSS Project Planets

Jean-Paul Calderone: Asynchronous Object Initialization - Patterns and Antipatterns

Planet Python - Sun, 2014-12-21 14:54

I caught Toshio Kuratomi's post about asyncio initialization patterns (or anti-patterns) on Planet Python. This is something I've dealt with a lot over the years using Twisted (one of the sources of inspiration for the asyncio developers).

To recap, Toshio wondered about a pattern involving asynchronous initialization of an instance. He wondered whether it was a good idea to start this work in __init__ and then explicitly wait for it in other methods of the class before performing the distinctive operations required by those other methods. Using asyncio (and using Toshio's example with some omissions for simplicity) this looks something like:

class Microblog:
    def __init__(self, ...):
        loop = asyncio.get_event_loop()
        self.init_future = loop.run_in_executor(None, self._reading_init)

    def _reading_init(self):
        # ... do some initialization work,
        # presumably expensive or otherwise long-running ...

    def sync_latest(self):
        # Don't do anything until initialization is done
        yield from self.init_future
        # ... do some work that depends on that initialization ...

It's quite possible to do something similar to this when using Twisted. It only looks a little bit different:

class Microblog:
    def __init__(self, ...):
        self.init_deferred = deferToThread(self._reading_init)

    def _reading_init(self):
        # ... do some initialization work,
        # presumably expensive or otherwise long-running ...

    def sync_latest(self):
        # Don't do anything until initialization is done
        yield self.init_deferred
        # ... do some work that depends on that initialization ...

Despite the differing names, these two pieces of code basically do the same thing:

  • run _reading_init in a thread from a thread pool
  • whenever sync_latest is called, first suspend its execution until the thread running _reading_init has finished running it

Maintenance costs

One thing this pattern gives you is an incompletely initialized object. If you write m = Microblog() then m refers to an object that's not actually ready to perform all of the operations it supposedly can perform. It's either up to the implementation or the caller to make sure to wait until it is ready. Toshio suggests that each method should do this implicitly (by starting with yield self.init_deferred or the equivalent). This is definitely better than forcing each call-site of a Microblog method to explicitly wait for this event before actually calling the method.

Still, this is a maintenance burden that's going to get old quickly. If you want full test coverage, it means you now need twice as many unit tests (one for the case where a method is called before initialization is complete and another for the case where the method is called after this has happened). At least. Toshio's _reading_init method actually modifies attributes of self, which means there are potentially many more than just two possible cases. Even if you're not particularly interested in having full automated test coverage (... for some reason ...), you still have to remember to add this yield statement to the beginning of all of Microblog's methods. It's not exactly a ton of work but it's one more thing to remember any time you maintain this code. And this is the kind of mistake that creates a race condition you might not immediately notice - which means you may ship the broken code to clients and get to discover the problem when they start complaining about it.

Diminished flexibility

Another thing this pattern gives you is an object that does things as soon as you create it. Have you ever had a class with a __init__ method that raised an exception as a result of a failing interaction with some other part of the system? Perhaps it did file I/O and got a permission denied error or perhaps it was a socket doing blocking I/O on a network that was clogged and unresponsive. Among other problems, these cases are often difficult to report well because you don't have an object to blame the problem on yet. The asynchronous version is perhaps even worse since a failure in this asynchronous initialization doesn't actually prevent you from getting the instance - it's just another way you can end up with an incompletely initialized object (this time, one that is never going to be completely initialized and use of which is unsafe in difficult to reason-about ways).

Another related problem is that it removes one of your options for controlling the behavior of instances of that class. It's great to be able to control everything a class does just by the values passed in to __init__ but most programmers have probably come across a case where behavior is controlled via an attribute instead. If __init__ starts an operation then instantiating code doesn't have a chance to change the values of any attributes first (except, perhaps, by resorting to setting them on the class - which has global consequences and is generally icky).

Loss of control

A third consequence of this pattern is that instances of classes which employ it are inevitably doing something. It may be that you don't always want the instance to do something. It's certainly fine for a Microblog instance to create a SQLite3 database and initialize a cache directory if the program I'm writing which uses it is actually intent on hosting a blog. It's most likely the case that other useful things can be done with a Microblog instance, though. Toshio's own example includes a post method which doesn't use the SQLite3 database or the cache directory. His code correctly doesn't wait for init_future at the beginning of his post method - but this should leave the reader wondering why we need to create a SQLite3 database if all we want to do is post new entries.

Using this pattern, the SQLite3 database is always created - whether we want to use it or not. There are other reasons you might want a Microblog instance that hasn't initialized a bunch of on-disk state too - one of the most common is unit testing (yes, I said "unit testing" twice in one post!). A very convenient thing for a lot of unit tests, both of Microblog itself and of code that uses Microblog, is to compare instances of the class. How do you know you got a Microblog instance that is configured to use the right cache directory or database type? You most likely want to make some comparisons against it. The ideal way to do this is to be able to instantiate a Microblog instance in your test suite and uses its == implementation to compare it against an object given back by some API you've implemented. If creating a Microblog instance always goes off and creates a SQLite3 database then at the very least your test suite is going to be doing a lot of unnecessary work (making it slow) and at worst perhaps the two instances will fight with each other over the same SQLite3 database file (which they must share since they're meant to be instances representing the same state). Another way to look at this is that inextricably embedding the database connection logic into your __init__ method has taken control away from the user. Perhaps they have their own database connection setup logic. Perhaps they want to re-use connections or pass in a fake for testing. Saving a reference to that object on the instance for later use is a separate operation from creating the connection itself. They shouldn't be bound together in __init__ where you have to take them both or give up on using Microblog.
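To make that testing point concrete, here is a hypothetical, stripped-down Microblog whose __init__ only stores state, so instances can be compared with == for free. The __eq__ implementation and the load_blog_from_config helper are invented for illustration, standing in for whatever API is actually under test:

```python
class Microblog:
    """Hypothetical, state-only version: __init__ just records configuration."""

    def __init__(self, cache_dir, database_connection):
        self.cache_dir = cache_dir
        self.database_connection = database_connection

    def __eq__(self, other):
        # Two instances are equal if they represent the same configuration.
        return (isinstance(other, Microblog)
                and self.cache_dir == other.cache_dir
                and self.database_connection == other.database_connection)

def load_blog_from_config(config):
    # Stand-in for some API under test that builds a Microblog.
    return Microblog(config['cache_dir'], config['db'])

# In a test, instantiating has no side effects, so comparison is cheap:
expected = Microblog('/home/badger/cache/', 'sqlite:///cache.sqlite')
actual = load_blog_from_config(
    {'cache_dir': '/home/badger/cache/', 'db': 'sqlite:///cache.sqlite'})
assert actual == expected
```

Because neither instance touched the disk, the test needs no temporary directories and no cleanup, and the two instances cannot fight over a shared database file.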


You might notice that these three observations I've made all sound a bit negative. You might conclude that I think this is an antipattern to be avoided. If so, feel free to give yourself a pat on the back at this point.

But if this is an antipattern, is there a pattern to use instead? I think so. I'll try to explain it.

The general idea behind the pattern I'm going to suggest comes in two parts. The first part is that your object should primarily be about representing state and your __init__ method should be about accepting that state from the outside world and storing it away on the instance being initialized for later use. It should always represent complete, internally consistent state - not partial state as asynchronous initialization implies. This means your __init__ methods should mostly look like this:

class Microblog(object):
    def __init__(self, cache_dir, database_connection):
        self.cache_dir = cache_dir
        self.database_connection = database_connection

If you think that looks boring - yes, it does. Boring is a good thing here. Anything exciting your __init__ method does is probably going to be the cause of someone's bad day sooner or later. If you think it looks tedious - yes, it does. Consider using Hynek Schlawack's excellent characteristic package (full disclosure - I contributed some ideas to characteristic's design and Hynek occasionally says nice things about me (I don't know if he means them, I just know he says them)).

The second part of the idea is an acknowledgement that asynchronous initialization is a reality of programming with asynchronous tools. Fortunately __init__ isn't the only place to put code. Asynchronous factory functions are a great way to wrap up the asynchronous work sometimes necessary before an object can be fully and consistently initialized. Put another way:

class Microblog(object):
    # ... __init__ as above ...

    @classmethod
    def from_database(cls, cache_dir, database_path):
        # ... or make it a free function, not a classmethod, if you prefer
        loop = asyncio.get_event_loop()
        database_connection = yield from loop.run_in_executor(None, cls._reading_init)
        return cls(cache_dir, database_connection)

Notice that the setup work for a Microblog instance is still asynchronous but initialization of the Microblog instance is not. There is never a time when a Microblog instance is hanging around partially ready for action. There is setup work and then there is a complete, usable Microblog.
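For illustration, here is a self-contained sketch of how calling code might drive such a factory. The lambda is a stand-in for the real, blocking connection setup, and the code uses modern async/await syntax rather than the post's Python 3.4-era yield from style:

```python
import asyncio

class Microblog:
    """Sketch: boring, state-only __init__ plus an asynchronous factory."""

    def __init__(self, cache_dir, database_connection):
        self.cache_dir = cache_dir
        self.database_connection = database_connection

    @classmethod
    async def from_database(cls, cache_dir, database_path):
        loop = asyncio.get_running_loop()
        # Stand-in for the real blocking connection setup work:
        database_connection = await loop.run_in_executor(
            None, lambda: 'connection-to-' + database_path)
        return cls(cache_dir, database_connection)

# Callers await the factory; what they get back is fully initialized:
blog = asyncio.run(Microblog.from_database('/home/badger/cache/', 'cache.sqlite'))
```

Tests that do not care about the database can skip the factory entirely and call Microblog() directly with a fake connection.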

This addresses the three observations I made above:

  • Methods of Microblog never need to concern themselves with worries about whether the instance has been completely initialized yet or not.
  • Nothing happens in Microblog.__init__. If Microblog has some methods which depend on instance attributes, any of those attributes can be set after __init__ is done and before those other methods are called. If the from_database constructor proves insufficiently flexible, it's easy to introduce a new constructor that accounts for the new requirements (named constructors mean never having to overload __init__ for different competing purposes again).
  • It's easy to treat a Microblog instance as an inert lump of state. Simply instantiating one (using Microblog(...)) has no side-effects. The special extra operations required if one wants the more convenient constructor are still available - but elsewhere, where they won't get in the way of unit tests and unplanned-for uses.

I hope these points have made a strong case for one of these approaches being an anti-pattern to avoid (in Twisted, in asyncio, or in any other asynchronous programming context) and for the other as being a useful pattern to provide both convenient, expressive constructors while at the same time making object initializers unsurprising and maximizing their usefulness.

Categories: FLOSS Project Planets

More Themes Added

Planet KDE - Sun, 2014-12-21 13:47

After adding the first theme, I was working on a theme about nature. That theme represents basic elements of nature such as trees, flowers, etc. Since KDE Pairs is developed for pre-school children, the objects represented in the themes should be familiar and educational to them. Following are some screenshots after adding the nature theme to the game. These screenshots represent the different game modes: logic, pairs, relations and words.

After drawing the elements of this theme I started drawing for another one, very similar to my previous theme (Fruits): vegetables.

Filed under: KDE, Open Source Tagged: KDE, KDE edu, Pairs, Season of KDE, SoK, theme designing for pairs, Themes
Categories: FLOSS Project Planets

Marley was dead: to begin with. There is no doubt whatever about that.

Planet KDE - Sun, 2014-12-21 13:37

Well, Marley might be dead, but Krita ain’t! So, here’s what’ll be the last beta build of Krita before the festive season really breaks loose. Apart from building, the Krita hackers have had a pretty good week, with lots of deep discussions on animation and coding safety, too.

Oh — and there’s a seasonal surprise for you, if you start one of these builds!

In the new builds, not quite beta 2 yet, just another build in between the last beta and the next beta, there are the following fixes:

  • Stefano Bonicatti fixed a bunch of memory issues, and is tracking more and more of them — also threading issues, and issues with AMD graphics cards.
  • Boudewijn added a new option to use Krita to convert images: krita bla.png --export --export-filename bla.jpg will convert bla.png to bla.jpg. This replaces the calligraconverter utility.
  • The ‘hairy brush’ is now called the ‘bristle brush’
  • PDF export has been fixed to export landscape as landscape and portrait as portrait, instead of everything as portrait, when using any paper size except A4. (A4 already worked fine).
  • The filter dialogs (filters, filter layers, filter masks) no longer show a big thumbnail. The thumbnail was getting in the way of the filter’s name, and frequently didn’t get rendered correctly.
  • Deleting the last node (layer or mask) in a group would lead Krita to having no node selected, causing crashes in filters. That got fixed, too.
  • On loading or creating the first image, the default tool is once again the freehand brush, and besides, the tool options are now shown.
  • Select-all is back
  • In tab-mode, the tabs now keep the right caption
  • A crash when flattening layers was fixed
  • Jouni Pentikäinen fixed the outline cursor preview of a rotating brush
  • Selecting a document size specified in anything but pixels was broken; that works now, too. Thanks to Joe for the detailed bug report!
  • Every open subwindow can now have its own exposure and gamma settings, when an HDR image is loaded and OpenColorIO is active
  • A bunch of fixes to the tablet subsystem were committed by Dmitry. Painting should be much smoother now — and we’ve also figured out why some n-trig devices do weird stuff. They tell us their maximum pressure is 255, but then send us tablet strokes with pressure in the 0 to 1024 range!
  • Wolthera fixed an issue with darkening around the color smudge brush’s radius
  • Dmitry made it possible to drag & drop layers from one subwindow to another.
  • And made it possible again to drag & drop images from web-browsers
  • The locked status of docking panels now is kept properly
  • (But on Windows, the minimum width of the toolbox now is three icons — investigations are ongoing)
  • Scott cleaned up the precision setting in the brush editor
  • Lukáš worked like a beaver on the G’Mic plugin and updated it to the latest version, added proper layer synchronization — and, tada! — on Linux/X11 enabled the interactive filter feature. Interactive Colorize now works on Linux, from within Krita. Press Space for the best effect!
  • Sven and Stefano fixed a crash when opening one image after closing another and changing the OpenGL settings in between.
  • Timothée Giet updated all color-smudge brush presets
  • Dmitry fixed a crash in the duplicate brush engine when using a tablet device
  • Sven fixed loading brush presets from bundles

And that ties in with a lot of work Wolthera has done. Building on Victor Lafon’s work earlier this year to make it possible to package resources — brushes, gradients, patterns, everything — in bundles, she’s hit her stride and figured out what was broken and what needs to be added. But… It’s already possible to load a bundle into Krita, and use the resources!

And here’s the first bundle, with inking brushes! If you want to get into the game, remember, this is still barely ripe. Also check the brush preset guidelines!


There are still 230 bugs open at the moment, down from 234, even though we fixed more than thirty! Some of these bugs might cause data loss, so users are recommended to use the beta builds for testing, not for critical work!

For Linux users, Krita Lime will be updated on Monday. Remember that launchpad is very strict about the versions of Ubuntu it supports. So the update is only available for 14.04 and up.

OpenSUSE users can use the new OBS repositories created by Leinir:

Windows users can choose between an installer and the zip file. You can unzip the zip file anywhere and start Krita by executing bin/krita.exe: this only works if you have the right Microsoft Visual Studio C runtime library. Otherwise, use the MSI file. We only have 64 bits Windows builds at the moment, we’re working on fixing a problem with the 32 bits build.

OSX users can open the dmg and copy krita.app where they want. Note that OSX still is not supported. There are OSX-specific bugs and some features are missing.

Categories: FLOSS Project Planets

Russ Allbery: Review: 2014 Hugos: Novelettes

Planet Debian - Sun, 2014-12-21 13:15

Review: 2014 Hugos: Novelettes, edited by Loncon 3

Publisher: Loncon 3 Copyright: 2014 Format: Kindle

This is another weird "book review" covering the Hugo-nominated novelettes for the 2014 Hugos (given for works published in 2013) at Loncon 3, the 2014 Worldcon. The "editor" is the pool of attendees and supporting members who chose to nominate works, all of which had been previously edited by other editors in their original publication. I received all of these as part of the Hugo voter's packet for being a supporting member, but they all appear to be available for free on the Internet (at least at the time of this writing).

"The Exchange Officers" by Brad Torgersen: An okay, if not particularly ground-breaking, military SF story, ruined for me by the ham-handed introduction of superficial jingoism. The protagonists are doing a tour as remote operators of humanoid battle suits in orbit: not a new premise, but a serviceable one. Since this is military SF, they predictably have to defend a space installation against attackers. So we get a bit of drama, a bit of zero-g combat, and the fun of people learning how to remotely operate suits. You've probably read this before, but it passes the time reasonably well.

Unfortunately, Torgersen decided to make the villains the Chinese military for no adequately-explained reason. (Well, I'm being kind; I suspect the reason is the standard yellow peril nonsense, but that's less generous.) So there is snide commentary about how only the military understand the Chinese threat and a fair bit of old-fashioned jingoism mixed into the story, to its detriment.

If you like this sort of thing, it's a typical example, although it escapes me why people thought it was exceptional enough to warrant nomination. (5)

"The Lady Astronaut of Mars" by Mary Robinette Kowal: Once again, my clear favorite among the stories also won, which is a lovely pattern.

Elma was the female astronaut in an alternate history in which manned space exploration continued to grow, leading to permanent settlement on Mars. She spent lots of time being photographed, being the smiling face of the space program, while her husband worked on the math and engineering of the launches. Now, she's an old woman, taking care of her failing and frail husband, her career behind her. Or so she thinks, before an offer that forces an impossible choice between space and staying with her husband for his final days.

This is indeed the tear-jerker that it sounds like, but it's not as maudlin as it might sound. Kowal does an excellent job with Elma's characterization: she's no-nonsense, old enough to be confident in her opinions, and knows how to navigate through the world. The story is mixed with nostalgia and memories, including a reminder of just what Elma meant to others. It touches on heroism, symbolism, and the horrible choices around dying loved ones, but I thought it did so deftly and with grace. I was expecting the story to be too obvious, but I found I enjoyed the quotidian feel. It's not a story to read if you want to be surprised, but I loved the small touches. (9)

"Opera Vita Aeterna" by Vox Day: Before the review, a note that I consider obligatory. The author of this story is an aggressively misogynistic white supremacist, well-known online for referring to black people as savages and arguing women should not be allowed to vote. To what extent you choose to take that into account when judging his fiction is up to you, but I don't think it should go unsaid.

"Opera Vita Aeterna" is the story of a monestary in a typical fantasy world (at least as far as one can tell from this story; readers of Vox Day's fantasy series will probably know more background). At the start of the story, it gets an unexpected visit from an elf. Not just any elf, either, but one of the most powerful magicians of his society. He comes to the monestary out of curiousity about the god that the monks worship and stays for a project of illuminating their scriptures, while having theological debates with the abbot.

This story is certainly not the offensive tirade that you might expect from its author. Its biggest problem is that nothing of substance happens in the story, either theologically or via more conventional action. It's a lot of description, a lot of talking, a lot of warmed-over Christian apologetics that dodges most of the hard problems, and a lot of assertions that the elf finds something of interest in this monastery. I can believe this could be the case, but Vox Day doesn't really show why. There is, at the end of the story, some actual drama, but I found it disappointing and pointless. It leads nowhere. The theology has the same problem: elves supposedly have no souls, which should be the heart of a theological question or conflict Vox Day is constructing, but that conflict dies without any resolution. We know nothing more about the theology of this world at the end of the story than we do at the beginning.

Some of the descriptions here aren't bad, and the atmosphere seems to want to develop into a story. But that development never happens, leaving the whole work feeling fundamentally pointless. (4)

"The Truth of Fact, the Truth of Feeling" by Ted Chiang: This is another oddly-constructed story, although I think a bit more successful. It's a story in two interwoven parts. One is a fictional essay, told in a non-fiction style, about a man living in a future world with ubiquitous life recording and very efficient search software. Any part of one's life can be easily found and reviewed. The other is the story of a boy from a tribal culture during European colonialism. He learns to read and write, and from that a respect for written records, which come into conflict with the stories that the tribe elders tell about the past.

The purpose of both of these stories is to question both the value and the implications of recording everything in a way that preserves and guarantees the facts instead of individual interpretations. The boy's story calls this into question; the narrator's story offers ambiguous support for its value and a deeper plea for giving people space to change.

I found the style a bit difficult to get used to, since much of it did not feel like a story. But it grew on me as I read it, and the questions Chiang raises have stuck with me since. The problem of how and when to allow for change in others when we have perfect (or at least vastly improved) memory is both important and complicated, and this is one of the better presentations of the problem that I've seen. It's more of a think-y piece, and closer to non-fiction than a story, but I thought it was worth reading. (8)

"The Waiting Stars" by Aliette de Bodard: I keep wanting to like de Bodard's space opera world of AIs and living ships, but it never quite works for me. I've read several stories set in this universe now, and it has some neat ideas, but I always struggle with the characters. This story at least doesn't have quite as much gruesome pregnancy as the previous ones (although there's still some).

"The Waiting Stars" opens with a raid on a ship graveyard, an attempt to rescue and "reboot" an AI starship under the guidance of another Mind. This is intermixed with another story about a woman who was apparently rescued in childhood from birthing ship Minds and raised in a sort of foster institution. This feels like a flashback at first, but its interaction with the rest of the story is something more complicated. The conceptual trick de Bodard pulls here is thought-provoking, but once again I struggled to care about all of the characters. I also found the ending discouraging and unsatisfying, which didn't help. Someone who isn't me might really like this story, but it wasn't my thing. (6)

Rating: 6 out of 10

Categories: FLOSS Project Planets

Python Diary: Introducing VGA Console for Pygame

Planet Python - Sun, 2014-12-21 13:09

I just finished my initial work on a VGA Console emulator for Pygame. This is great if, say, you need to add a retro-style VGA console to your game, or if you want to build an application that takes full advantage of the VGA video adapter. At the moment, I am releasing it as a Preview Demo, and it isn't ready for any production applications or games just yet. However, what is available in this preview is plentiful. You can download the full source from my Bitbucket page, which is linked in the right hand column of my blog. I will also include the source here and try to explain what it can currently do. I have also enclosed a screenshot of the demo app both in this blog post and in the Bitbucket repository for those too lazy to download it and try it themselves in Pygame.

import pygame, sys
from pygame.locals import *
import mmap

clock = pygame.time.Clock()

class VGAConsole(object):
    VGA_PALETTE = (
        (0,0,0), (0,0,168), (0,168,0), (0,168,168),
        (168,0,0), (168,0,168), (168,87,0), (168,168,168),
        (87,87,87), (87,87,255), (87,255,87), (87,255,255),
        (255,87,87), (255,87,255), (255,255,87), (255,255,255),
    )
    US_SHIFTMAP = {
        49: 33, 50: 64, 51: 35, 52: 36, 53: 37, 54: 94, 55: 38,
        56: 42, 57: 40, 48: 41, 96: 126, 45: 95, 61: 43, 91: 123,
        93: 125, 59: 58, 39: 34, 92: 124, 44: 60, 46: 62, 47: 63,
    }
    def __init__(self):
        self.vgabuf = mmap.mmap(-1, 4000)
        pygame.display.init()
        pygame.font.init()
        self.screen = pygame.display.set_mode((640,400),0,8)
        pygame.display.set_caption('VGA Console test')
        self.font = pygame.font.Font('VGA.ttf', 16)
        self.cursor = self.font.render(chr(219),0,self.VGA_PALETTE[7])
        pygame.mouse.set_visible(False)
        self.pos = [0,0]
        self.background = self.VGA_PALETTE[1]
        self.shift = False
        self.cframe = 0
        self.cframes = []
        for c in ('|', '/', '-', '\\',):
            self.cframes.append(self.font.render(c,0,self.VGA_PALETTE[15],self.background))
    def draw(self):
        self.screen.fill(self.background)
        self.vgabuf.seek(0)
        for y in range(0,25):
            for x in range(0,80):
                attr = ord(self.vgabuf.read_byte())
                fg,bg = 7,0
                if attr > 0:
                    fg,bg = attr&0xf, (attr&0xf0)>>4
                c = self.vgabuf.read_byte()
                if ord(c) > 0:
                    self.screen.blit(self.font.render(c,0,self.VGA_PALETTE[fg],self.VGA_PALETTE[bg]), (x*8,y*16))
        self.drawMouse()
        self.drawCursor()
        pygame.display.update()
    def setXY(self, row, col, c):
        self.vgabuf.seek((80*row+col)*2)
        self.vgabuf.write(chr(0x1f)+chr(c))
    def type(self, c):
        if c == 13:
            self.pos[1] = 0
            self.pos[0] +=1
        elif c == 8:
            if self.pos[1] > 0:
                self.pos[1] -=1
                self.setXY(self.pos[0], self.pos[1], 0)
        elif c == 9:
            self.pos[1] += 8
        elif c == 27:
            pygame.quit()
            sys.exit()
        else:
            self.setXY(self.pos[0], self.pos[1], c)
            self.pos[1] +=1
            if self.pos[1] > 80:
                self.pos[1] = 0
                self.pos[0] += 1
    def write(self, text):
        for c in text:
            self.type(ord(c))
    def draw_ascii(self):
        row, col = 10,10
        for c in range(0,255):
            self.setXY(row,col,c)
            col +=1
            if col > 41:
                col = 10
                row+=1
    def draw_window(self, row, col, height, width, title=None):
        self.setPos(row, col)
        brd = chr(205)*(width-1)
        self.write(chr(213)+brd+chr(184))
        for y in range(row+1, row+height):
            self.setXY(y, col, 179)
            self.setXY(y, col+width, 179)
        self.setPos(row+height, col)
        self.write(chr(212)+brd+chr(190))
        if title:
            self.setPos(row, col+((width/2)-len(title)/2))
            self.write(title)
    def clear_window(self, row, col, height, width):
        for y in range(row, row+height+1):
            self.setPos(y, col)
            self.write(chr(0)*(width+1))
    def setPos(self, row, col):
        self.pos = [row, col]
    def clearScreen(self):
        self.vgabuf.seek(0)
        self.vgabuf.write(chr(0)*4000)
        self.setPos(0, 0)
    def mousePos(self):
        x,y = pygame.mouse.get_pos()
        return (y/16, x/8)
    def drawMouse(self):
        row,col = self.mousePos()
        self.screen.blit(self.cursor, (col*8,row*16))
    def drawCursor(self):
        self.screen.blit(self.cframes[self.cframe/3%4], (self.pos[1]*8,self.pos[0]*16))
        self.cframe+=1
    def main(self):
        self.draw_ascii()
        self.draw_window(9,9,9,33, ' ASCII ')
        self.setPos(0, 0)
        self.write('Welcome to VGAConsole!\rC:\>')
        self.draw()
        #pygame.event.set_blocked(MOUSEMOTION)
        while 1:
            clock.tick(30)
            for event in pygame.event.get():
                if event.type == QUIT:
                    pygame.quit()
                    sys.exit()
                elif event.type == MOUSEBUTTONUP:
                    oldpos = self.pos
                    self.clear_window(9, 9, 9, 33)
                    self.pos = oldpos
                elif event.type == KEYDOWN:
                    if event.key == K_LSHIFT or event.key == K_RSHIFT:
                        self.shift = True
                    if event.key > 0 and event.key < 256:
                        c = event.key
                        if self.shift:
                            if c > 96 and c < 123:
                                c-=32
                            elif c in self.US_SHIFTMAP.keys():
                                c = self.US_SHIFTMAP[c]
                        self.type(c)
                elif event.type == KEYUP:
                    if event.key == K_LSHIFT or event.key == K_RSHIFT:
                        self.shift = False
            self.draw()

VGAConsole().main()

As you can see from the screenshot, it's very complete looking, with full support for all those ASCII characters you knew and loved. The display itself is rendered from the same format as the VGA memory, where each character on screen takes 2 bytes. One byte stores attribute information, such as the foreground and background colors, and the other byte is the ASCII character code. In actual VGA memory, the attribute also supported a blink flag, which I omitted in this VGA Console, as it's annoying and I really don't want to see it used... So, rather than supporting blink, it supports a full 16 colors for both the foreground and the background; original VGA memory only supported 8-color backgrounds. Since the buffer uses the same format as the VGA text memory at segment 0xB800 on real hardware, you can technically use a memory dump file created by, say, the BSAVE command. I may enable native BSAVEing and BLOADing of the memory buffer in a future version.
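The two-byte cell layout described above is easy to sketch in isolation. The helper names below (pack_attr, unpack_attr) are hypothetical, but the bit masks mirror exactly what the draw() and setXY() methods in the listing do:

```python
def pack_attr(fg, bg):
    """Pack foreground (0-15) and background (0-15) colors into one
    attribute byte: background in the high nibble, foreground in the low."""
    return ((bg & 0xF) << 4) | (fg & 0xF)

def unpack_attr(attr):
    """Return (fg, bg) from an attribute byte, as draw() decodes it."""
    return attr & 0xF, (attr & 0xF0) >> 4

attr = pack_attr(15, 1)                 # white text on a blue background
assert attr == 0x1F                     # the same value setXY() hard-codes
assert unpack_attr(attr) == (15, 1)
```

Because blink is dropped, all four background bits are free, which is where the full 16 background colors come from.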

In the draw() function, the VGA memory space is read and a surface is blitted. Currently, this surface is a display Surface object, but in the next release this will change to render to a normal Surface so that it can be blitted by the developer onto whatever Surface or display they want. This may cause issues with the mouse emulation, however; I will need to look into that and see how the Pygame mouse works with Surfaces. So, yes, mouse emulation is also present. It displays the familiar block cursor we all remember. There is an API to grab its location on the text display, so that you can determine if a text object or widget has been clicked.

One of the major features still missing is input and output buffering support. At the moment, the current API lets you write directly to the VGA memory buffer to draw text onto the console. This isn't ideal for several reasons: it will not work correctly with stdin/stdout applications, and reading input into a string isn't currently possible. Typing onto the console works, but the input is not buffered, and the user may edit anything on the screen. Proper stdin and stdout support is planned for the next release. This will enable the many standard Python functions which require either or both to actually work on the VGA console. That will really open the doors to what will be possible with VGA Console.
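The planned stdout support isn't in this preview, but one plausible approach is a small file-like adapter around the console's existing write() method. This is only a sketch under my own assumptions, not the author's announced API; ConsoleWriter and RecordingConsole are hypothetical names:

```python
class ConsoleWriter:
    """File-like adapter: gives any object exposing write(text) enough of
    the stream interface for print(file=...) to target it."""
    def __init__(self, console):
        self.console = console

    def write(self, text):
        # The listing's type() method treats chr(13) ('\r') as newline,
        # so translate Python's '\n' before handing text to the console.
        self.console.write(text.replace('\n', '\r'))

    def flush(self):
        pass  # nothing is buffered in this sketch

# Stand-in for VGAConsole that just records what it was asked to draw.
class RecordingConsole:
    def __init__(self):
        self.received = []
    def write(self, text):
        self.received.append(text)

console = RecordingConsole()
print('C:\\>', file=ConsoleWriter(console))
assert ''.join(console.received) == 'C:\\>\r'
```

A real implementation would presumably also need an input buffer for the stdin side, which is the harder half of the problem.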

Categories: FLOSS Project Planets

Dirk Eddelbuettel: If only there was a Romeo somewhere ...

Planet Debian - Sun, 2014-12-21 12:04

Attention: rant coming. You have been warned, and may want to tune out now.

So the top of my Twitter timeline just had a re-tweet posting to this marvel on the state of Julia. I should have known better than to glance at it as it comes from someone providing (as per the side-bar) Thought leadership in Big Data, systems architecture and more. Reading something like this violates the first rule of airport book stores: never touch anything from the business school section, especially on (wait for it) leadership or, worse yet, thought leadership.

But it is Sunday, my first cup of coffee still warm (after finalising two R package updates on GitHub, and one upload to CRAN) and so I read on. Only to be mildly appalled by the usual comparison to R based on the same old Fibonacci sequence.

Look, I am as guilty as anyone of using it (for example all over chapter one of my Rcpp book), but at least I try to stress each and every time that this is kicking R where it is down as its (fairly) poor performance on function calls (that is well-known and documented) obviously gets aggravated by recursive calls. But hey, for the record, let me restate those results. So Julia beats R by a factor of 385. But let's take a closer look.
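The pathology is the same in any language: the naive recursion makes an exponential number of calls, so per-call overhead dominates everything else. A quick Python sketch (my own illustration, not from the post) counts the calls for n=25:

```python
def fib_calls(n):
    """Return (fib(n), total number of recursive invocations made)
    by the naive doubly-recursive Fibonacci."""
    if n < 2:
        return n, 1
    a, calls_a = fib_calls(n - 1)
    b, calls_b = fib_calls(n - 2)
    return a + b, calls_a + calls_b + 1

value, calls = fib_calls(25)
assert value == 75025      # fib(25)
assert calls == 242785     # nearly a quarter of a million calls
```

Almost a quarter of a million function invocations just to compute fib(25), which is why any language with expensive calls looks terrible on this benchmark.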

For n=25, I get R to take 241 milliseconds---as opposed to his 6905 milliseconds---simply by using the same function I use in every workshop, e.g. as last used at Penn in November, which does not use the dreaded ifelse operator:

fibR <- function(n) {
    if (n < 2) return(n)
    return(fibR(n-1) + fibR(n-2))
}

Switching that to the standard C++ three-liner using Rcpp

library(Rcpp)
cppFunction('int fibCpp(int n) {
    if (n < 2) return(n);
    return(fibCpp(n-1) + fibCpp(n-2));
}')

and running a standard benchmark suite gets us the usual result of

R> library(rbenchmark)
R> benchmark(fibR(25), fibCpp(25), order="relative")[,1:4]
        test replications elapsed relative
2 fibCpp(25)          100   0.048    1.000
1   fibR(25)          100  24.674  514.042
R>

So for the record as we need this later: that is 48 milliseconds for 100 replications, or about 0.48 milliseconds per run.

Now Julia. And on my standard Ubuntu server running the current release, 14.10:

edd@max:~$ julia
ERROR: could not open file /home/edd//home/edd//etc/julia/juliarc.jl
 in include at boot.jl:238
edd@max:~$

So wait, what? You guys can't even ensure a working release on what is probably the most popular and common Linux installation? And I get to that after reading a post on the importance of "Community, Community, Community" and you can't even make sure this works on Ubuntu? Really?

So a little bit of googling later, I see that julia -f is my friend for this flawed release, and I can try to replicate the original timing

edd@max:~$ julia -f
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" to list help topics
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |  Version 0.2.1 (2014-02-11 06:30 UTC)
 _/ |\__'_|_|_|\__'_|  |
|__/                   |  x86_64-linux-gnu

julia> fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)
fib (generic function with 1 method)

julia> @elapsed fib(25)
0.002299559

julia>

Interestingly, the post's author claims 18 milliseconds. I see 2.3 milliseconds here. Maybe someone is having a hard time comparing things to the right of the decimal point. Or maybe his computer is an order of magnitude slower than mine. The more important thing is that Julia is of course faster than R (no surprise: LLVM at work) but also still a lot slower than a (trivial to write and deploy) C++ function. Nothing new here.

So let's recap. Comparison to R was based on a flawed version of a function we only use when we deliberately want to put R down, can be improved significantly when using a better implementation, results are still off by an order of magnitude from what was reported ("math is hard"), and the standard C / C++ way of doing things is still several times faster than our new saviour language---which I can't even launch on the current version of one of the more common free operating systems. Ok then. Someone please wake me up in a few years and I will try again.

Now, coming to the end of the rant, I should really stress that of course I too hope that Julia succeeds. Every user pulled away from Matlab is a win for all of us. We're in this together and the endless navel gazing among ourselves is so tiresome and irrelevant. And as I argue here, even more so when we stick to unfair comparisons as well as badly chosen implementation details.

What matters are wins against the likes of Matlab, Excel, SAS and so on. Let's build on our joint strength. I am sure I will use Julia one day, and I am grateful for everyone helping with it---as a lot of help seems to be needed. In the meantime, and with CRAN at 6130 packages that just work, I'll continue to make use of this amazing community and to try my bit to help it grow and prosper. As part of our joint community.

Categories: FLOSS Project Planets

Garvit Khatri (garvitdelhi)

Planet KDE - Sun, 2014-12-21 04:50
This month so far was full of shocking news. My mentor Anuj Pahuja informed me that someone had already ported KTurtle, the app I was supposed to port during SoK. He said that this work was done while I was busy with my annual examinations. But I thank my mentor a lot for standing by me and helping me get another app to port. So now I am working on porting KNetWalk, which belongs to KDE Games.

So far I have ported the build system, and it has already been pushed to the frameworks branch of KNetWalk. You can have a look at my progress at http://quickgit.kde.org/?p=knetwalk.git on the frameworks branch.

The app builds and installs successfully on KF5 on my local system. I will push other changes soon.

Now I am looking forward to porting the UI, as the app is currently crashing because the UI is not yet ported.

CMake log:

garvit@beast:~/dev/sok/knetwalk/build$ cmake ../
-- The C compiler identification is GNU 4.9.1
-- The CXX compiler identification is GNU 4.9.1
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Found KF5CoreAddons: /usr/lib/x86_64-linux-gnu/cmake/KF5CoreAddons/KF5CoreAddonsConfig.cmake (found version "5.3.0")
-- Found KF5Config: /usr/lib/x86_64-linux-gnu/cmake/KF5Config/KF5ConfigConfig.cmake (found version "5.3.0")
-- Found KF5ItemModels: /usr/lib/x86_64-linux-gnu/cmake/KF5ItemModels/KF5ItemModelsConfig.cmake (found version "5.3.0")
-- Found KF5WidgetsAddons: /usr/lib/x86_64-linux-gnu/cmake/KF5WidgetsAddons/KF5WidgetsAddonsConfig.cmake (found version "5.3.0")
-- Found KF5WindowSystem: /usr/lib/x86_64-linux-gnu/cmake/KF5WindowSystem/KF5WindowSystemConfig.cmake (found version "5.3.0")
-- Found KF5Codecs: /usr/lib/x86_64-linux-gnu/cmake/KF5Codecs/KF5CodecsConfig.cmake (found version "5.3.0")
-- Found KF5Archive: /usr/lib/x86_64-linux-gnu/cmake/KF5Archive/KF5ArchiveConfig.cmake (found version "5.3.0")
-- Found KF5DBusAddons: /usr/lib/x86_64-linux-gnu/cmake/KF5DBusAddons/KF5DBusAddonsConfig.cmake (found version "5.3.0")
-- Found KF5DNSSD: /usr/lib/x86_64-linux-gnu/cmake/KF5DNSSD/KF5DNSSDConfig.cmake (found version "5.3.0")
-- Found Gettext: /usr/bin/msgmerge (found version "0.19.2")
-- Found PythonInterp: /usr/bin/python (found version "2.7.8")
-- Found KF5Declarative: /usr/lib/x86_64-linux-gnu/cmake/KF5Declarative/KF5DeclarativeConfig.cmake (found version "5.3.0")
-- Found KF5I18n: /usr/lib/x86_64-linux-gnu/cmake/KF5I18n/KF5I18nConfig.cmake (found version "5.3.0")
-- Found KF5GuiAddons: /usr/lib/x86_64-linux-gnu/cmake/KF5GuiAddons/KF5GuiAddonsConfig.cmake (found version "5.3.0")
-- Found KF5Service: /usr/lib/x86_64-linux-gnu/cmake/KF5Service/KF5ServiceConfig.cmake (found version "5.3.0")
-- Found KF5ConfigWidgets: /usr/lib/x86_64-linux-gnu/cmake/KF5ConfigWidgets/KF5ConfigWidgetsConfig.cmake (found version "5.3.0")
-- Found KF5ItemViews: /usr/lib/x86_64-linux-gnu/cmake/KF5ItemViews/KF5ItemViewsConfig.cmake (found version "5.3.0")
-- Found KF5IconThemes: /usr/lib/x86_64-linux-gnu/cmake/KF5IconThemes/KF5IconThemesConfig.cmake (found version "5.3.0")
-- Found KF5Completion: /usr/lib/x86_64-linux-gnu/cmake/KF5Completion/KF5CompletionConfig.cmake (found version "5.3.0")
-- Found KF5JobWidgets: /usr/lib/x86_64-linux-gnu/cmake/KF5JobWidgets/KF5JobWidgetsConfig.cmake (found version "5.3.0")
-- Found KF5TextWidgets: /usr/lib/x86_64-linux-gnu/cmake/KF5TextWidgets/KF5TextWidgetsConfig.cmake (found version "5.3.0")
-- Found KF5GlobalAccel: /usr/lib/x86_64-linux-gnu/cmake/KF5GlobalAccel/KF5GlobalAccelConfig.cmake (found version "5.3.0")
-- Found KF5XmlGui: /usr/lib/x86_64-linux-gnu/cmake/KF5XmlGui/KF5XmlGuiConfig.cmake (found version "5.3.0")
-- Found KF5Crash: /usr/lib/x86_64-linux-gnu/cmake/KF5Crash/KF5CrashConfig.cmake (found version "5.3.0")
-- Found KF5Bookmarks: /usr/lib/x86_64-linux-gnu/cmake/KF5Bookmarks/KF5BookmarksConfig.cmake (found version "5.3.0")
-- Found KF5KIO: /usr/lib/x86_64-linux-gnu/cmake/KF5KIO/KF5KIOConfig.cmake (found version "5.3.0")
-- Found KF5NotifyConfig: /usr/lib/x86_64-linux-gnu/cmake/KF5NotifyConfig/KF5NotifyConfigConfig.cmake (found version "5.3.0")
-- Found KF5NewStuff: /usr/lib/x86_64-linux-gnu/cmake/KF5NewStuff/KF5NewStuffConfig.cmake (found version "5.3.0")
-- Found KF5KDELibs4Support: /usr/lib/x86_64-linux-gnu/cmake/KF5KDELibs4Support/KF5KDELibs4SupportConfig.cmake (found version "5.3.0")
-- Found KF5: success (found version "5.3.0") found components:  CoreAddons Config ItemModels WidgetsAddons WindowSystem Codecs Archive Config DBusAddons DNSSD Declarative I18n GuiAddons Service ConfigWidgets ItemViews IconThemes Completion JobWidgets TextWidgets GlobalAccel XmlGui Crash Bookmarks KIO NotifyConfig NewStuff KDELibs4Support
-- Looking for __GLIBC__
-- Looking for __GLIBC__ - found
-- Performing Test _OFFT_IS_64BIT
-- Performing Test _OFFT_IS_64BIT - Success
-- Configuring done
-- Generating done
-- Build files have been written to: /home/garvit/dev/sok/knetwalk/build

The make log can be found here: https://gist.github.com/garvitdelhi/0e21a095dcfc8cfef170
Categories: FLOSS Project Planets

SAR GUI: Export sar / sysstat reports as PDF using kSar

LinuxPlanet - Sat, 2014-12-20 23:54
Hello! If you are a Linux administrator you must know what sar is: sar is a very useful utility for Linux administrators to get reports of CPU usage, and you can monitor I/O, CPU usage and idle system state with it. This article will help you read / export sar reports in graphical mode. […]
Categories: FLOSS Project Planets

Steve McIntyre: UEFI Debian installer work for Jessie, part 2

Planet Debian - Sat, 2014-12-20 20:51

A month ago, I wrote about my plans for improved (U)EFI support in Jessie. It's about time I gave an update on progress!

I spoke about adding support for installing grub-efi into the removable media path (#746662). That went into Debian's grub packages already, but there were a couple of bugs. First of all, the code could end up prompting people about EFI questions even when they didn't have any grub-efi packages installed. Doh! (#773004). Then there was an unexpected bug with case-insensitive file name handling on FAT/VFAT filesystems (#773092). I've posted (and tested!) patches to fix both, hopefully in an upload any day now.

Next, I mentioned getting i386 UEFI support going again. This is a major feature that a lot of people have been asking for. It's also going to involve quite a bit of effort...

Our existing (amd64) UEFI-capable images in Debian use the standard x86 El Torito CD boot method, with two boot images provided. One of these images gives us the traditional isolinux BIOS-boot support. The second option is an alternate El Torito image, including a 64-bit version of grub-efi. For most machines, this works just fine - the BIOS or UEFI firmware will automatically pick the correct image and everybody's happy. This even works on our multi-arch i386/amd64 CDs and DVDs - isolinux will boot either kernel from the first El Torito image, while the alternate UEFI image is amd64-only.

However, I can now see that there's been a long-standing issue with those multi-arch images, and it's to do with Macs. On the advice of Matthew Garrett, I've borrowed an old 32-bit Intel Mac to help testing, and it's quite instructive in terms of buggy firmware! The firmware on older 32-bit Intel Macs crashes hard when it detects more than one El Torito boot image, and I've now seen this happen myself. I've not had any bug reports about this, so I can only assume that we haven't had many users try that image. As far as I can tell, they've been using the normal i386 images in BIOS boot mode, and then struggling to get bootloaders working afterwards. There are a number of different posts on the net explaining how to do that. That's OK, but...

If I now start adding 32-bit UEFI support to our standard set of i386 images, this will prevent users of old Macs from installing Debian. I could just say "screw it" and decide to not support those users at all, but that's not a very nice thing to do. If we want to continue to support them and add 32-bit UEFI support, I'll have to add another flavour of i386 image, either a "Mac special" or a "32-bit UEFI special". I'm not keen on doing that if I can avoid it, but the two options are mutually exclusive. Given the Mac problem is only on older hardware which (hopefully!) will be dying out, I'll probably pick that one as the special-case CD, and I'll make an extra netinst flavour only for those users to boot off.

So, I've started playing with i386 UEFI stuff in the last couple of weeks too. I almost immediately found some showstopper behaviour bugs in the i386 versions of efivar and efibootmgr (#773412 and #773007), but I've debugged these with our Debian maintainer (Jared Dominguez) and the upstream developer (Peter Jones) and fixes should be in Jessie very soon.

As I mentioned last month, the machines that most people have been requesting support for are the latest Bay Trail-based laptops and tablets. They are using 64-bit Intel Atom CPUs, but are crippled with 32-bit UEFI firmware with no BIOS compatibility mode. This makes for some interesting issues. It's probably impossible to get a true answer as to why these machines are so broken by design, but there are several rumours. As far as I can see, most of these machines seem to ship with a limited version of 32-bit Windows 8.1. 32-bit Windows is smaller than 64-bit Windows, so fits much better in limited on-board storage space. But 32-bit Windows won't boot from 64-bit UEFI, so the firmware needed buggering to match. Ugh!

To support these Bay Trail machines properly, we'll want to add a 32-bit UEFI installation option to our 64-bit images. I can tweak our CDs so that both 32-bit and 64-bit versions of grub-efi are included, and the on-board UEFI will load the one it needs. Then I'll need to make sure that all the 64-bit images also include grub-efi-ia32-bin from now on. With some extra logic, we'll need to remember that these new machines need that package installing instead of grub-efi-amd64-bin. It shouldn't be too hard, but let's see! :-)

So, I've been out and bought one of these machines, an Asus X205TA. Lucas agreed that Debian will reimburse me (thanks!), so I'm not stuck with spending my own money on an otherwise unwanted machine! I can see via Google that none of the mainstream Linux distros support the Bay Trail machines fully yet, so there's not a lot of documentation yet. Initial boot on the new machine was easy using a quick-hack i386 UEFI image on USB, but from there everything went downhill quickly. I'll need to investigate some more, but the laptop's own keyboard and trackpad are not detected by the installer system. Neither is its built-in WiFi. Yay! I had to go and dig out a USB hub to connect the installer image USB key, a keyboard, mouse and a USB hard drive to the machine, as it only has 2 USB ports. I've taken a complete backup of the on-board 32GB flash before I start experimenting, so I can restore the machine back to its virgin state for future testing.

I guess I now have a project to keep me busy over Christmas...!

In other news, we've been continuing work on UEFI support for and within the new arm64 port. My ARM/Linaro colleague Leif Lindholm has been back-porting upstream kernel features and bug fixes to make d-i work, and filing Debian bugs when silly things break on arm64 because people don't think about other architectures (e.g. #773311, doh!). As there are more and more people interested in (U)EFI support these days, I've also proposed that we create a new debian-efi mailing list to help focus discussion. See #773327 and follow up there if you think you'd use the list too!

You can help! Same as 2 years ago, I'll need help testing some of these images. For the 32-bit UEFI support, I now have some relevant hardware myself, but testing on other machines too will be very important! I'll start pushing unofficial Jessie EFI test images shortly - watch this space.

Categories: FLOSS Project Planets

Calvin Spealman: Handmade Hero

Planet Python - Sat, 2014-12-20 18:07

Handmade hero looks like an amazing project.

If you're a long time game developer, new to it, or develop in some other discipline I think you have to respect the goals Casey has laid out for himself here. Developing an old style game from scratch live every weeknight is both a wonderful personal project and a beautiful piece of art.

Check it out!
Categories: FLOSS Project Planets

Plasma - Calling Qt 5.4 Testers

Planet KDE - Sat, 2014-12-20 17:48

Plasma 5 pushes QtQuick to the limits. It sounds like a cheesy marketing line, but it's true. Unfortunately this isn't a good thing. Although Plasma 5.1 is somewhat stable we have had some crashers, and whilst we've worked hard to fix the ones that are ours a sizeable number of these were caused by problems deep inside Qt.

Fortunately Qt 5.4 has just been released. It contains countless bug fixes that really improve the situation; several of which were even written by me and the rest of the Plasma crew.

We need people with Qt 5.4 (hey all you Arch people) to help go through all open crash reports and test if they still happen since upgrading.

I don't like closing bugs with a vague "it seems to work for me" without getting at least a second opinion; I may be overlooking something, and it's not fair on the original reporter.

I've added a quick link to the high priority candidates

And feel free to help go through the rest of our list

So far everything is looking very positive towards having a completely rock solid Plasma 5.2 in January; let's make it happen.

Categories: FLOSS Project Planets

Ionel Cristian: Compiling Python extensions on Windows

Planet Python - Sat, 2014-12-20 17:00
For Python 2.7*

For Python 2.7 you need to get Microsoft Visual C++ Compiler for Python 2.7. It's a special package made by Microsoft that has all the stuff. It is supported since setuptools 6.0 [1].

Unfortunately the latest virtualenv, 1.11.6 as of now, still bundles setuptools 3.6. This means that if you try to run python setup.py build_ext in a virtualenv it will fail, because setuptools can't detect the compiler. The solution is to force-upgrade setuptools, for example: pip install "setuptools>=6.0".
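As an aside on that pin: version strings have to be compared numerically, not lexically, which is why ">=6.0" does the right thing even once setuptools reaches double-digit versions. A tiny sketch (parse_version here is a toy helper of my own, not the setuptools API; real code should use pkg_resources or the packaging module):

```python
def parse_version(v):
    """Minimal parser for dotted numeric versions, for illustration only."""
    return tuple(int(part) for part in v.split('.'))

# setuptools >= 6.0 is the release that detects the VC++ for Python 2.7 package.
assert parse_version('6.0') > parse_version('3.6')

# Numeric comparison matters: plain string comparison ranks '10.0' below '6.0'.
assert parse_version('10.0') > parse_version('6.0')
assert ('10.0' > '6.0') is False
```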

If you're using tox then just add it to your deps. Example:

[testenv]
deps = setuptools>=6.0

This seems to work fine for 64bit extensions.


Probably works for Python 3.3 too.

For Python 3.4*

This one gave me a headache. I've tried to follow this guide but had some problems with getting 64bit extensions to compile. In order to get it to work you need to jump through these hoops:

  1. Install Visual C++ 2010 Express.

  2. Install Windows SDK for Visual Studio 2010 (also known as the Windows SDK v7.1). This is required for 64bit extensions.

    Before installing the Windows SDK v7.1 (these gave me a bad time):

    • Do not install Microsoft Visual Studio 2010 Service Pack 1 yet. If you did then you have to reinstall everything. Uninstalling the Service Pack removes components so you have to reinstall Visual C++ 2010 Express again.
    • Remove all the Microsoft Visual C++ 2010 Redistributable packages from Control Panel\Programs and Features.

    If you don't do those then the install is going to fail with an obscure "Fatal error during installation" error.

  3. Create a vcvars64.bat file in C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64 that contains [2]:

    CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64

    If you don't do this you're going to get a mind-boggling error like this:

    Installing collected packages: whatever
      Running setup.py develop for whatever
        building 'whatever.cext' extension
        Traceback (most recent call last):
          File "<string>", line 1, in <module>
          File "C:/whatever\setup.py", line 112, in <module>
            for root, _, _ in os.walk("src")
          File "c:\python34\Lib\distutils\core.py", line 148, in setup
            dist.run_commands()
          File "c:\python34\Lib\distutils\dist.py", line 955, in run_commands
            self.run_command(cmd)
          File "c:\python34\Lib\distutils\dist.py", line 974, in run_command
            cmd_obj.run()
          File "C:\whatever\.tox\3.4\lib\site-packages\setuptools\command\develop.py", line 32, in run
            self.install_for_development()
          File "C:\whatever\.tox\3.4\lib\site-packages\setuptools\command\develop.py", line 117, in install_for_development
            self.run_command('build_ext')
          File "c:\python34\Lib\distutils\cmd.py", line 313, in run_command
            self.distribution.run_command(command)
          File "c:\python34\Lib\distutils\dist.py", line 974, in run_command
            cmd_obj.run()
          File "C:/whatever\setup.py", line 32, in run
            build_ext.run(self)
          File "C:\whatever\.tox\3.4\lib\site-packages\setuptools\command\build_ext.py", line 54, in run
            _build_ext.run(self)
          File "c:\python34\Lib\distutils\command\build_ext.py", line 339, in run
            self.build_extensions()
          File "c:\python34\Lib\distutils\command\build_ext.py", line 448, in build_extensions
            self.build_extension(ext)
          File "C:/whatever\setup.py", line 39, in build_extension
            build_ext.build_extension(self, ext)
          File "C:\whatever\.tox\3.4\lib\site-packages\setuptools\command\build_ext.py", line 187, in build_extension
            _build_ext.build_extension(self, ext)
          File "c:\python34\Lib\distutils\command\build_ext.py", line 503, in build_extension
            depends=ext.depends)
          File "c:\python34\Lib\distutils\msvc9compiler.py", line 460, in compile
            self.initialize()
          File "c:\python34\Lib\distutils\msvc9compiler.py", line 371, in initialize
            vc_env = query_vcvarsall(VERSION, plat_spec)
          File "C:\whatever\.tox\3.4\lib\site-packages\setuptools\msvc9_support.py", line 52, in query_vcvarsall
            return unpatched['query_vcvarsall'](version, *args, **kwargs)
          File "c:\python34\Lib\distutils\msvc9compiler.py", line 287, in query_vcvarsall
            raise ValueError(str(list(result.keys())))
        ValueError: ['path']
        Complete output from command C:\whatever\.tox\3.4\Scripts\python.exe -c "import setuptools, tokenize; __file__='C:/whatever\\setup.py'; exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" develop --no-deps:

    msvc9_support.py will run vcvarsall.bat amd64:

    c:\Program Files (x86)\Microsoft Visual Studio 10.0\VC>vcvarsall.bat amd64
    The specified configuration type is missing. The tools for the configuration might not be installed.

    Basically that is caused by vcvarsall.bat not being able to run vcvars64.bat because, surprise, the Windows SDK is missing that file.

  4. Now everything should work, go and try py -3 setup.py clean --all build_ext --force.

  5. Install Microsoft Visual Studio 2010 Service Pack 1. This is optional; however, if you do this, you also have to do the following:

  6. Install Microsoft Visual C++ 2010 Service Pack 1 Compiler Update for the Windows SDK 7.1.

[1] Support added in https://bitbucket.org/pypa/setuptools/issue/258
[2] See: http://stackoverflow.com/a/26513378
Categories: FLOSS Project Planets

Gregor Herrmann: GDAC 2014/20

Planet Debian - Sat, 2014-12-20 16:56

today seen on IRC: a maintainer was surprised & happy that their package had migrated to testing without having filed an unblock request. once again an example of the awesome work of the release team which pro-actively unblocked the package. – a big thank you to the members of the release team!

this posting is part of GDAC (gregoa's debian advent calendar), a project to show the bright side of debian & why it's fun for me to contribute.

Categories: FLOSS Project Planets

Invent with Python: Translate Your Python 3 Program with the gettext Module

Planet Python - Sat, 2014-12-20 16:24

You've written a Python 3 program and want to make it available in other languages. You could duplicate the entire code-base, then go painstakingly through each .py file and replace any text strings you find. But this would mean you have two separate copies of your code, which doubles your workload every time you need to make a change or fix a bug. And if you want your program in other languages, it gets even worse.

Fortunately, Python provides a solution with the gettext module.

A Hack Solution

You could hack together your own solution. For example, you could replace every string in your program with a function call (with a short function name, like _()) which will return the string translated into the correct language. For example, if your program was:

print('Hello world!')

...you could change this to:

print(_('Hello world!'))

...and the _() function could return the translation for 'Hello world!' based on what language setting the program had. For example, if the language setting was stored in a global variable named LANGUAGE, the _() function could look like this:

def _(s):
    spanishStrings = {'Hello world!': 'Hola Mundo!'}
    frenchStrings = {'Hello world!': 'Bonjour le monde!'}
    germanStrings = {'Hello world!': 'Hallo Welt!'}
    if LANGUAGE == 'English':
        return s
    if LANGUAGE == 'Spanish':
        return spanishStrings[s]
    if LANGUAGE == 'French':
        return frenchStrings[s]
    if LANGUAGE == 'German':
        return germanStrings[s]

This would work, but you'd be reinventing the wheel. This is pretty much what Python's gettext module does. gettext is a set of tools and file formats created in the early 1990s to standardize software internationalization (also called I18N). gettext was designed as a system for all programming languages, but we'll focus on Python in this article.

The Example Program

Say you have a simple "Guess the Number" game written in Python 3 that you want to translate. The source code to this program is here. There are four steps to internationalizing this program:

  1. Modify the .py file's source code so that the strings are passed to a function named _().
  2. Use the pygettext.py script that comes installed with Python to create a "pot" file from the source code.
  3. Use the free cross-platform Poedit software to create the .po and .mo files from the pot file.
  4. Modify your .py file's source code again to import the gettext module and set up the language setting.
Step 1: Add the _() Function

First, go through all of the strings in your program that will need to be translated and replace them with _() calls. The gettext system for Python uses _() as the generic name for getting the translated string since it is a short name.

Note that using string formatting instead of string concatenation will make your program easier to translate. For example, using string concatenation your program would have to look like this:

print('Good job, ' + myName + '! You guessed my number in ' + guessesTaken + ' guesses!')
print(_('Good job, ') + myName + _('! You guessed my number in ') + guessesTaken + _(' guesses!'))

This results in three separate strings that need to be translated, as opposed to the single string needed in the string formatting approach:

print('Good job, %s! You guessed my number in %s guesses!' % (myName, guessesTaken))
print(_('Good job, %s! You guessed my number in %s guesses!') % (myName, guessesTaken))

When you've gone through the "Guess the Number" source code, it will look like this. You won't be able to run this program since the _() function is undefined. This change is just so that the pygettext.py script can find all the strings that need to be translated.
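If you want the marked-up script to stay runnable while you work on the later steps, one common trick (a sketch, not part of the original tutorial) is to define a temporary no-op _() that just returns its argument; gettext's install() will later replace it with the real translation lookup:

```python
# Temporary no-op _() so the marked-up script still runs before any
# translation catalogs exist. It simply returns the string unchanged.
def _(s):
    return s

print(_('Hello! What is your name?'))  # prints the untranslated English
```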

Step 2: Extract the Strings Using pygettext.py

In the Tools/i18n directory of your Python installation (C:\Python34\Tools\i18n on Windows) is the pygettext.py script. While the xgettext unix command parses C/C++ and several other languages' source code for translatable strings, pygettext.py knows how to parse Python source code. It will find all of these strings and produce a "pot" file.

On Windows, I've run this script like so:

C:\>py -3.4 C:\Python34\Tools\i18n\pygettext.py -d guess guess.py

This creates a pot file named guess.pot. This is just a normal plaintext file that lists all the translatable strings it found in the source code by searching for _() calls. You can view the guess.pot file here.
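For reference, a pot file is made up of msgid/msgstr entry pairs, with the msgstr left empty for the translator to fill in. An illustrative entry (not copied verbatim from guess.pot; the line number is made up) looks like this:

```
#: guess.py:12
msgid "Hello! What is your name?"
msgstr ""
```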

Step 3: Translate the Strings using Poedit

You could fill in the translations using a text editor, but the free Poedit software makes it easier. Download it from http://poedit.net. Select File > New from POT/PO file... and select your guess.pot file.

Poedit will ask what language you want to translate the strings to. For this example, we'll use Spanish:

Then fill in the translations. (I'm using http://translate.google.com, so it probably sounds a bit odd to actual Spanish-speakers.)

And now save the file in its gettext-formatted folder. Saving will create the .po file (a human-readable text file identical to the original .pot file, except with the Spanish translations) and a .mo file (a machine-readable version which the gettext module will read). These files have to be saved in a certain folder structure for gettext to be able to find them. It looks like this (say I have "es" Spanish files and "de" German files):

./guess.py
./guess.pot
./locale/es/LC_MESSAGES/guess.mo
./locale/es/LC_MESSAGES/guess.po
./locale/de/LC_MESSAGES/guess.mo
./locale/de/LC_MESSAGES/guess.po

These two-character language names like "es" for Spanish and "de" for German are called ISO 639-1 codes and are standard abbreviations for languages. You don't have to use them, but it makes sense to follow that naming standard.
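If you prefer, you can create that folder tree from Python instead of by hand; a minimal sketch (the language codes here are just the two examples above):

```python
import os

# Create the locale tree for the Spanish and German catalogs.
# exist_ok=True avoids an error if the folders are already there.
for lang in ('es', 'de'):
    os.makedirs(os.path.join('locale', lang, 'LC_MESSAGES'), exist_ok=True)
```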

Step 4: Add gettext Code to Your Program

Now that you have the .mo file that contains the translations, modify your Python script to use it. Add the following to your program:

import gettext
es = gettext.translation('guess', localedir='locale', languages=['es'])
es.install()

The first argument 'guess' is the "domain", which basically means the "guess" part of the guess.mo filename. The localedir is the directory location of the locale folder you created. This can be either a relative or absolute path. The 'es' string describes the folder under the locale folder. The LC_MESSAGES folder is a standard name that gettext expects under each language folder.

The install() method will cause all the _() calls to return the Spanish translated string. If you want to go back to the original English, just assign a lambda function value to _ that returns the string it was passed:

import gettext
es = gettext.translation('guess', localedir='locale', languages=['es'])
es.install()
print(_('Hello! What is your name?')) # prints Spanish
_ = lambda s: s

print(_('Hello! What is your name?')) # prints English
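Note that gettext.translation() raises FileNotFoundError if no matching .mo file exists. Passing fallback=True (a standard parameter of the function) sidesteps this; here is a sketch using a made-up 'xx' language code to force the fallback:

```python
import gettext

# fallback=True returns a NullTranslations object instead of raising an
# exception when no matching .mo file is found, so _() simply passes
# strings through untranslated and the program still runs in English.
trans = gettext.translation('guess', localedir='locale',
                            languages=['xx'], fallback=True)
trans.install()
print(_('Hello! What is your name?'))  # prints the English original
```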

You can view the translation-ready source code for the "Guess the Number" game. If you want to run this program, download and unzip this zip file with its locale folders and .mo file set up.

Further Reading

I am by no means an expert on I18N or gettext, so please leave comments if I'm breaking any best practices in this tutorial. Most of the time your software will not switch languages while it's running; instead it will read one of the LANGUAGE, LC_ALL, LC_MESSAGES, and LANG environment variables to figure out the locale of the computer it's running on. I'll update this tutorial as I learn more.
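A sketch of what reading those environment variables might look like (if you omit the languages argument entirely, gettext consults these same variables itself, so this is mainly useful when you want to massage the value first):

```python
import gettext
import os

# Pick the user's language from the standard locale environment
# variables; fall back to untranslated English if none is set or no
# catalog matches.
lang = (os.environ.get('LANGUAGE') or os.environ.get('LC_ALL')
        or os.environ.get('LC_MESSAGES') or os.environ.get('LANG') or 'en')
lang = lang.split('.')[0]  # strip an encoding suffix like 'es_ES.UTF-8'
trans = gettext.translation('guess', localedir='locale',
                            languages=[lang], fallback=True)
trans.install()
```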

Categories: FLOSS Project Planets