Planet KDE


CMake and library properties

Wed, 2014-07-09 01:30

When writing libraries with CMake, you need to set a couple of properties, especially the VERSION and SOVERSION properties. For library libbar, it could look like:

set_property(TARGET bar PROPERTY VERSION "0.0.0")
set_property(TARGET bar PROPERTY SOVERSION 0)

This will give you a libbar.so => libbar.so.0 => libbar.so.0.0.0 symlink chain, with a SONAME of libbar.so.0 encoded into the library.

The SOVERSION target property controls the number in the middle part of the symlink chain, as well as the numeric part of the SONAME encoded into the library. The VERSION target property controls the last part of the last element of the symlink chain.

This also means that the first part of VERSION should match what you put in SOVERSION to avoid surprises for others and for the future you.

Both these properties control “Technical parts” and should be looked at from a technical perspective. They should not be used for the ‘version of the software’, but purely for the technical versioning of the library.

In the kdeexamples git repository, it is handled like this:


And a bit later:

set_target_properties(bar PROPERTIES VERSION ${BAR_VERSION})

which is a fine way to ensure that things actually match.
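A fuller sketch of the pattern (the version numbers and the BAR_SOVERSION/BAR_VERSION variable names here are illustrative, not the actual kdeexamples code):

```cmake
# Keep the first component of VERSION equal to SOVERSION, so the
# SONAME (libbar.so.0) matches the start of the full version.
set(BAR_SOVERSION 0)
set(BAR_VERSION "${BAR_SOVERSION}.2.1")

add_library(bar SHARED bar.cpp)
set_target_properties(bar PROPERTIES
    VERSION ${BAR_VERSION}      # last element of the chain: libbar.so.0.2.1
    SOVERSION ${BAR_SOVERSION}  # middle link and SONAME: libbar.so.0
)
```

Deriving VERSION from the same variable as SOVERSION makes it impossible for the two to drift apart.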

Oh, and these components are not something that should be inherited from external projects.

So people, please be careful to use these correctly.

Categories: FLOSS Project Planets

Qt5/KDE Frameworks porting steps

Wed, 2014-07-09 00:42

As I said in my last post, I would elaborate on how the porting of libkeduvocdocument (name still pending) from Qt4 and kdelibs4 to Qt5 and KDE Frameworks happened.

Commits can be seen here, but it went like this:
1. Change CMakeLists.txt to look for frameworks and Qt5 packages.
2. Try to build and fix any errors, all while checking the Porting Notes.
3. Port away from deprecated methods.
4. Port away from kdelibs4support.

I forget which part of the above involved each of these, but this is what was changed:
  • Ported from KUrl to QUrl.
  • Ported from KStandardDirs to QStandardPaths.
  • Ported from KGlobal::locale() to QLocale.
  • Ported away from other deprecated methods and classes.
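For illustration, the typical shape of these changes looks roughly like the following (a hedged before/after sketch, not actual libkeduvocdocument code; the file names and paths are made up):

```cpp
#include <QUrl>
#include <QStandardPaths>
#include <QLocale>

void portExample()
{
    // Before (Qt4/kdelibs4 -- these only compile against kdelibs4support):
    //   KUrl url("vocab.kvtml");
    //   QString path = KStandardDirs::locate("data", "kvtml/en/animals.kvtml");
    //   QString lang = KGlobal::locale()->language();

    // After (Qt5/KDE Frameworks):
    QUrl url = QUrl::fromLocalFile(QStringLiteral("vocab.kvtml"));
    QString path = QStandardPaths::locate(QStandardPaths::GenericDataLocation,
                                          QStringLiteral("kvtml/en/animals.kvtml"));
    QString lang = QLocale().name();
}
```

Note that the replacements are not always drop-in: QLocale().name() returns something like "en_US" while KGlobal::locale()->language() returned a language code, so each call site needs checking.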

So rinse and repeat until it's in a state where you are happy with it.
Note that step 4 above isn't strictly necessary; it is similar to porting Qt4 applications away from Qt3Support (sadly, some KDE 4 applications never were ported away from Qt3Support). Yes KMouth, I'm looking at you.

Categories: FLOSS Project Planets

GSOC 2014 KDE KPeople AddressBook Status

Wed, 2014-07-09 00:28

I am working on KPeople AddressBook for KDE in GSoC 2014.

The features of the application are to:

  1. Show basic details of a Person.
  2. Show Emails of a selected Person.
  3. Show notes for a contact, or change them.
  4. Show Chats of a selected Person.
  5. Show Events of a selected Person.
  6. Show Photos (currently for Facebook contacts) of a selected Person.
  7. Show Google Drive Files of a selected Person.


Below are the Screenshots of the application showing current progress.

The Details page looks like this. Some details have been hidden for privacy reasons.

Mails for a selected person. Double-clicking a mail opens it in KMail, as shown below.

Mail View

Creating and saving a note for a selected person. If a note already exists, you can edit and save it to update the note.

Changes to the note are reflected in the About page.

Currently I am working on chats. It's great to work with the KDE community; they are always available to help you.

Tagged: gsoc, kde, kdepim, kpeople
Categories: FLOSS Project Planets

KDE/4.14 branch forked

Tue, 2014-07-08 19:05

KDE/4.14 branch forked, master is now open. Next applications release will be a kdelibs4 and KF5 mix!

Categories: FLOSS Project Planets

The one donation you will want to make today

Tue, 2014-07-08 18:26

In the first week of August 45 KDE people will meet at Randa in the Swiss mountains. They will spend a week of their free time and an uncountable amount of passion and dedication to work on free software. It needs money to bring them together and make the best out of their energy. You can help. We are running a campaign to make this happen. It ends today. Please donate now.

I had the opportunity to be at Randa in 2011. I have been at lots of KDE sprints over the years. Randa is one of the very special ones. There is an amazing level of energy, the buzzing atmosphere of getting things done, a deep sense of purpose. Randa is a good place to create free software.

Part of that is the environment. In the middle of the mountains, with not much around other than the impressive nature of the Swiss Alps, you feel physically focused on what's important. Everybody is living in the same house for a week: eating, sleeping, and hacking. There are no distractions; there is a quietness which is inspiring.

Another part is the deep commitment of Mario, the organizer of the meeting. He puts in a lot of personal energy; he even dragged in his family and friends, and equipped the house with WLAN. During the meeting he tries hard to create the best possible environment for all the volunteers who come to Randa, so they can focus on creating free software and everything around it.

With this in place, magic happens. KDE Frameworks 5 was started at Randa. The famous tier diagram was created there. One of the projects I have been working on, Inqlude, originated there. Sebas came up with the name, the idea was discussed and prototyped, and on the train home I wrote the first version of the web site, inspired and motivated by the energy from the meeting. Lots of other good stuff originated from Randa.

All this is only possible with the help of all of you. Many people put in their passion, energy, vacation, free time. But it also needs money to bring people together who otherwise couldn't afford it. You can help with a donation.

Are you a KDE user? Do you use KDE software for work or leisure? You can help the community to sustain the development of the software you use. You can give something back with a donation.

Are you a KDE contributor or have been one in the past? You have experienced what a difference meetings such as the one at Randa can make. Maybe you have met your employer or your employees at a KDE sprint. Maybe you started as a student in the KDE community and now have found your dream job as a software developer. You know what it means to learn and grow in the KDE community. You can help, you can give back, you can contribute with a donation.

Do you care about freedom? You want to be in control of your software and your privacy. You want to be able to study your software to see what it does, to be able to change it, and to help others by giving them the changes as well. KDE has been committed to this for more than 17 years now: protecting the freedom of users and contributors, and giving everybody access to great technology. Eva Galperin said in her Akademy keynote last year: "You are the developers. You are our last and only hope. Save us". That's what meetings such as Randa help to do. You can support it with a donation.

Please donate now.

Categories: FLOSS Project Planets

Only 34 hours left...

Tue, 2014-07-08 13:57

Krita's 2014 crowdfunding campaign is nearly at the end! Only 30 or so hours left before it comes to a close. We're at nearly 700 backers and close to 19,000 euros! That's including the PayPal backers who started donating after we reached the initial goal.

It's been an exhilarating ride! In these 29 days, thousands of people got to know Krita for the first time, and so many decided that Krita is worth backing... With this result, we'll be able to make Krita 2.9 an absolutely awesome release! Already during the Kickstarter campaign, Dmitry, Sven and the other Krita hackers have added a huge number of features and fixes to Krita!

Let's have a short overview of the most important highlights:

And of course there have been dozens and dozens of bugfixes. Eleven test builds for Windows and Linux and three for OSX! Tens of thousands of downloads... And that in 30 days minus 34 hours!


Let's make the most of the remaining hours! Dmitry is working hard improving performance right now, and reducing memory usage. Sven is fixing bugs right and left. Wolthera is working on a new color selector. Boud is working on portable Windows installers. And more is coming!

Categories: FLOSS Project Planets

QmlWeb is not dead

Tue, 2014-07-08 11:40

It’s been 363 days now since I last blogged about QmlWeb here. I hope to find more time to develop and blog about it, but for now just a quick note to let you all know that QmlWeb is still alive. Development is moving slower than I would wish, but it is moving.

A minesweeper game running in the web browser using QmlWeb.

Maybe you can already guess why I am writing this. Right: I plan to go to Randa in August, in order to continue feature development and to look into some possible real-world applications of QmlWeb. And we are raising money for that. For me as a student, whether I can actually go does depend on whether we raise enough money.

Tomorrow (2014-07-09) is the last day of the fundraising campaign, so if you are interested in supporting my work on QmlWeb, please click here and donate whatever you can, in order to make the Randa development sprint possible – every euro or dollar (or money of any other currency) does make a difference.

Thank you

About QmlWeb:

QML consists of an easy declarative syntax plus JavaScript. QtQuick uses a scene graph to render the UI. A web browser implements JavaScript and a scene graph to render UIs. QmlWeb is the bridge between those worlds: it's a library, itself written purely in JavaScript, that implements a full QML engine and the QtQuick libraries, and uses the browser's technologies to execute JavaScript and render QtQuick UIs in the web browser.

With property bindings and anchor layouting, it already brings most of the convenience of QML to the web development world. Nevertheless, it still has a long way to go until it's ready for production. With your help (be it financial or by contributing) we are able to go that long way.


Categories: FLOSS Project Planets

KDE Telepathy for Plasma 5 - helping was never easier

Tue, 2014-07-08 08:57

After years of active development, the first release of Plasma 5 is finally just around the corner. But where does KDE Telepathy for Plasma 5 stand, you ask? Well, a bit behind.

We have started with porting and have the basic applets more or less ready to be used, but that's just a small part of the whole suite. The contact list, the chat application, the system integration module and all the other parts need to be ported to offer a good IM experience with Plasma 5. And we want to offer that.

In just 31 days there will be a Port-to-Frameworks sprint in the middle of the Swiss Alps, where lots of code will be ported to Frameworks and Qt5, and we would like to add KDE Telepathy to the list of apps being ported and really push the port forward during that week. Why there? The core of our KTp team will be there, people with great knowledge of KF5/Qt5 will be there, people from the Plasma team will be there; in fact, I don't think we could have a better, more focused opportunity than this one. Plus, the meeting being in the mountains makes for a highly productive workplace with little to no distractions.

Please help make this legendary meeting happen this year by donating, and in turn get more awesomeness. So far we have almost enough to get everyone there, but we could use a couple more €s to also give the developers some food during the week (and obviously pay all the other needed expenses ;)). Head over to the fundraising page to read all the details and to become a never-forgotten hero on the list of donors. Oh, and please do it right now, as the fundraising ends tomorrow.

Thank you for every € you send.

Categories: FLOSS Project Planets

Editing Mode for Polygons – GSoC Project Progress

Tue, 2014-07-08 08:14

Hello everyone!

It has been a couple of weeks since I last talked about my progress with my Google Summer of Code project. Since then, the plugin I'm working on has undergone major changes, but most of them are code-related and were intended to increase its internal quality, which will hopefully lead to an increase in productivity on the plugin. I'll present these changes briefly, but I won't insist too much on them since I don't want this post to become too technical (at least not for now). I've also added two important new features to the plugin which make it much more interactive. I'll talk about them in more detail in the next paragraphs.

But first I find it important to remind you (and make aware the people who did not read my previous posts) of the intent of my project – what it is all about and who can use it. The application I contribute to is Marble, a virtual globe and world atlas which hides, behind this short introduction, many useful and even fun features, as well as a dedicated team working on it. What I'm specifically working on is a plugin, named the Annotate Plugin, which allows people to add placemarks and ground overlays onto the map, as well as draw polygons on the Earth's surface (or even Pluto's, if you have a map for it :) ). Why would you need this? Well, maybe you are giving a presentation about human migration and you know that visual effects have a great impact on participants, so you decide to mark particular regions from which many people migrate to other countries. You can even use different colors to give people an insight into their number. Or maybe you are a student who loves technology, and you can make the assessment in that boring subject a little bit more interesting for both you and your teacher.

Hoping that what I'm spending my time on this summer is a little bit clearer, I will proceed with presenting my progress during the last weeks. As I mentioned at the beginning of this post, a great percentage of my work involved rewriting from scratch an important part of the plugin, because its poor design had made extending it very time-consuming. The idea behind the refactoring was that the whole Editing Mode should be organized into states. That is, Add Polygon, Add Polygon Hole, Add Placemark, etc. are all independent states, which means they cannot exist at the same time (or if they could, they should not, since that would cause redundancy). This idea, alongside other improvements, took me almost a week and a half to implement, but now it is fully functional and the code looks much better.

Apart from the code refactoring, I also added two new features which make the editing of a polygon much more interactive and let users draw any shape. The first one I implemented is Merging Nodes. The flow is as follows: enter the Merging Nodes state and then start clicking the nodes you want to merge, in pairs. Be aware that it is not possible to merge two nodes from two different polygons, two nodes from two different inner boundaries of the same polygon, or a node from a polygon's outer boundary with a node from one of its inner boundaries.

The second feature I implemented is Adding Nodes, which introduces something totally new to the Editing Mode: the possibility of adding new nodes to an already drawn polygon. The flow here is: enter the Adding Nodes state; when hovering over the middle of any polygon edge, a virtual node appears (and disappears when you move the cursor away from it). To make it a real node, simply click it, and immediately afterwards you will be adjusting the new node. After clicking one more time, it sticks to that position and the process can be repeated.

Since both features are hard to illustrate in screenshots, I made a screencast which shows them best. You can find it here (it looks much better in full-screen mode).

Ok, so this is it. I hope you enjoyed it and that it made you curious enough to give it a try :).

Later Edit: The patch including these two features has not been merged to master yet (I only submitted a review request this morning) so if you are eager to test it already, you can find the patch here.

See you,
Călin Cruceru

Categories: FLOSS Project Planets

Frameworks 5.0 is out!

Tue, 2014-07-08 02:39

It took us a while but here it is. And I think we did quite a decent job communicating this to the outside world with articles like

For the final release (this one, I mean) we had far less time than I had hoped. I wrote most of the announcement last Saturday (though we started a few days earlier and had our already-prepared communication plan), and we didn't really rally as many of our 'friends' to help promote the release as I had planned initially. And yesterday my internet broke down, so in the end Mario and Jonathan R had to put most of it live, half a day behind plan. But despite these issues, it did turn out quite well, I think.

Now the communication is done and it is up to the code to prove itself in real life!

As I blogged before, I think this is a huge deal for Free Software on desktops AND mobile devices - it goes far beyond the KDE community. Qt is by far the largest Free Software ecosystem doing native (non-web, I mean) end-user software, but much of that is proprietary. Which makes sense - Digia and the other companies in and around Qt have to make money and don't have 'spreading Free Software' as their prime goal. Frameworks introduces a genuine FOSS touch to that, hopefully bringing many of these developers in touch with the KDE community and the Open Source development processes.

Oh, and there are just a few more days left to support Randa 2014, and it really needs support. Remember, this is where Frameworks started! Let's see what KDE comes up with this year at Randa ;-)
Categories: FLOSS Project Planets

Awesome people!

Tue, 2014-07-08 02:28
You might have seen that KDE has a new Konqi drawing. As with our previous mascot, you don't see Konqi very often. That is not because we don't love Konqi (at least, I do) but because we don't have that many pretty pictures of Konqi.

For articles I'm always in a pickle when it comes to adding some images. I've been a bit creative myself but it often leads to things like this:

It is probably creative (if you get it, that is) but it is not very good. For an article about bug hunting I cut the Konqi on the right out of the group pic. Yeah, that also kind of works, but barely.

Then it hit me: why not ask the artist who made these Konqis to... make some more! I emailed Tyson Tan and he simply replied asking what and when.

And now some awesome Konqis are coming to the Dot! Today we released the first in a hopefully long line of articles with awesome Konqis. Check out our Frameworks Konqi below:

As sebas said: I'm actually quite impressed how well it depicts something as deeply technical and abstract as Frameworks 5.
Indeed. This makes me happy!

Categories: FLOSS Project Planets

Every Day Carry 2014

Mon, 2014-07-07 19:44

A fun sub-community of reddit is the one that just posts pictures of what they carry every day. I took some time to unpack my bag the other day and posted one myself:

I'm usually on-call, which means I always have to have some sort of tech that'll get me online to address a faulty system. As such, I generally have a laptop or something with a hardware keyboard on it. Not optimal, but it's the life I've chosen.

Ring 0/1/2

Ring 1 is when I'm stepping out to run errands, Ring 2 is when I'm going out for the day but will be home in the evening if I actually want my laptop to work on side-projects. Keep it light and simple.

Common to both rings (Ring 0)
  • Bag is Swiss Gear. Claims to be SA2310, but that doesn't actually exist online anywhere. It's from Target. All of this fits comfortably in the bag with room to spare.
  • Kindle Paperwhite, for reading on transit
  • Moleskine hardback notebook w/ Pilot Precise pen
  • Generic 4,000 mAh battery pack from Target
  • Retractable USB cable
  • Keyring:
    • Yubikey 2FA token for getting onto my servers
    • Meenova microSD reader for Android
    • SanDisk USB 3 Cruzer with passwords and server authentication stuff
    • Gerber Shard
    • Keys
  • Binder clip with cards and cash
  • Nexus 5
  • Mints
  • Lip balm
  • Klipsch S3m
  • Lockpicks; sometimes a random lock to play with when bored
  • Pebble
  • Printouts of GPG fingerprints for meetups, etc.
Ring 1
  • Nokia n810
Ring 2
  • ASUS TF700 android laptop tablet thingy
Ring 3

I often stay with friends in the city, since it's basically impossible to get to where I live after midnight without the help of a taxi or Uber or so. Plus, sleepovers, man.

  • Case Logic laptop backpack
  • Laptop pocket: Lenovo X230
  • Main Pocket:
    • Ring 0
    • Two travel blankets for beach, or staying with friends
  • front pocket 1:
    • Transit time tables for when my phone inevitably dies
    • foodbars
    • flashlight
    • hand soap stuff
  • front pocket 2:
    • Tactical scotch
    • retractable ethernet
    • deodorant
    • ibuprofen and excedrin because reasons
Categories: FLOSS Project Planets

Frameworks 5 and Plasma 5 almost done!

Mon, 2014-07-07 09:42
KDE Project:

KDE Frameworks 5 is due out today, the most exciting clean-up of libraries KDE has seen in years. Use KDE classes without bringing in the rest of kdelibs. Packaging for Kubuntu is almost all green, and Rohan should be uploading it to Utopic this week.

Plasma 5 packages are being made now. We're always looking for people to help out with packaging; if you want to be part of making your distro, do join us in #kubuntu-devel.

Categories: FLOSS Project Planets

Porting KDE Games: Progress Report

Mon, 2014-07-07 02:00

Hello everyone,

As I mentioned in my last blog post, I'm working on porting KDE Games to KF5/Qt5. After porting the libkdegames project, I ported three games – KMines, KNavalBattle and KBounce – to test how these build against the new libs. Everything works as expected as of now. Here are the screenshots of the three games:

You can build the ported games and libraries from clones/{project name}/anujpahuja hosted on quickgit and give your reviews.

I’d also like to add something about how the KStandardDirs -> QStandardPaths port works. While porting, I found out that QStandardPaths::StandardLocation searches for files only in specific locations on Linux. For example, QStandardPaths::GenericDataLocation will only look for files/directories in these locations: “~/.local/share”, “/usr/local/share”, “/usr/share”. So be careful when installing files to a custom directory. I explicitly added a DATA_INSTALL_DIR variable in the build system to ensure files are installed in the locations specified by QStandardPaths::StandardLocation. Better fixes for this are most welcome.
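For example, such an install rule could look roughly like this (a hedged sketch; the variable mirrors the DATA_INSTALL_DIR mentioned above, and the file names and paths are illustrative):

```cmake
# Install game data under <prefix>/share so that, for a /usr prefix,
# QStandardPaths::GenericDataLocation ("/usr/share") can find it.
set(DATA_INSTALL_DIR "${CMAKE_INSTALL_PREFIX}/share")
install(FILES themes/default.desktop
        DESTINATION "${DATA_INSTALL_DIR}/kmines/themes")
```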

There’s still a couple of small things that need to be done to make the games fully ready. Will keep you updated.


Categories: FLOSS Project Planets

Libkeduvocdocument Qt5/KDE Frameworks port

Sun, 2014-07-06 20:49
Hello all. Yes, I'm still alive. Yes, I'm still doing KDE stuff as I find time or make time. I'll report in the next few posts about what is happening and where we are going.

One of the things that happened recently was the port of libkeduvocdocument to Qt5 and Frameworks. Vishesh started the effort, and I completed it, with some review by Aleix Pol. It was decided, as documented here, that since libkdeedu only contains libkeduvocdocument, it should be split up. Upon further investigation we realized that the other parts of libkdeedu are not used anymore. Besides the icons subfolder, which is still looking for a home, the rest of the git repo is only libkeduvocdocument related, so we decided to just rename the git repository for the Frameworks-based releases going forward. So the libkdeedu git repository holds the KDE SC 4 codebase, while the libkeduvocdocument git repo holds the Qt5- and Frameworks-based code. Both contain the history, so all history is preserved.
I'll write next time about the steps taken to port the library to Qt5 and KDE Frameworks.
Categories: FLOSS Project Planets

Plasma Next Accessibility

Sun, 2014-07-06 17:01


It’s been a looong time since I wrote in this blog; lately things usually end up in the Qt blog. I hope everyone is reading up on accessibility and other fun Qt things there. My contributions to KDE were code-wise mostly small things (such as helping a little with porting Kate’s accessibility implementation to Qt 5), and I’m happy that Parley found new maintainers (thanks Amarvir and Inge!).

I’ve been wondering for quite some time, though, how Plasma Next is doing when it comes to accessibility. In this case accessibility is mostly about how the applications and the desktop shell expose semantics to the accessibility framework via an API (on Linux the beast is called AT-SPI, a DBus API). The goal is that assistive technology such as a screen reader (Orca), the screen magnifier, or Simon can pick up what’s going on and assist the user. This allows, for example, blind people to use the software. The big thing here is that while Qt never had good support for QGraphicsView accessibility, we plowed away at making things work well with Qt Quick. This afternoon I finally got around to looking at the next iteration of the KDE desktop for real. In fact, I’m writing this in a running Plasma Next session on top of the Frameworks 5 libraries. It feels a bit like the porting from KDE 3 to 4, except that most things seem to just work so far.

I ended up running the neon5 packages on top of Kubuntu. I don’t manage to keep up with the speed of KF5 development, and whenever I want to run anything I’d be building all day; the day is usually over by the time I’ve figured out the dependencies and such. Instead I ignored all warnings (big red signs saying: “don’t do this at home, kids”) and installed the daily build packages (for some reason the weekly ones didn’t work at all on my laptop). After that I built and installed the things I was interested in over the system packages, to be able to debug them. This is a horrible way of messing up one’s installation, but since the neon packages install nicely into /opt, I decided to go for it. I ended up wanting to debug a few things, so now I have my own build of qtbase+declarative+plasma-frameworks+plasma-workspace+plasma-desktop.

The first issue I ran into is that Gnome changed their interpretation of a settings value which is reflected on DBus in the a11y service: the property IsScreenReaderEnabled will now return false, even when a screen reader is running. I’ll have to find a better way of starting Qt’s accessibility support on Linux, hopefully for Qt 5.4. There is no real standard for these things, since so far it was a Gnome-only thing.

For now the work-around is to simply run this one-liner before trying anything else:
gsettings set org.gnome.desktop.a11y.applications screen-reader-enabled true

With that in place I could restart plasmashell and see that … there was not much to see.
Before running a screen reader, it makes sense in this case to use one of the explore and debugging tools first to see what is exposed about the desktop.
I ran “Randamizer”, an example that comes with libkdeaccessibilityclient; the alternative is “Accerciser”. Both tools now showed me plasmashell (yay!) and I could navigate the hierarchy of accessible elements – which turned out to be completely empty (meh). I was not quite sure whether this was due to bugs or simply because no information was provided; luckily it turned out to be the latter (that’s easier to fix).

I started writing a few patches for the above-mentioned Plasma repositories, and now I have at least a few things showing up: I can see that the clock is there and even see the time displayed. I can also see the desktop element as such and some panels. The patches need cleaning up, but I’m happy that getting the basics to work will be relatively easy.
Now I don’t think I’ll manage to go through all of KDE and Plasma to fix simple issues, even though I’ll try to get the basics covered in the next few days. Consider this a call for help: please join KDE’s accessibility mailing list (IRC is also good) and help out.
The most basic work is to look at the Qt Quick items used in Plasma and add some simple information to them. For example, for Button.qml in plasma-framework the patch adds these lines:

Accessible.role: Accessible.Button
Accessible.name: text
Accessible.description: tooltip
Accessible.checkable: checkable
Accessible.checked: checked

They are just a few bindings, but with QML it is impossible to automatically detect the meaning of an item – is it just decorative or part of a control? Where is the actual control? Which Rectangle has which significance?
This is why it’s important for authors of Qt Quick UIs to use standard controls where possible. The Qt Quick Controls shipped with Qt are accessible, and the Plasma ones will soon be, I hope. When writing custom controls, the properties shown above need to be added.
For more information on what properties are available and should be set, check the documentation for the Accessible attached property.
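As a hedged sketch of that advice (hypothetical code, not an actual Plasma component), a hand-rolled control would carry the same kind of bindings:

```qml
import QtQuick 2.2

// Hypothetical custom toggle button: without the Accessible.* lines,
// AT-SPI clients would only see an anonymous rectangle here.
Rectangle {
    id: toggleButton
    property string label: "Mute"
    property bool checked: false

    width: 100; height: 32
    color: checked ? "darkgray" : "lightgray"

    Accessible.role: Accessible.Button
    Accessible.name: label
    Accessible.checkable: true
    Accessible.checked: checked

    Text { anchors.centerIn: parent; text: toggleButton.label }
    MouseArea { anchors.fill: parent; onClicked: toggleButton.checked = !toggleButton.checked }
}
```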

Categories: FLOSS Project Planets

Natural string sorting, an in depth analysis

Sun, 2014-07-06 15:18

Hi KDE folks,

Update: content updated! The blog post has been updated all over the place with proper timings.

Yes, a blog post about natural string sorting was bound to happen, given that I’ve been playing with it for quite a long time. This post is all about the performance of sorting strings in a way people consider natural, but machines don’t.

What does a natural order look like? Here is a small example:

1.txt
2.txt
9.txt
10.txt
11.txt

You get the point. That is a natural sort order for people. Computers, on the other hand, don’t think so. When they sort a string they look at each individual character. Looking at numbers in that way causes computers to sort the same example as:

1.txt
10.txt
11.txt
2.txt
9.txt

See the issue at “10.txt” and “11.txt”? Well, that is caused by looking at individual characters when sorting. How exactly that issue is solved on a technical level is beyond this blog post. If you want, you can investigate the KDE function KStringHandler::naturalCompare.

Currently in KDE SC 4.x, more specifically in Dolphin, natural sorting is done using this very nifty function. It works just fine and has been doing so for many years. But it has a downside: it’s not really fast. Usually you don’t bother too much about that, since you don’t sort often, right? Well, partly. In KDE’s Dolphin you get sorted results by default. This in turn means that you will notice any slowdown if the sorting takes more than 100 ms (0.1 second). That in turn also depends on the person. I had reported this issue to the Dolphin camp quite a while ago and they “solved” it by implementing a sorting algorithm that runs on multiple cores and thus usually finishes faster. The KStringHandler::naturalCompare function is still used, just on more cores :)

All this isn’t new. Dolphin developers have done really awesome work in speeding this up, and it has been in Dolphin for a couple of releases now (since KDE Applications 4.11?).

This still leaves us with a – in my opinion – slow KStringHandler::naturalCompare. Isn’t there anything we can do to speed it up? Up to Qt 5.1 there was nothing we could do about it. Starting with Qt 5.2 we have two shiny new classes:
- QCollator
- QCollatorSortKey

QCollator can be seen as a class that sets the rules for comparing strings. For instance, we want numbers to be sorted in the way we like them (10 after 9, not after 1), and the class allows us to set those rules. Then we can call QCollator::compare with the strings to compare, and they will be compared according to those rules.

QCollatorSortKey is a bit different. You can ask QCollator (by calling sortKey(yourString)) for a pre-computed key that obeys the rules set in QCollator. When you want to compare two strings, you then compare the QCollatorSortKey objects for those strings. It adds a bit of extra bookkeeping, so you need to wonder whether that extra bookkeeping is worth it. QCollator::compare is very easy, after all.

To know for sure I started benchmarking those three options:
- Sorting using KStringHandler::naturalCompare
- Sorting using QCollator::compare
- Sorting using QCollatorSortKey

This chart shows the benchmarking results I gathered with those three methods in a Qt build configured with -developer-build, aka debug plus some extras. Shorter is better.


The following chart was gathered with Qt built in release mode with debug symbols added. That’s about the mode you would have (minus the debug symbols) when you install Qt from your distribution.

A special note for QCollatorSortKey: it is benchmarked including the overhead of calling QCollator::sortKey and maintaining the additional bookkeeping data. In other words, the timings for QCollatorSortKey include all overhead! If I were to remove the overhead and just measure the actual sorting time, all its timings would be cut in half.

Before I started benchmarking I had some ideas about which one would be faster. I was expecting QCollatorSortKey to be the fastest, so no surprise for me there. It’s the other two that really surprised me.

I was expecting QCollator::compare to beat KStringHandler::naturalCompare since the former is going to replace the latter in a KDE Frameworks world. I wasn’t expecting QCollator::compare to be this much slower; rather, my expectation was for it to be somewhat faster than what we had. Apparently KStringHandler::naturalCompare still has a big reason to be used when looking at the performance numbers.

Another thing I wasn’t expecting was QCollatorSortKey to _always_ be faster than the other two alternatives. I was under the impression that for a few compares QCollator::compare was faster. The documentation even says so, but these benchmarking results clearly show that QCollatorSortKey is just faster (again, with all overhead included!).

Based on the charts we can draw some conclusions.
1. KStringHandler::naturalCompare is far superior to QCollator::compare when it comes to performance.
2. QCollatorSortKey beats everyone.
3. If you never have more than 100 items to sort then it really doesn’t matter which one you take; all three options complete the sorting very fast. I originally advised going for ease of coding with QCollator::compare in that case, but I completely revise that stance based on the new performance numbers. You should consider KStringHandler::naturalCompare when you need a simple natural string compare method. If you have more time, you should go for QCollatorSortKey. I cannot come up with any reason anymore to use QCollator::compare; it’s just not as performant as the alternatives.
4. Do you want your sorting to still be fast with insanely high numbers of items to sort? Go for QCollatorSortKey. For file browsers (hello Dolphin and Accretion) I would always go for QCollatorSortKey.

If you want to repeat these results, here is the code I used. It needs C++11 since I use some C++11 features.

#include <QCoreApplication>
#include <QDebug>
#include <QVector>
#include <vector>
#include <QCollator>
#include <QElapsedTimer>
#include <random>
#include <algorithm>

QVector<QString> massiveFolderRepresentation;

// Copied from kstringhandler.cpp in kdecore (part of kdelibs).
int naturalCompare(const QString &_a, const QString &_b, Qt::CaseSensitivity caseSensitivity)
{
    // This method chops the input a and b into pieces of
    // digits and non-digits (a1.05 becomes a | 1 | . | 05)
    // and compares these pieces of a and b to each other
    // (first with first, second with second, ...).
    //
    // This is based on the natural sort order code by Martin Pool
    //
    // Martin Pool agreed to license this under LGPL or GPL.

    // FIXME: Using toLower() to implement case insensitive comparison is
    // sub-optimal, but is needed because we compare strings with
    // localeAwareCompare(), which does not know about case sensitivity.
    // A task has been filled for this in Qt Task Tracker with ID 205990.
    QString a;
    QString b;
    if (caseSensitivity == Qt::CaseSensitive) {
        a = _a;
        b = _b;
    } else {
        a = _a.toLower();
        b = _b.toLower();
    }

    const QChar* currA = a.unicode(); // iterator over a
    const QChar* currB = b.unicode(); // iterator over b

    if (currA == currB) {
        return 0;
    }

    while (!currA->isNull() && !currB->isNull()) {
        const QChar* begSeqA = currA; // beginning of a new character sequence of a
        const QChar* begSeqB = currB;
        if (currA->unicode() == QChar::ObjectReplacementCharacter) {
            return 1;
        }
        if (currB->unicode() == QChar::ObjectReplacementCharacter) {
            return -1;
        }
        if (currA->unicode() == QChar::ReplacementCharacter) {
            return 1;
        }
        if (currB->unicode() == QChar::ReplacementCharacter) {
            return -1;
        }

        // find sequence of characters ending at the first non-character
        while (!currA->isNull() && !currA->isDigit() && !currA->isPunct() && !currA->isSpace()) {
            ++currA;
        }
        while (!currB->isNull() && !currB->isDigit() && !currB->isPunct() && !currB->isSpace()) {
            ++currB;
        }

        // compare these sequences
        const QStringRef& subA(a.midRef(begSeqA - a.unicode(), currA - begSeqA));
        const QStringRef& subB(b.midRef(begSeqB - b.unicode(), currB - begSeqB));
        const int cmp = QStringRef::localeAwareCompare(subA, subB);
        if (cmp != 0) {
            return cmp < 0 ? -1 : +1;
        }
        if (currA->isNull() || currB->isNull()) {
            break;
        }

        // find sequence of characters ending at the first non-character
        while ((currA->isPunct() || currA->isSpace()) && (currB->isPunct() || currB->isSpace())) {
            if (*currA != *currB) {
                return (*currA < *currB) ? -1 : +1;
            }
            ++currA;
            ++currB;
            if (currA->isNull() || currB->isNull()) {
                break;
            }
        }

        // now some digits follow...
        if ((*currA == QLatin1Char('0')) || (*currB == QLatin1Char('0'))) {
            // one digit-sequence starts with 0 -> assume we are in a fraction part
            // do left aligned comparison (numbers are considered left aligned)
            while (1) {
                if (!currA->isDigit() && !currB->isDigit()) {
                    break;
                } else if (!currA->isDigit()) {
                    return +1;
                } else if (!currB->isDigit()) {
                    return -1;
                } else if (*currA < *currB) {
                    return -1;
                } else if (*currA > *currB) {
                    return +1;
                }
                ++currA;
                ++currB;
            }
        } else {
            // No digit-sequence starts with 0 -> assume we are looking at some integer
            // do right aligned comparison.
            //
            // The longest run of digits wins. That aside, the greatest
            // value wins, but we can't know that it will until we've scanned
            // both numbers to know that they have the same magnitude.
            bool isFirstRun = true;
            int weight = 0;
            while (1) {
                if (!currA->isDigit() && !currB->isDigit()) {
                    if (weight != 0) {
                        return weight;
                    }
                    break;
                } else if (!currA->isDigit()) {
                    if (isFirstRun) {
                        return *currA < *currB ? -1 : +1;
                    } else {
                        return -1;
                    }
                } else if (!currB->isDigit()) {
                    if (isFirstRun) {
                        return *currA < *currB ? -1 : +1;
                    } else {
                        return +1;
                    }
                } else if ((*currA < *currB) && (weight == 0)) {
                    weight = -1;
                } else if ((*currA > *currB) && (weight == 0)) {
                    weight = +1;
                }
                ++currA;
                ++currB;
                isFirstRun = false;
            }
        }
    }

    if (currA->isNull() && currB->isNull()) {
        return 0;
    }

    return currA->isNull() ? -1 : +1;
}

void doSort(int num)
{
    qDebug() << "--- doSort called with num:" << num;
    QVector<QString> localBatch = massiveFolderRepresentation.mid(0, num);

    // Now that we're random.. Sorting time! For this we first setup the QCollator class.
    QCollator col;
    col.setNumericMode(true); // THIS is important. This makes sure that numbers are sorted in a human natural way!
    col.setCaseSensitivity(Qt::CaseInsensitive);

    // Create a timer object. We want to time this!
    QElapsedTimer t;

    // Now we can do actual sorting. This is the naive approach which is the easiest approach to use.
    QVector<QString> naiveCopy = localBatch;
    t.restart();
    std::sort(naiveCopy.begin(), naiveCopy.end(), [&](const QString& a, const QString& b) {
        return, b) < 0;
    });
    qDebug() << QString("Done sorting <naive version>. It took %1 ms").arg(QString::number(t.elapsed()));

    // Next is the QCollatorSortKey approach. This requires a bit more bookkeeping.
    t.restart();
    std::vector<QCollatorSortKey> sortKeys; // QVector doesn't work because of the private copy constructor.. It does work with std::vector which probably does move semantics
    QVector<int> keys;
    for (int i = 0; i < num; i++) {
        sortKeys.emplace_back(col.sortKey(localBatch[i]));
        keys.append(i);
    }
    std::sort(keys.begin(), keys.end(), [&](int a, int b) {
        return sortKeys[a] < sortKeys[b];
    });
    QVector<QString> colKeySorted;
    for (int i : keys) {
        colKeySorted.append(localBatch[i]);
    }
    qDebug() << QString("Done sorting <QCollatorSortKey version>. It took %1 ms").arg(QString::number(t.elapsed()));

    // Just for fun, the KDE 4.xx approach
    QVector<QString> kdeCopy = localBatch;
    t.restart();
    std::sort(kdeCopy.begin(), kdeCopy.end(), [&](const QString& a, const QString& b) {
        return naturalCompare(a, b, Qt::CaseInsensitive) < 0;
    });
    qDebug() << QString("Done sorting <KDE version>. It took %1 ms").arg(QString::number(t.elapsed()));
}

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    // Fill our vector with fake filenames.
    for (int i = 0; i < 500000; i++) {
        massiveFolderRepresentation.append(QString("%1.txt").arg(QString::number(i)));
    }

    // But now the vector is sorted. Shuffle it!
    std::random_device rd;
    std::mt19937 g(rd());
    std::shuffle(massiveFolderRepresentation.begin(), massiveFolderRepresentation.end(), g);

    doSort(10);
    doSort(100);
    doSort(500);
    doSort(1000);
    doSort(5000);
    doSort(10000);
    doSort(20000);
    doSort(30000);
    doSort(40000);
    doSort(50000);
    doSort(60000);
    doSort(70000);
    doSort(80000);
    doSort(90000);
    doSort(100000);
    doSort(200000);
    doSort(300000);
    doSort(400000);
    doSort(500000);

    return 0;
}

Next up is investigating whether this is the most performance we can drag out of QCollatorSortKey. More performance there would mean patches against Qt. So where is the time being spent when we sort with QCollatorSortKey? Surely in std::sort comparing the internal bytes, right? Aka, not much more we can optimize.

To profile this we add a header in the above source file:
#include <valgrind/callgrind.h>

Then we set the points between which we want profiling to happen. You can replace your doSort function with this one:

void doSort(int num)
{
    qDebug() << "--- doSort called with num:" << num;
    QVector<QString> localBatch = massiveFolderRepresentation.mid(0, num);

    // Now that we're random.. Sorting time! For this we first setup the QCollator class.
    QCollator col;
    col.setNumericMode(true); // THIS is important. This makes sure that numbers are sorted in a human natural way!
    col.setCaseSensitivity(Qt::CaseInsensitive);

    // Create a timer object. We want to time this!
    QElapsedTimer t;

    // Now we can do actual sorting. This is the naive approach which is the easiest approach to use.
    QVector<QString> naiveCopy = localBatch;
    t.restart();
    std::sort(naiveCopy.begin(), naiveCopy.end(), [&](const QString& a, const QString& b) {
        return, b) < 0;
    });
    qDebug() << QString("Done sorting <naive version>. It took %1 ms").arg(QString::number(t.elapsed()));

    // Next is the QCollatorSortKey approach. This requires a bit more bookkeeping.
    t.restart();
    CALLGRIND_START_INSTRUMENTATION;
    std::vector<QCollatorSortKey> sortKeys; // QVector doesn't work because of the private copy constructor.. It does work with std::vector which probably does move semantics
    QVector<int> keys;
    for (int i = 0; i < num; i++) {
        sortKeys.emplace_back(col.sortKey(localBatch[i]));
        keys.append(i);
    }
    std::sort(keys.begin(), keys.end(), [&](int a, int b) {
        return sortKeys[a] < sortKeys[b];
    });
    QVector<QString> colKeySorted;
    for (int i : keys) {
        colKeySorted.append(localBatch[i]);
    }
    CALLGRIND_STOP_INSTRUMENTATION;
    qDebug() << QString("Done sorting <QCollatorSortKey version>. It took %1 ms").arg(QString::number(t.elapsed()));

    // Just for fun, the KDE 4.xx approach
    QVector<QString> kdeCopy = localBatch;
    t.restart();
    std::sort(kdeCopy.begin(), kdeCopy.end(), [&](const QString& a, const QString& b) {
        return naturalCompare(a, b, Qt::CaseInsensitive) < 0;
    });
    qDebug() << QString("Done sorting <KDE version>. It took %1 ms").arg(QString::number(t.elapsed()));
}

Then compile it (obviously in release mode with debug symbols) and run the executable through valgrind with a command like this:
valgrind --tool=callgrind --instr-atstart=no <executable>

This gives you a file named callgrind.out.<somenumber>. Open it with KCachegrind to read it. I don’t know how others do this, but in this case I’m looking at the cost of the actual sort function and the sortKey function. For both, drill down to the deepest level until you find the function that Qt is calling that isn’t defined in Qt.
For the actual sorting that is the line:
__memcmp_sse4_1 (at a cost of 5,79% of the total execution time)

For the sortKey that is the function call:
ucol_getSortKey_53 (at a cost of 27,10% of the total execution time)

Plus 0,5% of the total running time to create our new vector with the sorted strings.

Note: when I say “total running time” I mean the time that is instrumented by valgrind. This is not the total execution time from when you start the app.

Now we immediately have a good idea of the performance we get and the actual costs.
Actual costs: 5,79% + 27,10% + 0,5% = 33,39%
Which means that 66,61% is spent in overhead, checks and whatever else is going on.

First, let’s look at the 5,79% of the memcmp.

Look at the image. Within qstrcmp we have four things using CPU:
20,5%, QByteArray::constData()
5,79%, __memcmp_sse4_1
3,49%, QByteArray::length()
2,84%, qMin

Remember, every single function that is called alongside __memcmp_sse4_1 is called O(n log n) times. That is a LOT more than O(n). If we just reduce the function calls there we can easily save 20% of the total running cost, that alone making our sorting 20% faster. The issue is that this function (qstrcmp, which calls __memcmp_sse4_1) isn’t wrong at all. It’s just fine for one QByteArray. However, when we have a big list of QByteArray objects it becomes faster to keep our own bookkeeping of the lengths of all items and the data pointers to them. That would completely eliminate the calls to constData and length at the cost of accessing an array where that same data is stored. It would be faster, but for Qt it’s not worth it; it’s too specific. For us it would be worth it.

Update: the above was done with a Qt build that was compiled with “-developer-build”. I was expecting that to be equal to “release with debug symbols”, but it’s equal to “debug with some extras”, thus giving me a wrong picture when benchmarking and profiling. I left the above in this post because it was the reason for me to try a different approach. However, in -release mode you don’t see any calls to ::constData anymore. It’s a free function call (zero cost), along with other calls that became free in release mode. That’s why in the charts above the timings for QCollatorSortKey dropped quite significantly between the debug and release builds. However, the above does show potential data-locality improvements that can be made. A general rule of thumb when optimising for high speed is to keep data as close together as possible.

An even more ideal approach would be for Qt to have a sortKey function that fills a buffer that we provide. We then maintain everything ourselves, which allows for far easier optimisations than hacking in the Qt code. Another big advantage of that approach is that we control the data locality.

As a proof of concept I implemented that ideal approach in Qt. I fear that it won’t be accepted upstream, but I can always try. The diff:

diff --git a/src/corelib/tools/qcollator.h b/src/corelib/tools/qcollator.h
index 781e95b..a9fcfc6 100644
--- a/src/corelib/tools/qcollator.h
+++ b/src/corelib/tools/qcollator.h
@@ -119,6 +119,7 @@ public:
     { return compare(s1, s2) < 0; }
 
     QCollatorSortKey sortKey(const QString &string) const;
+    void sortKey(const QString &string, char** buffer, int bufferLength) const;
 
 private:
     QCollatorPrivate *d;
diff --git a/src/corelib/tools/qcollator_icu.cpp b/src/corelib/tools/qcollator_icu.cpp
index 407a493..e0f91f2 100644
--- a/src/corelib/tools/qcollator_icu.cpp
+++ b/src/corelib/tools/qcollator_icu.cpp
@@ -151,6 +151,12 @@ QCollatorSortKey QCollator::sortKey(const QString &string) const
     return QCollatorSortKey(new QCollatorSortKeyPrivate(result));
 }
 
+void QCollator::sortKey(const QString &string, char** buffer, int bufferLength) const
+{
+    ucol_getSortKey(d->collator, (const UChar *)string.constData(),
+                    string.size(), (uint8_t *)*buffer, bufferLength);
+}
+
 int QCollatorSortKey::compare(const QCollatorSortKey &otherKey) const
 {
     return qstrcmp(d->m_key, otherKey.d->m_key);

I used the new function in my benchmark and the results are simply stunning, as the following graph shows (shorter bars are better). Taken with Qt built in release mode. (Plus a typo of “optimized” in the chart..)

As you can see in the chart, the optimised version is always faster, but you won’t notice it if you sort fewer than 5000 items. Interesting though is the increased performance improvement at the higher item counts. The optimised sorting is about 80% faster than the non-optimised version. I’m guessing this is due to data locality, which in turn makes for more efficient usage of the CPU cache. And let me stress it again: this is a timing including all bookkeeping overhead! If I were to just measure the sorting speed, then the 500.000-item sort takes just 180 milliseconds, aka 0,18 seconds. The vast majority of the time is now spent creating the sort keys, roughly 65%. I can probably find faster ways there as well, but I don’t really want to fiddle with ICU internals :)

There you have it, an in-depth analysis of how to do natural string sorting. You now know your options and which one is the fastest.

I hope you liked this insanely long blog post. It took me quite some hours to write it.


Categories: FLOSS Project Planets

Function call-tips and automatic declaration of object members

Sun, 2014-07-06 12:41

I usually write a blog post once every week, but this time I have two features that may already interest some people :-) . Now that the QML support in KDevelop is nearly complete and quite usable (several things are still missing though, for instance the support for directory imports), I wanted to work on Javascript-related things that I had put aside for the past month or so.

The first feature is function call-tips, that display the signature of the current function (or functions, if they are nested) as you type. This way, you can see the name and the type of the arguments, which can be very useful. This feature is available in Javascript and QML, and my example is, in fact, a QML code snippet.

The second feature concerns only Javascript and is the automatic declaration of object members to which a value is assigned. Objects (or arrays) can be declared by two means in Javascript: using object literals (var o = {key: value, key2: value2};) and using an empty object literal, just to say that the variable is an object, and then each key is assigned its value separately:

var o = {};
o.key = value;
o.key2 = value2;

The two types of object declarations are widely used, the second one maybe even more than the first. For instance, Javascript developers are encouraged to declare their functions in “namespaces”, which are simply objects in the global scope:

var APP = {};
APP.say_hello = function() { alert("Hello"); };
APP.say_world = function() { alert("World !"); };

The QML/JS plugin supports this by allowing the declaration of object members after the object itself has been created. This is fairly simple: when object.member is encountered in the source code, the plugin checks whether member already exists in object. If it is not the case, the member is declared on the fly and added to the object.

Here are some technical details. The fact that the declaration of the member happens after the object has been completely created obliged me to cheat: KDevelop doesn’t allow declarations to live outside their context (so I cannot declare the “say_hello” member of my example in the context that exists between the brackets of APP), but a context can import another one, even if the other one lives in a different file or anywhere else (this is what is used to implement import statements or C/C++ includes). The solution is therefore to declare say_hello in a new context that spans the “say_hello” identifier, and to import this context in APP. This way, “say_hello” becomes visible to APP, and code-completion works as expected:

I’m now working on function prototypes and Javascript object-oriented programming. This is a bit more complicated than I thought, but I think I have found a possible solution. I will implement it tomorrow and I hope that it will make things like jQuery a bit more usable in KDevelop.

Categories: FLOSS Project Planets

opportunities presented by multi-process architectures

Sun, 2014-07-06 10:50

So after stating that I was thinking about what comes "after convergence" I rambled on a bit about social networks and the cloud before veering wildly towards applications written as sets of processes, or multi-process architectures (MPA). I wrote about the common benefits of MPA and then the common downsides of them. Before trying to pull this full-circle back to the original starting point all those blog entries ago, I need to say a few things about opportunities presented by MPA systems that are rarely taken advantage of and why that is.

I don't want to give the impression that I think I've come up with the ideas in this blog entry. Most already exist in production somewhere in the world out there. There are certainly great academic papers to be found on each of the topics below. It just seems to be a set of topics that is not commonly known, thought about or discussed, particularly outside the server room. So nothing here is new, though I suspect at least some of the ideas might be new to many of the readers.

Before getting into the topics at hand, I'd also like to give a nod to the Erlang community. When I started toying about with the questions that inspired this line of thought, I went looking for solutions that already existed. There almost always is someone, somewhere working on a particular way to approach a given problem. This is the beauty of a global Internet with so many people trained to work with computers. While looking around at various things, Erlang caught my eye as it is designed for radical MPA applications. While messing around with Erlang I began to understand just how desperately little most applications were getting out of the MPA pattern, and it helped illuminate new possible paths of exploration to tough challenges we face, such as actually getting the most out of "convergence" when the reality is people have multiple devices (if only because each person has their own device). It's pretty amazing what many of the people in that community are up to .. for instance:

Hot Upgrades

May as well start with this one since I posted that video above; that way I can pretend it was a clever segue instead of just nerd porn. ;) Typically when we think of upgrading an application, we think of upgrading all of it. We also generally expect to have to restart the application to get the benefits of those upgrades. Our package managers on Linux often tell us that explicitly, in fact.
There are exceptions to this. The most obvious example is plugins: you can change a plugin on disk and, at least on next start, you get new functionality without touching the rest of the installed application components. When the plugins are scripted, it becomes possible to do this at runtime. 
In fact, sometime between Plasma version 4.2 and 4.4 I worked up a patch that reloaded widgets written in Javascript when they changed on disk. With that patch, the desktop could just continue on and pieces could be upgraded here or there. I never merged it into the mainline code as I wasn't convinced it would be a commonly useful thing and there was some overhead to it all.
Generally, however, applications are upgraded monolithically. This is in large part because componentization, MPA or not, is not as common as it ought to be. It's also in part because components rely on interfaces and too many developers lack the discipline to keep those interfaces consistent over time. (The web is the worst for this, but every time I see Firefox checking plugins after a minor version upgrade I shake my head.) It also doesn't fit very nicely into the software delivery systems we typically use, all of which tend to be granular at the application level.
Those are all solvable issues. What is less obvious is how to handle code reloading at runtime when the system is in active use. When upgrading a single component while the application is running, how does one separate it safely from the rest of the running code to perform an in-place update of the code to be executed?
With an MPA approach the answer seems fairly straight-forward: bring up a new process, switch to using the new process, kill the old one. Since the processes are already separated from each other, upgrading one component on the fly ought to be possible without fiddling with the rest of the application. It's less straight-forward in practice, however, as one needs to handle any tasks the old process is still processing, any pending messages sitting in the old process' queue (or whatever is being used for message passing) and handing over all messaging connections to the new process. It is possible though, as the video above demonstrates, though even in Erlang where this was designed into the system one needs to approach it with a measure of thoughtfulness.
This is an uncommon feature because it is not easy to get right and pretty well all the existing systems out there lack any support for it whatsoever. However, if we want applications that run "forever", then this needs to become part of our toolbox. Given that the Linux kernel people have been working on addressing this issue (granted, for them restarting the "application" means a device restart, which is even more drastic than restarting a single application), our applications ought to catch up.

Process hibernation

Most MPA applications spin up processes and let them sit there until the process is finished being useful. For things like web and file servers using this design, the definition of "useful" is pretty obvious: when the client connection drops and/or the client request is done being handled, tear down the process.
Even then there is a fly in the ointment: spinning up a process can take more time than one wants, and so perhaps it makes sense to just have a bunch of processes sitting there lying in wait to process incoming requests one after the other. Having just started apache2 on my laptop to check that it does what I remember it doing, indeed I have a half-dozen httpd-prefork processes sitting around doing nothing but wasting resources. The documentation says: "Apache always tries to maintain several spare or idle server processes, which stand ready to serve incoming requests. In this way, clients do not need to wait for a new child processes to be forked before their requests can be served."
Keeping processes around can also make it a lot easier when creating an MPA structure: spin up everything you need and let it sit around. This is a great way to avoid complex "partial ready" states in the application.
With multiple processes, one would hope it would be possible to simply hibernate a specific process on demand. That way it exists, for all intents and purposes, but isn't using resources. This keeps the application state simple and can avoid at least some of the process spin-up costs. Unfortunately, this is really, really hard to do with 100% fidelity. People have tried to add this kind of feature to the Linux kernel, but it has never made it in. One of the biggest challenges is handling files and sockets sensibly during hibernation.
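As an aside, the closest cheap approximation available on POSIX systems today is job control: SIGSTOP takes a process off the scheduler entirely and SIGCONT puts it back. The memory stays resident and sockets keep their state, so this is pausing rather than true hibernation, but a stopped process consumes no CPU at all. A minimal sketch:

```cpp
#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Fork a child, freeze it with SIGSTOP, verify it is stopped, thaw it with
// SIGCONT, verify it runs again, then clean it up. Returns true if both the
// stop and the continue were observed by the parent.
bool stopAndResume()
{
    pid_t pid = fork();
    if (pid == 0) {            // child: stands in for an idle resource process
        for (;;)
            pause();           // sleep until a signal arrives
    }
    if (pid < 0)
        return false;

    int status = 0;

    kill(pid, SIGSTOP);                  // "hibernate": off the scheduler
    waitpid(pid, &status, WUNTRACED);    // block until it is really stopped
    bool stopped = WIFSTOPPED(status);

    kill(pid, SIGCONT);                  // thaw: back on the scheduler
    waitpid(pid, &status, WCONTINUED);
    bool continued = WIFCONTINUED(status);

    kill(pid, SIGKILL);                  // clean up the child
    waitpid(pid, &status, 0);
    return stopped && continued;
}
```

Real hibernation would additionally need to serialize the process image and its file and socket state to disk, which is exactly the hard part described above.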
This article on deploying node.js applications with systemd is interesting in its own right, but also shows that people really want this kind of functionality. Of course, the article is just showing a way to stop and start a process based on request activity, which isn't quite the same thing at all.
Surprise: Erlang actually has this covered. It isn't a magical silver bullet and you don't want to use it with processes that are under constant usage (it has overhead; this kind of feature necessarily always will), but it works reliably and indeed can help with resource usage.
Every time I look at all those processes in Akonadi just sitting there when they know they won't be doing a thing until the next sync or mail check I despair just a little. Every time I notice that Chrome has a process running for those tabs showing static content, chewing up 15-30 MB of memory each, I wish process hibernation was common.

Radical MPA

If you look through the code of Akonadi resources, such as the imap resource in kdepim-runtime/resources/imap, one will quickly notice that they are really rather big monolithic applications. The imap resource spins up numerous finite state machines (though they often aren't implemented as "true" FSMs, but as procedural code that is setting and checking state-value-bearing variables everywhere) and handles a large number of asynchronous jobs within its monolith. As a result it is just over ten thousand lines of code, and that isn't even counting the imap or the Akonadi/KDE generic resources libraries it uses. That's a large chunk of code doing a lot of things.
That's kind of odd. Akonadi is an MPA application, but its components are traditional single-process job multiplexers. This is through no fault or shortcoming of Akonadi or its developers. In fact, I feel kind of bad for referencing Akonadi so often in these entries because it is actually an extremely well done piece of software that has matured nicely by this point. It's just one of the few MPA applications written for the desktop in wide usage, so its warts can't be blamed on anything other than the underlying language and frameworks ... it's because Akonadi is good that I keep bringing it up as an example, as it highlights the limits of what is possible with the way we do things right now. Ok, enough "mea culpa" to the Akonadi team ... ;)
It would be very cool if the complex imap resource was itself an MPA system. State handling would dramatically simplify and due to this I am pretty sure a significant percentage of those 10,000+ lines of code would just vanish. (I took another drift through the code base of the imap resource this morning while the little one had a nap to check on these things. :)
Additionally, if it were "radically" MPA then a lot of the defensive programming could simply be dropped. Defensive programming is what most of us are quite used to: call a function, check for all the error states we can think of and respond accordingly, and only then handle the "good" cases. This is problematic as humans are pretty bad at thinking of all possible error states when systems move beyond "trivial" in complexity. With radically MPA systems, however, each job can be handled by a separate process and should something other than success happen it can simply halt. No state needs to be preserved or reset; at most the other processes that are waiting for news on how the job went may want to be informed that something went sideways so they can also either halt or continue on. (Yes, logging, too.) This not only makes the code much easier for humans to write, as one only needs to write for what is expected rather than try to enumerate everything that is unexpected, but also makes the code base radically smaller, as most of the "if (failureCondition)" branches that pepper our code today simply melt away.
This doesn't happen because processes are expensive, frameworks that can handle such radical MPA systems are hard to write, and few pre-made ones exist.

Supervision

With any MPA system, but particularly radically MPA applications, the opportunity for process supervision arises. Supervision is when one process watches other processes and decides their fate: when to start them, when to stop them, when (and how!) to restart them on failure.
I first became interested in the possibilities for system robustness through supervision when systems such as systemd started coming together. systemd, however, falls wildly short of the possibilities. For what is perhaps the definitive example of supervision, one ought to look to Erlang.
In Erlang applications, which are encouraged to be MPA, one defines a supervision tree. Not only can individual processes be supervised for failure, but you can have nested trees of supervisors, each with a policy to follow when processes are created and when they fail. You can, for instance, tell a supervisor that a particular group of processes all rely on each other, so that should one fail they are all stopped and restarted together. You can define timeouts for responsiveness, how many times to restart processes, and similar things.
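A toy sketch of that idea (in Python, with made-up names -- real Erlang/OTP supervisors are far richer and operate on actual processes) might implement the "restart the whole group together" policy, known in Erlang as one_for_all, with a restart limit:

```python
class Supervisor:
    """Toy one_for_all supervisor: if any worker fails, rebuild the group."""

    def __init__(self, worker_factories, max_restarts=3):
        self.factories = list(worker_factories)   # callables that build workers
        self.max_restarts = max_restarts
        self.restarts = 0
        self.workers = [f() for f in self.factories]

    def run_once(self):
        try:
            return [w() for w in self.workers]    # run every worker in the group
        except Exception:
            if self.restarts >= self.max_restarts:
                raise                             # give up and escalate upward
            self.restarts += 1
            # one_for_all: rebuild the whole group, not just the failed worker
            self.workers = [f() for f in self.factories]
            return None

# A worker that fails on its first run, then succeeds, plus a steady one.
attempts = {"count": 0}

def flaky_factory():
    def worker():
        attempts["count"] += 1
        if attempts["count"] == 1:
            raise RuntimeError("first run fails")
        return "ok"
    return worker

sup = Supervisor([flaky_factory, lambda: (lambda: "steady")])
first = sup.run_once()    # flaky worker fails -> whole group restarted
second = sup.run_once()   # both workers now succeed
```

The point is that restart policy lives in one place, declared once, instead of being re-invented in every caller.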
This allows one to define the state of the full set of processes in a robust manner, from process execution through to finality. This is a key to robust, long-lived applications.

Processing non-locality

With the MPA model it is trivial to spread a task out across multiple machines. This is extremely common in the server room when it comes to large applications and as such is quite well understood. There are large numbers of libraries and frameworks that make this easier, from message queueing systems to service discovery mechanisms. The desirability of this is quite obvious: finishing large tasks quickly is often beyond the reach of individual machines. So put a bunch of them together and figure out how to make them work together on problems. Voila, problem solved. (Writing blogs is fun: you can make the amazingly difficult and complex appear in a puff of magic smoke with a simple "voila" .. ;)
Outside the server room this is hardly ever used, however. MPA systems aimed at non-server workloads tend to assume they all run on the same system. Well, we live in a different world than we did twenty years ago. 
People have multiple rather powerful computers with them. Right now I have my laptop, a tablet, two ARM devices and a smartphone. The printer sitting next to me isn't very powerful, but it runs a full modern OS as well.
We also routinely use services that exist on other machines, but instead of letting processes coordinate as we would if they were local we tend to blast data around in bulk or create complex protocols that allow the local machine to dictate to the remote machine what we'd like it to do for us. The protocols tend to result in multiple layers of indirection on the remote side: JSON over HTTP hits a web server which forwards the request to a bit of scripted code that interacts with the server to construct some sort of response which it then encodes into JSON and sends back over HTTP to be reconstituted on the local side ... and should the client ever want something ever so slightly different, too bad.
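The layering is easy to see even in a toy sketch (Python; the "contacts" operation is invented for illustration): an operation that would be one plain local call instead goes through encode/decode hoops on both sides, and anything the remote handler didn't anticipate is simply unsupported.

```python
import json

def get_contacts():
    # What the client actually wants: the result of one local call.
    return [{"name": "Alice"}]

def handle_request(raw):
    # Remote side: decode the request, dispatch, re-encode the response.
    req = json.loads(raw)
    if req.get("op") == "contacts":
        return json.dumps(get_contacts())
    # Client wants something ever so slightly different? Too bad.
    return json.dumps({"error": "unsupported"})

# Local side: marshal the request, "send" it, unmarshal the reply.
reply = json.loads(handle_request(json.dumps({"op": "contacts"})))
```

Every new operation means extending this dispatch layer on the remote side and the marshalling on the local side -- exactly the upgrade-both-ends problem described above.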
This makes using two end-user devices together really rather difficult. The byzantine client-server architecture ensures that it is non-trivial to do simple things and that significant pieces of software must be run on any devices one wishes to coordinate. People are aware of the annoyances: Apple has Handoff, KDE has KDE Connect ... but what happens if I'm running Akonadi (that guy again!) on my Plasma Desktop system and I'd like it to be able to find results from data on my Plasma Active tablet? Yeah, not simple to solve ... unless Akonadi could run agents remotely as transparently as it can locally. That would mean more complexity in the Akonadi code base, and in every other MPA app that might benefit from this. As it is additional functionality and probably nobody has yet hit this use case, Akonadi is not capable of it, and the use case is likely to never be fulfilled once people do run into it.
It is the combination of thinking with blinders on ("JSON over HTTP .. merp merp!") and the difficulty of making "remote processes are the same as local processes" a reality that prevents this from happening in places where we could benefit from it.

Process migration

This goes hand-in-hand with the remote process possibility, but takes it up a notch. In addition to having remote processes, how amazing would it be to be able to migrate processes between machines? This also has been done; it's a pretty common pattern in big multi-agent systems, from what I understand. It carries a lot of the same requirements and challenges as process hibernation, and would probably only be possible with specially designated processes. Security is another concern, but not an insurmountable obstacle.
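One common way such systems make a process migratable (a sketch in Python with invented names, not a description of any particular framework) is to have the job checkpoint its progress as plain serializable data, which can be shipped to another machine and resumed there:

```python
import json

def make_job(items):
    # All job state lives in plain data, so it can travel.
    return {"todo": list(items), "done": []}

def step(job):
    # Do one unit of work; here, "work" is just upper-casing an item.
    if job["todo"]:
        job["done"].append(job["todo"].pop(0).upper())
    return job

def checkpoint(job):
    return json.dumps(job)     # this blob is what would cross the wire

def resume(blob):
    return json.loads(blob)    # reconstructed on the destination machine

job = make_job(["a", "b"])
step(job)                      # some work happens on the first machine
migrated = resume(checkpoint(job))   # "moved" to another machine
step(migrated)                 # ...and picks up exactly where it left off
```

The hard part in real systems is everything this sketch leaves out: open file descriptors, sockets, and in-flight operations that cannot be expressed as plain data.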
Along with the lack of remote processes, this is probably the main reason it is so hard to transfer a picture from your phone to your laptop: byzantine structures must exist on all systems to do the simplest of things, and every time we wish to do a new simple thing that byzantine structure needs an upgrade, typically on both sides. How awful. This is probably also my cue to moan about Bluetooth, but this blog entry is already long enough.

Security

Many MPA applications already get a security leg up by having their processes separated by the operating system. They can't poke around at each other's memory locations, a crash doesn't take down the whole system, etc. This is really just a few tips of what is a monumental iceberg, however.
It ought to be possible to put each process in its own container with its own runtime restrictions. The Linux middleware people working on things like kdbus and systemd along with the plethora of containerization projects out there are racing to bring this to individual applications. MPA applications ought to take advantage of these things to keep attack vectors down while allowing each process in the swarm access to all the functionality it needs. 
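As a sketch of what "its own runtime restrictions" can already mean, a systemd service unit can confine a single worker process quite tightly. The directive names below are real systemd options; the service binary and user are invented for illustration:

```ini
# Hypothetical unit for a parser process that handles untrusted input.
[Service]
ExecStart=/usr/lib/myapp/feed-parser
User=myapp-parser
NoNewPrivileges=yes
PrivateTmp=yes
ProtectSystem=full
ProtectHome=yes
CapabilityBoundingSet=
```

Should this process be compromised, it runs as an unprivileged user with no capabilities, a private /tmp, a read-only view of the system, and no access to home directories.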
Combined with radical MPA, this could be a significant boost: imagine all external input (e.g. from the network) being processed into safe, internal data structures in a process that runs in its own security context. Should something nasty happen, like a crash, it has access to nothing useful.
OpenSSH already went in this direction many years ago with privilege separation, but with modern containerization we could take this to a whole new level that would be of interest to many more applications.
Again, it means having the right frameworks available to make this easy to do.

... in conclusion

So we've now seen the common benefits of MPA, the common downsides of MPA and finally some more exotic possibilities which are rarely taken advantage of but would be very beneficial as common patterns.
Next we will look at how to bring this all together and attempt to describe a more perfect system which avoids the negatives and offers as many of the positives as possible.
The ultimate goal is completely robust applications with improved security and flexibility that can work more naturally (from a human's perspective) in multi-device environments and, perhaps, allow us to start tackling the more thorny issues of privacy and social computing.

Categories: FLOSS Project Planets

Final updates to section implementation

Sun, 2014-07-06 07:34

As I have mentioned earlier, the sections API is an essential part of the outliner's implementation, and I am now finishing its basic functionality. Section indication for multi-page documents is fixed, and a new KoSectionManager has been introduced to handle all sections in the document and keep information about them up to date. Here are the user-visible changes.

I have picked better icons for the "New section" button and the newly added "Configure sections" command:

The rightmost two buttons are "New section" and "Configure sections".

This is how the "Configure sections" dialog looks: on the left you can see the sections tree, with section parameters on the right part of the dialog. Only the basic "Section name" parameter is present for now. I hope future development will bring new features, and this dialog will be populated with new items.
Section changes are now fully integrated with the undo menu. You can undo/redo every insertion and renaming of sections:
Now I can start working on the outliner. All these changes will be merged to master after unit tests are added.

Categories: FLOSS Project Planets