The search pattern was analyzed, discussed, and finally added to the KDE HIG. We outline the concept using a modified KMail as an example.
Keep on reading: KDE HIG: Search refined
Just a quick announcement: KDevelop 4.7.0 Beta 1 has been released! Head over to the announcement on the KDevelop website to read more:
Cheers, see you soon with a KDevelop 4.7.0 stable release :)
As I started to port KDEPIM* to KF5, I began creating Perl scripts to help us port to KF5.
These scripts are in the kde-dev-scripts git repository ("git clone git://anongit.kde.org/kde-dev-scripts"), in the kf5 directory.
I created these new scripts:
- convert-ksavefile.pl converts KSaveFile to QSaveFile
- convert-ksharedptr.pl converts KSharedPtr to QExplicitlySharedDataPointer
- convert-ksplashscreen.pl converts KSplashScreen to QSplashScreen
- convert-kimageio.pl converts to QImageIO
- convert-kglobal-removeAcceleratorMarker.pl helps to convert KGlobal::removeAcceleratorMarker(…)
- port-kauthactions.pl tries to port to the new KAuth API
- adapt_knewstuff3_includes.pl fixes the new knewstuff3 include paths
So now we have 39 scripts to help us to port to KF5.
Be careful: compile each time after applying a script, because the scripts are not perfect.
Please report bugs about them.
PS: When you port a class whose conversion can be automated, please create a script or send me the information needed to create one. Thanks a lot; it will help all KDE developers.
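In spirit, each of these conversion scripts is a mechanical source-to-source rewrite. Here is a hypothetical Python sketch of what a script like convert-ksavefile.pl might do (the real scripts are written in Perl and handle many more cases, which is also why compiling after each run matters):

```python
import re

# Ordered rewrite rules: fix the include first, then rename all
# remaining uses of the class. A purely textual transformation,
# so the result must still be compile-tested by hand.
RULES = [
    (re.compile(r'#include\s*<KSaveFile>'), '#include <QSaveFile>'),
    (re.compile(r'\bKSaveFile\b'), 'QSaveFile'),
]

def convert(source: str) -> str:
    """Apply every rewrite rule to the given source text."""
    for pattern, replacement in RULES:
        source = pattern.sub(replacement, source)
    return source

before = "#include <KSaveFile>\nKSaveFile file(path);\n"
print(convert(before))  # prints the QSaveFile version
```

The real scripts additionally adjust CMake link targets and method-call differences, which is exactly why they cannot be perfect.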
Finally, hours of hard work have paid off. Kanagram now has a brand-new interface that uses the QtQuick framework instead of the previous SVG/QPainter-based code. The entire interface has been written in QML. The whole process saw a lot of new development. Previously, Kanagram had multiple separate interfaces for desktop and Harmattan devices. There was also a Plasma Active interface which had a few issues but helped me a lot as a reference; thanks to Laszlo Papp for that. The initial stages included a thorough clean-up, in which all the previous interfaces were replaced by the new QML one. Currently, only a single interface is maintained, with the background and other images kept isolated from the code so that versatile themes can be implemented for the application in the future.
Though there's a lot of room for improvement in the user interface, the backend functionality works fine. We were successful in pushing these changes to master before the KDE SC 4.14 freeze. A lot of credit also goes to Jeremy Whiting, who is now working on porting the application to Qt5. Next I will be working on a wiki link feature, about which I'll write in my next blog post.
In 4.13, we moved away from a monolithic Nepomuk based system to a far more decentralized approach. Some parts of this are called Baloo, but to be honest, Baloo is not really responsible for managing your tags.

The Nepomuk Days
Back in the era of Nepomuk, every time you tagged a file, two things would be done:
- A tag would be created / fetched from the central Database
- The tag would be linked with the file in the same database.
While this approach worked, a massive problem with it was that all your precious tags were always stored in a central database. Modifying these tags required this database (Virtuoso) to be running, and more importantly, if anything were to happen to this database, all your tags would be lost.
This was one of the reasons why we developed a backup and restore solution for Nepomuk. It existed not to preserve your file index, but rather these tags (and also the ratings and comments).
With Baloo, we are no longer responsible for storing the tags.

So then where are the tags stored?
Internally, each file in the file system is typically identified by a unique number. This number then has many attributes associated with it. Some of the standard ones are file name, permissions, user, group, etc.
Modern file systems also allow applications to store their own custom attributes, called extended attributes or xattrs. This is fairly common: popular applications such as Chromium, curl and wget use them. Run the following command on a file downloaded through Chromium:

$ getfattr -d some-file-url
# file: some-file-url
user.xdg.origin.url="http://bugsfiles.kde.org/attachment.cgi?id=86198"
In the KDE Platform, tags are similarly stored in these extended attributes. If you have tagged a file in a KDE application, you can run the same command to try it out:

$ getfattr -d some-file
# file: some-file
user.xdg.tags="TagA,TagB"

Why is this so great?
- Your tags are never lost. No matter what you do to the file on any environment.
- Baloo or any other service plays no part in tagging. You're simply talking to the file system.
- We’re using a freedesktop standard for comments. This same standard can be expanded for tags. So hopefully we will be able to share tags among different desktop environments.
While this is awesome, it does come at a small price. We have no way of knowing which files have which tags unless we keep some kind of index relating the two. This is where Baloo comes in: Baloo is just the search index. If you disable Baloo, you can still safely tag files; you just cannot find out which files have been tagged with tag x.
Another small caveat is that the FAT file system does NOT support extended attributes. On those file systems we currently store the tags in a very simple SQLite database. However, with that we lose all the advantages listed above. It also means that the code to read and write tags becomes more complex.
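A rough Python sketch of this dual path illustrates the extra complexity. The xattr name matches the one shown above; the SQLite schema and database location here are my own invention for illustration, not what Baloo actually uses:

```python
import os
import sqlite3

TAG_ATTR = "user.xdg.tags"           # attribute name shown above
DB_PATH = "/tmp/tags-fallback.db"    # hypothetical fallback database

def _db():
    """Open the fallback database, creating the table if needed."""
    con = sqlite3.connect(DB_PATH)
    con.execute("CREATE TABLE IF NOT EXISTS tags"
                " (path TEXT PRIMARY KEY, tags TEXT)")
    return con

def write_tags(path, tags):
    """Store tags in the file's xattrs; fall back to SQLite on file
    systems (e.g. FAT) that do not support extended attributes."""
    try:
        os.setxattr(path, TAG_ATTR, ",".join(tags).encode())
    except OSError:
        with _db() as con:
            con.execute("INSERT OR REPLACE INTO tags VALUES (?, ?)",
                        (path, ",".join(tags)))

def read_tags(path):
    """Read tags back, trying xattrs first, then the fallback db."""
    try:
        return os.getxattr(path, TAG_ATTR).decode().split(",")
    except OSError:
        with _db() as con:
            row = con.execute("SELECT tags FROM tags WHERE path = ?",
                              (path,)).fetchone()
            return row[0].split(",") if row else []
```

Note how the fallback path gives up everything the xattr path provides: the tags no longer travel with the file, and a second storage location must now be kept consistent.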
We have been considering dropping tagging support completely on unsupported file systems, but that might just be my pipe dream.
“Without a root icon” -> 35 votes
“With a root icon” -> 15 votes
Here is the final shape of the tree-view after editing the rating icons, and adding the colors into framed squares.
Quite often, when people think about implementing an application as a set of processes, they think only of the benefits described in my last blog entry and tend to look past the costs and the technical blockers that exist. This is understandable, as the benefits are obvious while the problems are less so.
Context Switching

The kernel schedules processes, giving each active process a slice of time on the CPU. Every time a process gets scheduled, the data structures representing that process's state have to be readied in memory and on the CPU. This is known as context switching.
In a multi-process architecture ("MPA") the processes have to coordinate with each other, usually through message passing of some form, and each time they do so one or more context switches are incurred. In the worst case the processes will acknowledge reception of messages (aka "roundtrips") doubling those context switches.
Even without message passing, each process will get a small time slice of the CPU and then the next process will get time. If the process is performing a task that takes longer than that timeslice (which is measured in milliseconds) it will incur context switches over the course of that task being done. This is true for both I/O and CPU bound tasks.
So, how much does this cost? More than one might expect. Here is an ACM article from 2007 titled "Quantifying The Cost of Context Switch" which provides some decent measurements. The easiest take-away is that the cost of a switch is measured in microseconds, and can range from a couple of microseconds up to over a thousand, putting it in the millisecond range! This variance is mostly down to the CPU cache and TLB misses that context switches trigger. This is the true "hidden" cost of context switching.
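You can get a feel for this yourself with a classic ping-pong micro-benchmark: two processes bounce a byte back and forth over a pair of pipes, so every round trip forces at least two context switches. A quick sketch (the numbers will vary wildly by machine and scheduler, and this measures the direct cost only, not the cache pollution):

```python
import os
import time

def measure_roundtrips(n=1000):
    """Average time per pipe round trip between two processes.
    Each round trip costs at least two context switches."""
    p2c_r, p2c_w = os.pipe()  # parent -> child
    c2p_r, c2p_w = os.pipe()  # child -> parent
    pid = os.fork()
    if pid == 0:
        # Child: echo every byte straight back.
        os.close(p2c_w); os.close(c2p_r)
        for _ in range(n):
            os.read(p2c_r, 1)
            os.write(c2p_w, b"x")
        os._exit(0)
    os.close(p2c_r); os.close(c2p_w)
    start = time.perf_counter()
    for _ in range(n):
        os.write(p2c_w, b"x")
        os.read(c2p_r, 1)
    elapsed = time.perf_counter() - start
    os.waitpid(pid, 0)
    return elapsed / n

if __name__ == "__main__":
    print(f"{measure_roundtrips() * 1e6:.1f} microseconds per round trip")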
I recently read one optimization story (which I can't find in my browser history *grrr*) where simply focusing on cache misses incurred by instructions that accounted for only ~20% of the CPU time resulted in an overall speed-up of over 300% for the application. Crazy. Another example can be found in this blog comment, where they measured that MySQL was spending nearly 18.75% of CPU time on context switches alone on a busy server; I'd wager most of that is cache related.
MPA applications tend to do a lot of context switching, particularly ones with centralized communication managers such as Akonadi. This turns what would be trivial amounts of CPU time in a single-process application into significant time spent not doing anything useful at all.
Message Passing

One of the more useful things MPAs spend their time doing is passing messages around. How expensive can that be, right? Typically: a lot more than the context switches.
Passing messages around consists of creating and reading the messages, and the time spent marshalling and demarshalling data typically means not just processing time but also copying memory around. This is usually more than just the size of the message payload itself: not only is the message held in memory, but the data structures that are the source of the message are in memory as well.
Some recent work was done in KIO to minimize this effect by skipping interim objects and writing straight to the message buffer. This results in slightly less "pretty" code, but the performance difference was truly significant; orders-of-magnitude significant for listing large local folders. On the receiving side, those data structures are still demarshalled from the data stream, so inefficiency will remain even after this round of improvements. If KIO weren't using an MPA, none of that would be necessary.
Then there is the moment when the message is actually sent. In addition to the context switches, more memory is allocated to send that message across whatever "wire" is being used. That memory copying can be mitigated to some extent by using shared memory instead of pipes or sockets, but then one has to handle the synchronization manually, which is often non-trivial to get both right and fast.
For data-intensive applications, like Akonadi or KIO, those messages are going to be non-trivial in size. To keep things moving along, the messages are often chunked, but that just incurs more context switching. Ah, the old latency/interactivity vs throughput problem.
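A toy illustration of the gap, using Python's pickle as a stand-in for whatever marshalling format the wire actually uses: handing over a reference is essentially free, while a marshal/demarshal round trip touches every byte of the payload, twice, before any context switch or transport cost is even counted:

```python
import pickle
import time

# A payload roughly shaped like a folder of mail headers.
payload = [{"subject": f"mail {i}", "body": "x" * 200} for i in range(10_000)]

# Single process: "passing" the data is just handing over a reference.
t0 = time.perf_counter()
same_object = payload            # no copy at all
reference_pass = time.perf_counter() - t0

# Multi-process: the data must be marshalled, copied across the wire,
# and demarshalled on the other side. Simulate only the (de)marshalling,
# ignoring context switches and the transport itself.
t0 = time.perf_counter()
wire = pickle.dumps(payload)     # marshal: every byte copied once
received = pickle.loads(wire)    # demarshal: every byte copied again
marshal_round_trip = time.perf_counter() - t0

print(f"reference pass:     {reference_pass * 1e9:.0f} ns")
print(f"marshal round trip: {marshal_round_trip * 1e3:.1f} ms "
      f"({len(wire)} bytes on the wire)")
```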
By contrast, in a single-process application the data is simply passed around by pointer or reference. Data copying can usually be avoided altogether, and context switching due to message passing is avoided completely.

Per-process overhead

There is also an inherent cost to each process created in an MPA application. Each process gets its own stack and its own heap allocations. This is measurable, though not hugely significant on modern systems. More measurable are things like the copies of all non-sharable segments of shared libraries.
Most non-trivial applications pull in a fair number of shared libraries, and a large portion of those are stored in memory shared by all users of the library at runtime. Not everything can be shared, however, and those parts get copied into each process that uses the library. This means that MPA applications often carry a significant per-process overhead due to this alone. In turn, this creates an incentive to keep the number of individual processes down, which means fewer processes than one might want in a perfect theoretical world, to some degree diminishing the benefits of MPA.
(A personal note for future research: memory that is released may not get immediately reused and may just sit there, allocated by the kernel's memory manager but unused by the application. This usually takes the form of memory fragmentation. I'm not sure whether this is better or worse in a typical MPA application; I can see arguments in both directions, and it almost certainly depends on the actual application. I'd love to see this measured, however, using the same working set of data in a single-process and an MPA application.)

We built for a single-core world

Code re-use and usage of middleware are two great ways to lower development cost, increase quality (by not rewriting the same code over and over) and ensure consistency between applications. Unfortunately, many of these shared resources were built with a single-core, single-process application world in mind.
Due to this, many MPA applications which could benefit from even more processes (assuming they were otherwise cheap) to spread large tasks into smaller chunks simply cannot do so.
It gets significantly worse when we move into the world of graphical applications. Windowing environments tend to assume that only a single process (and, in the case of X.org, a single thread!) will ever paint into a given window. Even Wayland assumes this as part of its security model. This means that writing an MPA application that has a GUI is typically a non-starter. The answer is generally to create a GUI as a typical single-process application which talks to a "headless" MPA application.
In turn this means even more message passing (and context switches) than one would normally design into an MPA application due to this artificially imposed barrier (from a data usage perspective).
It gets progressively worse when one actually wants parts of the user interface controlled by multiple processes. The first reason for this is input. Just as drawing to a window has a single-process bias, input is delivered to a single process as well. (In the case of X11, that's a simplification, but the exceptions to this make the situation worse rather than better.) That means more processing to figure out which process to send the input event to, more message passing and even more context switching.
Since that all would happen asynchronously, the complexity of handling the events skyrockets. Events generally must be kept in strict temporal order across the entire application; think of double clicking, for instance. Getting events to the right process when that process might be changing its state at the same time is not just hard, it's almost guaranteed to create bugs and other intractable problems.
Beyond input events, the various processes are going to need to coordinate on things like positioning for layouts (which may affect the size of the components in different processes). Things like the ability to shift where parts of a UI component are shown as Plasma does in the system tray become near impossible. This is why Plasma moved away from a multi-process model (XEmbed windows) to a purely data-centric message passing system (StatusNotifiers, which the rest of the world has since been picking up, starting with Unity which adopted it a few years back now).
This is why Plasma could never be done as a fully MPA application. The windowing systems and requirements of input events and UI component coordination make it a non-starter.

We lack the tools

Due to the single-core focus of computing in the past and the rest of the problems above, MPA applications have not been overly common. The more common uses of this pattern, such as in servers handling large numbers of client requests, have little to no need for communication or coordination between the swarm of processes. Probably due to this, we have very few tools that make MPA development easy.
Only in recent years have the obviously-needed tools for things like map/reduce become available in commonly used libraries and languages, and even then it is generally with a focus on use in a single process with multiple threads. MPA is even more exotic and requires the use of third-party libraries to help with parallelization of this sort, if they even exist at all for a given programming language.
Even more fundamentally, however, the glue that holds MPA applications together as units, namely message passing, is not well provided for at all. There has been an increasing amount of work on things like ZeroMQ and AMQP, but these all tend to increase the number of context switches, leave marshalling/demarshalling (and all the rest of any application protocol) up to the application itself, or both.
Then there is the question of managing the swarm of processes. What happens when one crashes? What happens if another process has a dependency on that crashing process? What about when a process is not doing anything useful anymore? These tasks, known as process supervision, are rarely provided for but absolutely critical for getting the most out of MPA designs.
There is just a hell of a lot of work left to the developer writing an MPA application that, were MPAs more common, would probably be language features or at the very least live in widely used libraries. So it should be no surprise that most MPA applications fall short in one or more ways for which there are known solutions.

Is single-process so much better at these things?

It is also possible to have a CPU-cache-hating single-process application, or to spend too much time passing messages to middleware, or to rely on threads for interaction or performance improvements and fail hard in the process (because threads are hard). Yet there are far more tools, far more language features, and far more code to copy-and-paste be inspired by out there for single-process applications.
Cache misses can be measured with tools in the Valgrind suite (for example) and often improved on, while in the MPA case you just have to live with them. So yes, you can write a horrible single-process app, but it is both possible and a lot easier to get it right.
... and of course there are all those bits and pieces that enforce a single-process model, like windowing systems.
In sum, this is why things like node.js are single-process and instead focus on non-blocking I/O and event-driven designs ... and why, despite that simplistic model, they manage higher throughput and are more robust than many other approaches.
So, yes, single process is really so much better at these things.
Which is sad, really, because there are lost opportunities beyond the obvious ones I laid out in the previous blog entry on why MPA rocks. Those reasons are the obvious ones, but there are even more interesting and perhaps less obvious things that MPA can bring, and those are missed opportunities of epic proportions.
It's sad because while single-process, non-blocking-I/O might give node.js good throughput and a simplistic design (good for quality!) it also results in systems that are very hard to develop for (at least with reasonable quality and maintainability) and which have significant drawbacks like real limits on scalability and unpredictable latency.
It's also quite sad because for all the problems above there are known solutions. Better than that, there are implementations of most (all?) of those solutions that are battle tested. Most (all?) are far from perfect, probably because MPA has remained fairly niche given the above, but the problems are not only solvable but they have been solved and we aren't taking advantage of that.
The next few blog entries will cover those aspects: the lost opportunities and the unused solutions. Then hopefully I can get to pulling this all full circle back to the topic of "after convergence" and start discussing social networking and better desktop and mobile applications.
Did you know that for the 5th year, KDE is planning a Developer Sprint which is going to be held in Randa, Switzerland from the 9th to 15th of August this year?
Let me tell you a few words about it if you don’t know what it is all about. These sprints are often the only times when KDE developers gather under the same roof to work on specific parts of our beloved open-source organization. They are a great opportunity to plan and design new features as well as to hack on them. If you are interested, you can find more about previous sprints here.
But things cannot be as accessible as we might want. As you have probably guessed, KDE needs money to make these meetings possible, since it is a non-profit organization which gained its popularity (and raises it day by day) thanks to its great contributors and its great users. These contributors donate their time to help improve the software you love, so if you are a dedicated KDE user, or simply enthusiastic about free and open-source software, you can support the Randa Meetings by making a donation here. Thank you! :)
Remember that wonderful ‘bird out of cage’ feeling FOSS gives you.
A couple of days ago I received an email from Cristian, with two screenshots of KMyMoney running on KDE Frameworks 5.
There’s still lots to be done, but it’s a good start.
One thing where we’ll need help is getting a Qt5 version of KDChart. That’s what we use for charts in our reports. Calligra uses it too, so hopefully we’ll see something come up soon.
Cristian will be attending the Randa meeting in August, to work further on the port. Please, help make the sprint happen, donate to the fundraising and spread the word about it.
Jungle, among other things, is going to be heavily discussed during the 2014 Randa Meetings. If you would like to help out, you can donate to our fundraiser to help pay for the costs of these meetings.
In this blog entry I'm going to quickly cover why dividing a single application into multiple separate processes is a terrific idea in theory. The next entry will focus on why it sucks in practice. I had written an entry that covered both sides of the topic some weeks ago but all I managed to come away from that exercise with was this: "It's too big a topic." :) So ... two blog entries ... and I promise they actually meet up with all the previous entries in the "after convergence" series. There is method to this here madness, I say!
So ... what is a multi-process architecture? Let's quickly look at four examples: the XEmbed-based system tray, KIO, Google Chrome and Akonadi.
The old style system tray icons seen in X11 desktop environments were actually out-of-process windows. There was an X11 based protocol for an application to make an X window and then say "this window is meant to be shown in the system tray". It didn't matter what toolkit the application was written in and any one icon blocking on processing or I/O would not interfere at all with the rest of the system tray. If one application failed, only its icon would disappear and the rest of the system tray would continue on. Clever!
KIO is a file system access library that is part of KDE Frameworks. It dates back to KDE 2.0, which makes it nearly 14 years old as of this writing. Applications request actions on a given URL, such as listing, copying, moving, deleting, reading, etc. KIO handles those requests using "KIO slaves", which are really protocol implementations: there is one for local files, one for HTTP, one for FTP, one for files-over-SSH, one for visible network shares, and so on. The interesting bit for us here is that the slave runs in its own process. So the application, be it Konqueror or Dolphin or Kate, asks KIO for some action on a URL, and KIO spawns an external process that does the actual work and then forwards the results back to the application. The benefits include:
- protocol implementations can be written as "dumb" blocking I/O managers, leading to simpler code and therefore fewer bugs
- all protocol implementations are available to all users of KIO allowing entirely new data access protocols to be added to all KIO-using applications instantly
- the application side is non-blocking; e.g. while Dolphin lists a HUUUUGE directory, the UI can remain responsive
- slaves can be shared between processes, allowing (in theory) more aggressive caching of results
- if one protocol crashes, the application doesn't crash
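The shape of this pattern is easy to sketch. Here is a hypothetical, much-simplified stand-in for a KIO slave in Python: a worker process does dumb, blocking work and streams results back over a queue, so the application side never blocks on the protocol, and a crash in the worker cannot take the application down. (The names and the fake directory listing are invented for illustration; a real slave speaks an actual protocol.)

```python
from multiprocessing import Process, Queue

def slave(url, results):
    """The 'slave': a dumb, blocking protocol implementation running
    in its own process. Blocking here cannot freeze the application."""
    entries = [f"{url}/file{i}" for i in range(3)]  # pretend listing
    for entry in entries:
        results.put(entry)
    results.put(None)  # end-of-listing marker

def list_url(url):
    """The application side: spawn the slave and collect results as
    they arrive. If the slave crashed, only the listing would fail."""
    results = Queue()
    worker = Process(target=slave, args=(url, results))
    worker.start()
    listing = []
    while (entry := results.get()) is not None:
        listing.append(entry)
    worker.join()
    return listing

if __name__ == "__main__":
    print(list_url("ftp://example.org/pub"))
```

In a real GUI application the receiving side would of course consume the results asynchronously (via the event loop) rather than in a blocking loop, but the process boundary is the point here.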
Akonadi is a personal information data cache. It provides a generic API for accessing all sorts of rather disparate types of data, such as email, calendars, contacts, todo lists, blog entries, ... Each account you set up is managed by a separate process, overseen by a central Akonadi process which does nothing but the rather safe job of orchestration; safe enough that "no crashing" is pretty much guaranteed. If one account handler crashes, the entire system keeps running. New types of accounts (and types of data!) can be added simply by installing a new Akonadi handler, making Akonadi broadly general. Even filtering of local mail and indexing content are done using Akonadi processes, which means these things can happen in parallel without blocking each other. Clever!
While these examples do rather different things, we can start to see a common pattern:
- less application GUI blocking (better user experience)
- easy concurrency (more performance)
- the implementation itself can be completely blocking, leading to simpler (and therefore less buggy and easier to maintain) code
- easily extended (much like application plugins)
Why that is so is the topic of the next entry ...
Some changes took place in the GUI: I decided to separate the Tags and Labels tabs into two left-sidebar widgets, like this:
Some incorrect XML was being sent to the KIOSlave; the mistake has been fixed, and now the “Labels Tree-View” and “Advanced Search Window” results are identical.
The next task is to patch the AlbumHistory mechanism to handle the new Labels tree-view history. But first, some bugs should be fixed in the mechanism itself; I hope it won’t take too much time.
Would you like to see this new tree-view without a root icon like the above image or with a root icon like this:
Hey KDE people! I'm Claudio and, again, I'm the only student doing GSoC work on improving the Gluon Project. This does mean there's a lot to do, but often this translates into a lot of fun.
My project basically consists of maintaining the Gluon Player and the distribution service in general, from the server, to the player library that handles OCS requests, to the actual QML client. This meant porting the Qt4 player to Qt5, which led to a partial rewrite and re-architecting. After the porting I started implementing "friends" features. This means that YOU, with a Gluon account, can ask another Gluon user for friendship, and they can accept. This is the basis of the social features we're introducing.
Oh, and did you know we have a working, partial, open-source implementation of the OCS protocol? I really want you to know this, because when I started there was no open server-side implementation, so I had to start writing one. As of now, the server supports CONTENT, PERSON, FRIEND, FAN, COMMENT and CONFIG, with some calls not exactly following the standard (the nested-comments feature is missing). This server is also tested against our beloved libAttica. I'll publish the source as soon as I can, in a personal scratch repo for now, but I'd like to move it to the KDE infrastructure (admins?).
Ah, and I'll spend the remainder of my project implementing XMPP-based real-time chat between users, as well as activity streams and other social features.
What is Gluon really missing, then?
Unfortunately I'm focusing only on the distribution and user parts, but since Gluon is a platform for creating, playing and deploying games, Creator and Engine also need attention, care and, most of all, a port to Qt5. Our idea is to give you a usable release (oh, I really want to make my Gluon Konqui game). To make this possible, please donate to the Randa Meetings, where Gluon developers will meet :)
See you in a few days!
While Elon Musk has committed himself to the task of putting a human on Mars in the next couple of years, KDE has set itself a similar goal on a shorter scale: putting a (large) bunch of KDE developers, including me, in Randa, Switzerland for the traditional Randa Meetings.
While this is my first post on Planet KDE in a long, long while, I haven’t been entirely away from KDE development (although my Masters has taken a large chunk of my available time for FLOSS contributions), and I wanted to spend my (northern hemisphere) summer developing some of the ideas I have had in mind for KDE software, and KDE Edu in particular. Unfortunately, Akademy this year will be held during my classes, so I decided to attend the Randa Meetings instead. There, I expect to work with my GSoC student to finalize the project we have around Kig, finish the migration of Kig to KDE Frameworks which I started a couple of months ago, and prototype a new educational application called Workbook, which I will explain in a separate post.
And like me, many other developers in KDE are attending the Randa Meetings this year with different plans and projects but with the common goal of improving KDE and Free Software in general. But this means we need to put real money to make this meeting happen, and this is where you can help: you still have a couple of days to donate to our fundraiser to pay for the costs of these meetings. Help us spread the word and reach our funding goal!
First of all, I’m very happy to say that I’ve passed the GSoC midterm evaluation! I can now continue my work on the QML/JS language support plugin of KDevelop for the rest of the summer! By the way, here is a new blog post listing many new features and bug fixes. Most of my work of the past days consisted of fixing nasty bugs (some of them took several days to fix), but I’ve also implemented nice things that deserve screenshots.

QML enhancements
As explained in a previous blog post, QML component instances inherit from an anonymous base class. This class has no name and used to be displayed as <class>. Now, this useless information is replaced by the actual QML component from which the QML object inherits.
Several missing QML components have been added, along with their most commonly used “bound properties”. These components and properties can be found in the Qt documentation but are not described in the plugin.qmltypes files shipped with Qt, so I had to edit those files manually in order to add the missing information. Now that this is done, the famous Component.onCompleted binding is recognized.
The last QML enhancement of the past days is the ranking of code-completion proposals (this is not yet in the Git repository but will land very shortly). When assigning a value to a variable, or comparing a variable with a value, or binding a value to a QML property, the QML/JS plugin will show you which code-completion proposals have the right type (or are a function having the right return type):
Classes are highlighted in light green because they might also end up being of the correct type: assigning a bool to an int is incorrect, but a class may have a method or an attribute of the correct type, so highlighting classes allows the user to distinguish them from variables having a completely wrong type.

User interface improvements
KDevelop language plugins mainly consist of a Definition-Use Chain builder (the thing that tells KDevelop where variables/classes/functions/methods are declared and used, and which types they have) and a code-completion infrastructure (that auto-completes variable names, import statements, etc.), but such plugins can also customize the way KDevelop looks and interacts with the user.
The QML/JS plugin does not change much of KDevelop, but provides a very nice feature: small widgets that appear when the user edits a property where a helper widget may help. There is a color picker widget, one that can be used to choose the size of a font, another one that helps visualize margins and spacing, etc. These widgets are currently quite limited because KDevelop depends on Qt4 and cannot use the wonderful QtQuick.Controls module, but as soon as KDevelop becomes Qt5-based, there will be widgets for choosing font families and previewing animation easing types.
Oh, and the most interesting point about these widgets is that they are implemented in QML, and they are fully supported by the QML/JS plugin! I was able to edit them entirely in KDevelop, and the experience was great: proper syntax highlighting, complete code completion, types inferred everywhere, modules listed, etc. There are still missing features, like hints for function calls (showing the signature of the function you are calling as you type), but the plugin is already in pretty good shape.

Named module imports
The color picker described in the above section is based on the ColorPicker component of Plasma. I simply took it and back-ported it to QtQuick 1.0. When I first read the source code of this component, I discovered a new import syntax: import X as Y. This was not yet supported by KDevelop, so I implemented support for it. It took me two or three days and required several changes all over the place, but it finally worked!
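For reference, the import syntax in question looks like this (module and component usage are just examples): the module is bound to a local qualifier instead of having its components imported into the global namespace.

```qml
// "import X as Y": components of the module must be referenced
// through the qualifier "Quick".
import QtQuick 1.1 as Quick

Quick.Rectangle {
    width: 200; height: 100
    Quick.Text { text: "qualified component names" }
}
```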
The above screenshot shows that named modules are recognized by the code-completion infrastructure, and this screenshot shows that components declared inside those modules can be used in the code.

I’m going to Randa!
This is a typical situation when you program a GUI for an application. You have just created a new control, you start your application and… no control. But why? Is it obscured? Is it misplaced? Is it completely transparent or set to ‘invisible’? Is my custom OpenGL stuff broken, or is the item actually not created for some reason? Checking all these cases manually can be a time-consuming and exhausting task.
“Come on, it’s got to be easier than this!”
In the web-development world people are used to tools like Firebug or the Chrome developer tools, which allow you to debug websites, showing you the HTML tree with CSS properties and highlighting selected elements. They cover exactly this use case.
In the Qt-world we have GammaRay, the Qt introspection tool, to make our lives easier in regard to real-time analysis of applications. Just as you can introspect the skeleton of a human with an X-ray unit, so you can introspect the skeleton of Qt applications with GammaRay. GammaRay is able to show you the internal tree of QObjects and their status and even allows you to manipulate the application’s state in real time.
One of the many handy tools GammaRay provides is a module for analysing QWidget-based GUIs. You can select widgets, check their position and visibility, etc. A similar tool exists for GUIs based on QGraphicsView. The recent 2.1 release of GammaRay completes the collection of GUI tools with a tool for inspecting QtQuick2 applications.
Let’s check how it will help us track down all the possible causes for the absence of our button…

“Could it be that it doesn’t exist at all?”
The most basic feature of GammaRay’s Quick inspector is the tree view of all QQuickItems in the item tree. You will discover it on the left side of the UI. Apart from answering the most basic question “Which items do exist?” and providing navigation, it will already provide you with some additional information:
- “Which items are actually visible?” (items with visible == false will be greyed out)
- “Which item has focus?” (more on that later on)
- it will warn you if it suspects something is wrong with an item (e.g. items that are outside the visible area but still have visible == true are either misplaced or unnecessarily slow down rendering)
- it will discreetly highlight items that receive events (you will notice a subtle purple text color, fading back to black, after you e.g. click on a mouse area)
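The "missing button" situations the tree view helps to distinguish can all be expressed in a few lines of QML; a sketch of the usual suspects (all ids and geometry values hypothetical):

```qml
// Four ways an item can "disappear" without any error message:
import QtQuick 2.0

Item {
    width: 400; height: 300

    Rectangle { x: 500; y: 10; width: 80; height: 30 }   // misplaced: outside the visible area
    Rectangle { width: 80; height: 30; visible: false }  // invisible: greyed out in the item tree
    Rectangle { width: 80; height: 30; opacity: 0 }      // fully transparent
    Rectangle { width: 0; height: 0 }                    // size zero
}
```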
If you have used GammaRay before, this one will be familiar to you. To the right of the item tree there is a detail view, providing information about the state of the item. Just as in the object inspector, you can:
- see all the properties with their values (hint: since 2.1 you can right-click QObject* properties to get a detailed view in the appropriate tool) and change them inline
- inspect signal-slot connections and log signal-emissions
- directly invoke slots and Q_INVOKABLE methods
- check Qt-known enums and
- class-info data
Any changes you make to properties in GammaRay will directly influence the original app, so you can easily turn a green rectangle red or set an item invisible to check what’s below it.

“Might it be size ‘zero’ or misplaced?”
The Quick inspector tool has a live-preview of the QtQuick scene inside the GammaRay window. Here you can:
- zoom in to a specific area and measure distances in the scene in order to achieve pixel-perfect UIs.
- send mouse and key events to the app, so you can control it remotely. This especially comes in handy if you need to debug a device that is not sitting right next to you.
- activate some cool hidden debugging features that QtQuick ships by default (more on that later on).
- easily check the geometry and positioning of an item.
The last-mentioned feature is what you need to answer our question whether the geometry is right. If you select an item in the item tree, the preview will be annotated with information about the geometry and layout of that selected item. The item’s bounding rect is highlighted in red. Anchor layouts, including margins and offsets, or the x/y offset, are displayed graphically, depending on whatever is used for layout. Transformed items will additionally have the original size and position displayed, as well as the transformation origin.

“Is it possibly obscured?”
GammaRay has quite some helpful tools for debugging QtQuick applications. But it is not the only one. QtQuick itself already provides some very cool stuff for debugging, though most of it is quite hidden.
Since Qt 5.3, the QtQuick scenegraph can show visualizations of various things directly inside the application, controlled by the environment variable QSG_VISUALIZE set to either “clip”, “overdraw”, “batches” or “changes”. One of them (overdraw) helps to spot items that are completely or partly obscured by other items.
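Enabling one of these render modes is just a matter of setting the environment variable before launching the application (the application name below is a placeholder):

```shell
# Pick one of: clip, overdraw, batches, changes (Qt >= 5.3)
export QSG_VISUALIZE=overdraw
echo "$QSG_VISUALIZE"
# then start your QtQuick application from this shell, e.g.:
# ./myapp
```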
It does not make sense for GammaRay to reinvent the wheel here and recreate the same features, but GammaRay can make them available and even enhance them by making them toggleable in real time. The toolbar in the top-left corner of the preview contains icons to enable and disable the visualize render modes dynamically. See the link for more information about this scenegraph feature.

“Or is the scenegraph or OpenGL stuff of my custom item broken?”
When writing complex, fancy QtQuick applications, you might at some time come to the point, where you need a custom item type, with its own geometry, textures and shaders. QtQuick provides tremendous flexibility here, while still hiding it behind a very nice API, in case you don’t need it. Everything you can do with OpenGL is possible.
With all the possibilities of OpenGL come all the possibilities of mistakes. In fact, on the OpenGL side almost the same questions reappear: Does the scenegraph node exist? Are the properties correctly set? Is the geometry correct? What do the shaders do?
While being a full OpenGL debugger is out of scope for GammaRay (there’s apitrace and vogl for that), providing some high-level insights into the scenegraph nodes and their OpenGL properties is not. The tree view of items mentioned earlier has two tabs: “Items” and “Scene Graph”. “Items” shows the QQuickItems, as described above; “Scene Graph” shows the graph that is generated by the QtQuick engine. Each item provides one or more scene graph nodes that hold a transformation matrix, an opacity value, a geometry, textures and/or shaders. These nodes are then composed by the QtQuick renderer into a scene that gets passed to the GPU for rendering.
As with items, you can use it to see if all nodes exist and check their basic properties. As soon as you select a Geometry Node, you will however discover two new tabs in the property-view:
- Geometry: This tab displays a table of the raw OpenGL vertex data and draws the vertex coordinates as a wireframe. You can check the vertex positions and the properties they will pass to the vertex shader.
- Shaders: This tab simply offers you the possibility to have a look at the shaders. It basically allows you to check whether the shaders are correctly set.
We already have a lot of ideas (e.g. list uploaded textures and set them in context, display in- and output values for the shaders, etc.), but this first batch of features already helps track down quite a lot of common errors.

Focus debugging – “Where does my key-press get stuck?”
Finally, the GammaRay QtQuick inspector is not just about finding missing buttons. Another common source of problems in the context of QtQuick, where tools have been missing so far, is focus. You have added a text edit and set focus: true, but when you run it, the text you enter arrives nowhere. You will want to know: “My item should have focus. Why doesn’t it have the active focus?” (hint: there’s a difference between focus and active focus), “Which item does have the active focus?” and finally “Where does my key press actually arrive?” (this is the active focus item in the first place, but if that doesn’t accept it, it will be passed on to its parent).
If you take a deeper look at the item tree, you will find that some items have one or more small icons beside them, like a warning icon or spotlight icons. While the warning icon states that there might be something wrong with this item (as mentioned earlier), the spotlight icons are the important bits for focus debugging. Basically, the spotlight marks items that have focus (focus == true). The turned-on spotlight icon marks the one item that has active focus (activeFocus == true) and thus will actually receive key events, while the turned-off spotlight marks items that have focus but no active focus (e.g. an item that has focus set to true but is on a hidden tab or in an inactive window and thus won’t receive events). These icons answer the first two questions.
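The difference between focus and activeFocus can be seen in a minimal sketch like this (two focus scopes, only one of which is active):

```qml
// Both TextInputs have focus == true, but only the one inside the
// active focus scope has activeFocus == true and receives key events.
import QtQuick 2.0

Item {
    FocusScope {
        focus: true               // the active scope
        TextInput { focus: true } // focus: true, activeFocus: true
    }
    FocusScope {                  // inactive scope
        TextInput { focus: true } // focus: true, activeFocus: false
    }
}
```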
The last question is answered by another feature I mentioned in the context of the item tree view: the purple highlight on items in the tree when they receive an event. The text color turns purple and fades back to black. This highlight is triggered by any kind of event, including mouse and key events. In this way, you can easily trace which items receive your key press.
Stop digging, start hacking again!
The post Analysing QtQuick apps with GammaRay or “Why is my button gone?” appeared first on KDAB.
In a few days, the 15th “Rencontres Mondiales du Logiciel Libre” will begin in Montpellier, from the 5th to the 11th of July. The event will begin with a weekend for the general public at the “Village du Libre”, where we will have a KDE stand to show the cool software from our community.
Then, for the whole week, there will be conferences about several topics; the full schedule is here. I’ll have the pleasure of presenting a talk about recent news on free software for 2D animation on Thursday at 10:30 am, followed by a workshop about free creation with the Krita painting software from 2 to 5 pm.
Come say hello at the KDE stand or enjoy the conferences and workshop if you’re around!
On a side note, a little reminder about two crowdfunding campaigns:
-The Kickstarter to boost Krita development just reached the first goal today! We now have 9 days left to reach the next goal, which will allow us to hire Sven together with Dmitry for the next 6 months.
-The Randa Meeting 2014 campaign, this meeting will allow contributors from key KDE projects to gather and get even more productive than usual.
So think about helping those projects if you haven’t already.
With 518 backers and 15,157 euros, we've passed the target goal and we're 100% funded. That means that Dmitry can work on Krita for the next six months, adding a dozen hot new features and improvements to Krita. We're not done with the kickstarter, though, there are still eight days to go! And any extra funding will go straight into Krita development as well. If we reach the 30,000 euro level, we'll be able to fund Sven Langkamp as well, and that will double the number of features we can work on for Krita 2.9.
And then there's the super-stretch goal... We already have a basic package for OSX, but it needs some really heavy development. It currently only runs on OSX 10.9 Mavericks, Krita only sees 1GB of memory, there are OpenGL issues, there are GUI issues, there are missing dependencies and missing brush engines. Lots of work to be done. But we've proven now that this goal is attainable, so please help us get there!
It would be really cool to be able to release the next version of Krita for Linux, Windows and OSX, wouldn't it :-)
And now it's also possible to select your reward and use PayPal, which Kickstarter still doesn't offer. The available reward tiers are:
- Donate: €5,00
- Postcard: €15,00
- Postcard and stickers: €25,00
- Digital download of the Muses DVD: €50,00
- Physical copy of the Muses DVD: €75,00
- USB stick with Krita: €100,00
- Comics with Krita DVD plus signed comic: €150,00
- Dedicated tutorial by Wolthera: €250,00
- Pick your own priority: €750,00
- A month of dedicated development: €2.500,00
Today, after a long period of hard work and preparation, having deemed the existing WebODF codebase stable enough for everyday use and for integration into other projects, we have tagged the v0.5.0 release and published an announcement on the project website.
Some of the features that this article will talk about have already made their way into various other projects a long time ago, most notably ownCloud Documents and ViewerJS. Such features have been mentioned before in other posts, but this one talks about what is new since the last release.
The products that have been released as ‘supported’ are:
- The WebODF library
- A TextEditor component
- A Firefox extension
WebODF has had an Editor application for a long time. Until now it was not a feature ‘supported’ for the general public, but was simply available in the master branch of the git repo. Over the months we worked with ownCloud to understand how such an editor would be integrated into a larger product, and then, based on our own experimentation for a couple of awesome new to-be-announced products, designed an API for it.
As a result, the new “Wodo” Editor Components are a family of APIs that let you embed an editor into your own application. The demo editor is a reference implementation that uses the Wodo.TextEditor component.
There are two major components in WebODF right now:
- Wodo.TextEditor provides for straightforward local-user text editing, by providing methods for opening and saving documents. The example implementation runs 100% client-side: you can open a local file directly in the editor without uploading it anywhere, edit it, and save it right back to the filesystem. No extra permissions required.
- Wodo.CollabTextEditor lets you specify a session backend that communicates with a server and relays operations. If your application wants collaborative editing, you would use this Editor API. Since its use-cases and implementation details are significantly more complex than the Wodo.TextEditor component’s, it is not a ‘supported’ part of the v0.5.0 release, but I’m sure it will be in the next release(s) very soon. We are still figuring out the best possible API it could provide, while not tying it to any specific flavor of backend. There is a collabeditor example in WebODF master, which can work with an ownCloud-like HTTP request polling backend.
Both components provide options to configure the editor and to switch certain features on or off.
Of course, we wholeheartedly recommend that people play with both components, build great things, and give us lots of feedback and/or Pull Requests. :)New features
Notable new features that WebODF now has include:
- SVG Selections. Most modern browsers cannot show multiple selections in the same window, yet this is an important requirement for collaborative editing, i.e., the ability to see other people’s selections in their respective authorship colors. For this, we had to implement our own text selection mechanism, without relying entirely on browser-provided APIs.
Selections are now smartly computed using dimensions of elements in a given text range, and are drawn as SVG polygon overlays, affording numerous ways to style them using CSS, including in author colors. :)
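The idea can be sketched in a few lines of plain JavaScript (this is not WebODF's actual implementation): given the bounding rect of each rendered line in the selection, walk down the right edges and back up the left edges to obtain the points of an SVG polygon.

```javascript
// Build the "points" attribute of an SVG <polygon> outlining a
// selection that spans several rendered lines of text.
function selectionOutline(rects) {
    // rects: per-line bounding boxes, top to bottom: {x, y, w, h}
    const pts = [];
    for (const r of rects) {                       // down the right side
        pts.push([r.x + r.w, r.y], [r.x + r.w, r.y + r.h]);
    }
    for (let i = rects.length - 1; i >= 0; i--) {  // back up the left side
        const r = rects[i];
        pts.push([r.x, r.y + r.h], [r.x, r.y]);
    }
    return pts.map(p => p.join(",")).join(" ");
}

// A single-line selection degenerates to its rectangle:
// selectionOutline([{x: 0, y: 0, w: 10, h: 5}]) → "10,0 10,5 0,5 0,0"
```

The resulting string can then be set as the points attribute of an SVG polygon overlay, which CSS can style in the author's color.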
- Touch support:
- Pinch-to-zoom was a feature requested by ownCloud, and is now implemented in WebODF. This was fairly non-trivial to do, considering that no help could be taken from touch browsers’ native pinch/zoom/pan implementations, because those only operate on the whole window. With this release, the document canvas will transform with your pinch events.
- Another important highlight is the implementation of touch selections, necessitated by the fact that native touch selections provided by the mobile versions of Safari, Firefox, and Chrome all behave differently and do not work well enough for tasks which require precision, like document editing. This is activated by long-pressing with a finger on a word, following which the word gets a selection with draggable handles at each end.
- More collaborative features. We added OT (Operation Transformations) for more new editing operations, and filled in all the gaps in the current OT Matrix. This means that previously there were some cases when certain pairs of simultaneous edits by different clients would lead to unpredictable outcomes and/or invalid convergence. This is now fixed, and all enabled operations transform correctly against each other (verified by lots of new unit tests). Newly enabled editing features in collaborative mode now include paragraph alignment and indent/outdent.
- and type in your own language (IBUS is great at transliteration!)
- Benchmarking. Again thanks to peitschie, WebODF now has benchmarks for various important/frequent edit types.
- Edit Controllers. Unlike the previous release, when the editor had to specifically generate various operations to perform edits, WebODF now provides certain classes called Controllers. A Controller provides methods to perform certain kinds of edit ‘actions’ that may be decomposed into a sequence of smaller ‘atomic’ collaborative operations. For example, the TextController interface provides a removeCurrentSelection method. If the selection is across several paragraphs, this method will decompose the edit into a complex sequence of 3 kinds of operations: RemoveText, MergeParagraph, and SetParagraphStyle. Describing larger edits with smaller operations is a great design, because then you only have to write OT for very simple operations, and complex edit actions all collaboratively resolve themselves to the same state on each client. The added benefit is that users of the library have a simpler API to deal with.
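As a toy illustration of this decomposition (not the real WebODF API or operation format — names and object shapes are invented), removing a selection that spans several paragraphs can be broken down into RemoveText, MergeParagraph and SetParagraphStyle steps:

```javascript
// Toy sketch of a Controller decomposing one edit "action" into
// atomic operations; paragraphs are plain strings here.
function removeSelection(paragraphs, start, end) {
    // start/end: {p: paragraphIndex, offset: characterOffset}
    const ops = [];
    if (start.p === end.p) {
        ops.push({type: "RemoveText", p: start.p, from: start.offset,
                  len: end.offset - start.offset});
        return ops;
    }
    // tail of the first paragraph, whole middle paragraphs, head of the last
    ops.push({type: "RemoveText", p: start.p, from: start.offset,
              len: paragraphs[start.p].length - start.offset});
    for (let i = start.p + 1; i < end.p; i++) {
        ops.push({type: "RemoveText", p: i, from: 0, len: paragraphs[i].length});
    }
    ops.push({type: "RemoveText", p: end.p, from: 0, len: end.offset});
    // merge the emptied paragraphs upwards, last to first
    for (let i = end.p; i > start.p; i--) {
        ops.push({type: "MergeParagraph", into: i - 1, from: i});
    }
    // keep the first paragraph's style for the merged result
    ops.push({type: "SetParagraphStyle", p: start.p, style: "first"});
    return ops;
}

// Minimal interpreter so the decomposition can be checked:
function apply(paragraphs, ops) {
    const ps = paragraphs.slice();
    for (const op of ops) {
        if (op.type === "RemoveText") {
            ps[op.p] = ps[op.p].slice(0, op.from) + ps[op.p].slice(op.from + op.len);
        } else if (op.type === "MergeParagraph") {
            ps[op.into] += ps[op.from];
            ps.splice(op.from, 1);
        } // SetParagraphStyle carries no text change in this sketch
    }
    return ps;
}
```

Deleting from the middle of the first paragraph to the middle of the third yields six atomic operations, and replaying them converges to the expected single merged paragraph; the OT machinery only ever has to reason about the three simple operation types.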
We now have some very powerful operations available in WebODF. As a consequence, it should now be possible for new developers to rapidly implement new editing features, because the most significant OT infrastructure is already in place. Adding support for text/background coloring, subscript/superscript, etc should simply be a matter of writing the relevant toolbar widgets. :) I expect to see some rapid growth in user-facing features from this point onwards.

A Qt Editor
Thanks to the new Components and Controllers APIs, it is now possible to write native editor applications that embed WebODF as a canvas, and provide the editor UI as native Qt widgets. And work on this has started! The NLnet Foundation has funded work on writing just such an editor that works with Blink, an amazing open source SIP communication client that is cross-platform and provides video/audio conferencing and chat.
To fulfill that, Arjen Hiemstra at KO has started work on a native editor using Qt widgets, that embeds WebODF and works with Blink! Operations will be relayed using XMPP.
Other future tasks include:
- Migrating the editor from Dojo widgets to the Closure Library, to allow more flexibility with styling and integration into larger applications.
- Image manipulation operations.
- OT for annotations and hyperlinks.
- A split-screen collaborative editing demo for easy testing.
- Pagination support.
- Operations to manipulate tables.
- Liberating users from Google’s claws cloud. :)
If you like a challenge and would like to make a difference, have a go at WebODF. :)