Inotify is a subsystem of the Linux kernel that monitors file systems and reports changes to user space applications. In a shell like Bash, the tool inotifywait can be used to trigger scripts or other actions based on Inotify events.
The kernel subsystem Inotify monitors file systems for events such as file accesses, changes, creations, and deletions. User space programs like a desktop search engine can connect to Inotify and ask it to report event X on file Y. When the event takes place, Inotify notifies the user space program, which can then, for example, re-index the file.
For shell environments, tools are already available to talk to Inotify, set watches on certain files and directories, and receive the reports: inotify-tools.
The inotify-tools package contains two main programs: inotifywait and inotifywatch. The second one, inotifywatch, simply gathers statistics about file events. Much more interesting is the first tool, inotifywait. It asks Inotify to watch files and directories (even recursively) and can trigger arbitrary actions on various Inotify events. The tool also offers the usual command line flags to make life easier: format strings for the output, the ability to run as a daemon, reading the file list from a file, writing events to a file, exclude patterns, and so on.
For example: imagine you need to review a huge LaTeX document, so you occasionally have to change some smaller things but do not want to fire up "make" each time you change something. If you are lazy, you can simply use inotifywait to monitor the directory containing your LaTeX files and call make each time something changes:

$ while true; do inotifywait -r -e modify --exclude=".swp" . && make; done
Setting up watches.  Beware: since -r was given, this may take a while!
Watches established.
./mainmatter/ MODIFY File15.tex
latexmk -pdf -r ./pdflatexmkrc
Latexmk: This is Latexmk, John Collins, 11 Nov. 2012, version: 4.35.
**** Report bugs etc to John Collins <collins at phys.psu.edu>. ****
Latexmk: applying rule 'pdflatex'...
Rule 'pdflatex': File changes, etc:
   Changed files, or newly in use since previous run(s):
      'mainmatter/File15.tex'
------------
Run number 1 of rule 'pdflatex'
------------
In the above example inotifywait is called with the flag -e modify, which restricts it to monitoring modification events only. Temporary vim files are excluded and thus ignored. The watches are set up and the program waits – until Inotify reports the event "MODIFY" on "File15.tex" in the sub-directory "mainmatter". After that, "make" is called, which in turn launches "latexmk". The while loop ensures that after each change inotifywait is started again and keeps monitoring the files and directories.
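A common refinement of the loop above is inotifywait's monitor mode (-m), which keeps one set of watches alive instead of re-establishing them after every event. A sketch (the build command is whatever your project uses):

```shell
# -m keeps inotifywait running and prints one line per event, so the
# (potentially slow) recursive watch setup happens only once.
inotifywait -m -r -e modify --exclude '\.swp' . |
while read -r dir event file; do
    echo "$event on $dir$file, rebuilding"
    make
done
```

The per-event fields on each output line (watched directory, event name, file name) are read directly into shell variables, so the action could also be made conditional on the file that changed.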
To summarize: inotifywait is an easy and quite usable way to access the Inotify subsystem from simple, everyday shell scripts and tasks. It makes life easier without making things complicated.
If you know people speaking German who are interested in this topic, I also wrote a German version of this howto for my employer’s blog.
Exciting times. I am working on the lower-right corner of the desktop right now, and thought I could give a quick visual update of progress there, as well as some sense of direction where we’re heading, how the user interface evolves, Plasma’s new architecture, and underlying software stack and the device spectrum. A whole mouthful, so grab a cup of tea.
Two areas in particular are catching my attention these days: the notification area ("system tray", that row of icons which shows you the status of all kinds of things), and the clock with its calendar popup. The calendar shows quite nicely some things we want to pay attention to in Plasma 2: consistency and elegance. We are making more use of pronounced typography, fixing alignment problems, and looking for a generally more elegant way of presenting the common functionality. In that, the plan is not to make a lot of changes to the functionality itself, not to cut down the UI, but to polish what is already there. Reducing the workspace's mental friction makes the tools take a step back and gives more room to the content and interaction being presented. Doing that, the workspace should be functional, yet elegant. The migration should feel like an upgrade of something familiar. We want to make it functionally equivalent, but more polished. On top of that, we're readying the technology for future use cases and the evolution of the underlying technology stack.
QtQuick 2 actually makes these things a lot easier, as it is much more reliable at calculating font metrics, which we need in order to (sub-)pixel-perfectly align text and other UI elements. Trying to make this exact in Qt 4 on top of QGraphicsView was a shortcut into madness. Ever so slightly off font metrics and wonky layouts get you to tear your hair out pretty quickly. This is much better now (though certainly not perfect in all areas), so it allows us to finally fix those jarring little misalignments that nag the eye. The calendar already does it pretty well and serves as a nice example. This implementation takes the physical size of a pixel on the screen into account by correcting for DPI in the whole layout, so it works nicely at all resolutions and pixel densities. On higher pixel-density displays the rendering gets more detail and fonts look neater, but the size of interaction areas and the effective size on the screen don't change much. The screenshots have been taken on a 170 DPI display, so if the fonts seem huge on your display (or small, as I hope for you), that would be the reason.
In the notification area, you might notice that the widgets that have been living in there are now contained in the same popup. This results in less window management and layering of small popups in the notification area, clearer navigation and a cleaner look. The currently active icon has slightly pronounced visuals compared to the others.
The calendar will of course show information about tasks and agenda (this part doesn't work yet). One neat thing the new architecture allowed us to do very easily is lazy-loading the calendar. As just loading the calendar can result in quite a bit of loading underneath, deferring it until it is needed speeds up start-up time and lowers memory consumption in many cases.
Plasma 2 is a converging user interface. It allows controlling the look and feel at different levels. On the lower-level / higher-detail side of the scale, we look at adjusting input areas, the sizing of UI elements, and interaction schemes by swapping out and "overriding" parts of the UI toolkit with, for example, touch-friendly versions of widgets. On a higher level, we support laying out the furniture of the workspace (think of the alt+tab window switcher, log in, lock, etc.) through more code sharing and a different logic for how these parts are loaded (and swapped out). Plasma shell allows dynamic switching of shell layouts at run-time. The shell is equipped to morph between workspace presentations, so should your tablet suddenly get a keyboard and mouse plugged in, it changes its disguise to a traditional (actually your own customized) desktop layout. While this is cool, in reverse it allows us to isolate changes that are made to suit other devices from the "Desktop Experience". Just because the workspace supports multiple devices, the user doesn't get the lowest common denominator, but specialization. Different devices are different for a reason, and so should the UI be. "Mobile-friendly" shouldn't mean "misses features", but "responsibly designed" (excuse the pun).
Our tricks allow the same system to be used on a range of devices, with the user interface adapting to the specifics of the hardware it runs on, such as input and display, but also to usage scenarios specific to that device. Much like the Linux kernel, which "mostly figures out how to run properly on the device it's booted on", and which can be configured as small or as big as one wants, the user interface uses "UI plugins" at different layers, detects changes, and adapts to the form factor. You use a media center "driver" if you want to use it on the TV in your living room, a tablet "driver" on your tablet on the go, a desktop driver on the laptop, and you can switch the device's UI when needed. Laptop or tablet + HDMI cable ~= media center, isn't it?
You see, many different construction sites. We're working a lot on all these things and you should definitely consider joining. Nothing is set in stone yet, and you should consider the imagery functional mock-ups rather than the final thing. It's not perfect, it's lacking in all kinds of places, it even crashes in some, and what is presented is just a snapshot of work in progress. Many details remain to be hashed out. Still, I'm running Plasma 2 about half of the time on my laptop now. It's just about becoming usable and almost dogfoodable for more than a very small handful of people with an elevated pain threshold and a debugger at hand.
“When?”, I hear you ask. We’re aiming at a stable, end-user ready release of the new Desktop shell in summer 2014, at the end of Q2. One of the next milestones will be a tech preview, which is planned for mid-December, so just about a month away from today. From December, when we’ll reach the point of having the basic functionality in place, we’ll spend time on filling in the missing bits, making sure everything runs on Wayland, and on polishing and quality improvements. Integrating additional workspaces, such as Plasma Active and Plasma MediaCenter, is also on next year’s roadmap. These will become the tablet and media center drivers, respectively.
I’ve often been missing an easy way to browse through audio sample files. Some composition programs have one embedded, but not all. And sometimes I just want to check a sample pack quickly, without opening a big, complete DAW application.
As I was looking for an exercise to practice more QML and remembered this, I started writing this little application: Sample Explorer QML
It is a very simple kind of music player, but with interactions designed specifically for one precise use case: browsing through a collection of audio sample files.
You may wonder how it is different from a classic music player, but try opening a collection of drum samples (or other very short samples of this kind) in a playlist and see how inconvenient that is for this purpose.
So unlike a regular music player, it doesn’t play all files in the list, only the selected one.
Also it auto-plays when you select a new file.
Now I can easily browse through big sample folders to quickly find what I need.
And as I thought it might be useful for someone else too, I shared it on gitorious:
I added the word QML to the name because for now it’s a pure QML application, using only core QML types. It was fun to see how far I could go with that, as core QML already provided all the components I needed for this application.
Maybe later I’ll add some fancier features by mixing in some C++ (like a spectrogram view or other kinds of analyzers), but for now it already does its main job.
I’ve been trying very hard to get this into a shape where I can really show off screenshots, and to do so before the next PIM sprint (which starts in just 2 days!). What you’re about to read is from a project that is highly work in progress! It is by no means anywhere near alpha quality. Also the design is not final by any means. The stuff you see in the screenshots below is the intended direction, but even here a lot is still missing.
So much for the little disclaimer.

Accretion, the name
Accretion has a meaning. In astrophysics the meaning is:
The first and most common is the growth of a massive object by gravitationally attracting more matter, typically gaseous matter in an accretion disk. Accretion disks are common around smaller stars or stellar remnants in a close binary or black holes in the centers of spiral galaxies. Some dynamics in the disk are necessary to allow orbiting gas to lose angular momentum and fall onto the central massive object. Occasionally, this can result in stellar surface fusion. (See: Bondi accretion)
The second process is somewhat analogous to the one in atmospheric science. In the nebular theory, accretion refers to the collision and sticking of cooled microscopic dust and ice particles electrostatically, in protoplanetary disks and gas giant protoplanet systems, eventually leading to planetesimals which gravitationally accrete more small particles and other planetesimals.
Previously the same project was going to be named “Porpoise”. That was a small hint at Dolphin, since a porpoise is a different kind of dolphin. However, that name didn’t really sound right. It looks a lot like “Purpose” and just didn’t work out.

Screenshots!
First is the current default view. The reddish row is what you see when you hover over any part of a row. This is obviously very different from the default icon view mode that you’re used to. There are a couple of reasons for that, which you will find below in the “View plugins” section.
This is where things get – technically – very complicated. What you see below is the same view as above, only grouped by MIME type. Technically each individual group is a QSortFilterProxyModel, and that is what made it very complicated. To give you a QML idea, this is a ListView inside a ListView. For more details you’d have to look at the code. Anyway, because every group is a model on its own, it adds the quite big benefit that you can – if you want – create completely different layouts per group view. So for instance one group can be a detailed list view like you see in the screenshots. Other groups can very well be an icon view or something completely different. You have complete freedom here. If it’s possible in QML, then a group can make use of it. Or any view for that matter.
As I just said, each group can have its own layout. But you can also sort each individual group the way you want. I actually implemented this feature because I missed it in Dolphin. What you see below is that one group (the one with 6 items) is not sorted, while the group below it, with 10 items, is sorted in ascending order by file size. But you can click any column name and sort by name or time as well. That is another advantage of using QSortFilterProxyModel: it comes with sorting capabilities for nearly all possible data types that you could use. One note, though, about natural name sorting. Qt 5.2 includes QCollator, which allows for fairly simple natural sorting. That is sadly not taken into account in QSortFilterProxyModel, so I have to add it in a subclass. Right now it’s not in yet.
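Such a subclass could look roughly like this (a hedged sketch, not Accretion’s actual code; QCollator and the numeric mode are real Qt 5.2 API, the class name is made up):

```cpp
#include <QCollator>
#include <QSortFilterProxyModel>

// Proxy model that adds natural name sorting ("file9" before "file10")
// on top of QSortFilterProxyModel, which does not use QCollator itself.
class NaturalSortProxyModel : public QSortFilterProxyModel
{
public:
    explicit NaturalSortProxyModel(QObject *parent = nullptr)
        : QSortFilterProxyModel(parent)
    {
        m_collator.setNumericMode(true);
        m_collator.setCaseSensitivity(Qt::CaseInsensitive);
    }

protected:
    bool lessThan(const QModelIndex &left, const QModelIndex &right) const override
    {
        const QVariant l = left.data(sortRole());
        const QVariant r = right.data(sortRole());
        if (l.type() == QVariant::String && r.type() == QVariant::String)
            return m_collator.compare(l.toString(), r.toString()) < 0;
        // Non-string data (sizes, dates) keeps the default comparison.
        return QSortFilterProxyModel::lessThan(left, right);
    }

private:
    QCollator m_collator;
};
```

Only lessThan() needs overriding; the filtering and per-group behaviour of the base class is untouched.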
That’s it for the screenshots.

Model/View abstraction
I try to follow the above principle for every single thing I make. If it doesn’t have a direct need in a class/component, it shouldn’t be there.

View plugins
Dolphin is actually what inspired me to make this. The way I see it, QML is the (long-term) future for the Linux desktop, or certainly for KDE, when it comes to graphical user interfaces. Dolphin is currently too big to rewrite or adjust to make a QML frontend. The fact that Dolphin has its own model/view implementation also makes it near impossible to port Dolphin to QML at all.
Before anyone starts getting any ideas about this as a Dolphin replacement: that won’t happen, or at least not anytime soon. There is a ton of stuff to add before it even becomes usable.

Future development
I started this project about a year ago. Not the GUI side, but the C++ side. It’s far from done and needs a lot of time to get to where I consider it “working just fine”. Right now it even lacks basic functionality like:
- Right mouse button
- Settings (even though the icon is there)
- Click to open files. Folders work.
- tons more
Besides the obvious, I also want to implement some more exotic features that should make it very easy for others to develop plugins for it. Perhaps even to build a community around it.
One of those more exotic features is this: views themselves are plugins, but I also want to have “view entry” plugins. By that I mean plugins that can modify how an entry looks based on its data. This should allow for plugins like svn and git. But it should also be flexible enough to – for instance – turn Accretion into an image viewer for image files with the right plugins installed. How that should be done in a technically abstract way, I don’t know.

Intended platforms
Right now, Linux with KDE is the intended target. In the longer run it should work on Mac and Windows as well. Don’t hold me to it, though.

Source
If you want to try this out for yourself, you need to meet a few requirements.
- Qt 5.2
Once you have those 3, you should get the Accretion source from here. I am not going to support or help you get this up and running. Much of the above changes on a daily basis anyway.
That’s it for my lengthy post. I hope you like this project; I certainly do!
Next blog: KDirchainRebuild.
Last week I attended the MeetingC++ conference with several colleagues. KDAB was a gold sponsor of the conference in Düsseldorf, and I delivered a talk about ‘Modern CMake‘ with Qt and Boost. My slides are available here and the sequence of them is reflected in this post.
As there will be no video recordings posted of the talk, the next best thing is turning my presentation notes into a blog post.
I’ve worked on CMake for a while, and over the last few years and releases I have completely transformed how it works with regard to dependencies. The features I’m showing here have mostly arrived in the last two CMake releases, but some will arrive only in the next release or a future one.
My contribution to Boost so far has only been related to some minor cleanups and dead code removal. I’ve also been working together with Daniel Pfeifer on porting the Boost libraries themselves to CMake. This is an ongoing initiative in Boost, as part of a general modernization of the tools used to create Boost.
I met several people at this conference last year who didn’t see any need for buildsystem tools at all beyond a hand-written Makefile. I want to mention some of the reasons that people write tools like CMake.
One reason buildsystem tools exist is for finding dependencies. You might have several locations where dependency headers and libraries need to be found. Even if you can hardcode all of the locations into your own Makefile, the result won’t be distributable. It also won’t be portable, because Makefile buildsystems are not common on Windows, for example. The dependencies you use might also have specific needs for required compiler flags, and the flags to use might vary with the compiler, and even the compiler version. This is the case for Qt, for example, with its requirement to use position independent code.

Generic CMake features
CMake is a generic solution for all of those problems. It is a cross-platform system, with powerful APIs for finding dependencies of various or specific versions, and with many abstractions for platforms, compilers, buildsystems and dependencies.
The modern way to find the Qt5Widgets library with CMake looks like this:
find_package(Qt5Widgets 5.2 REQUIRED)
The find_package command has a lot of knowledge built-in for where to look to find Qt. I tell CMake that I want to create an executable called myapp, and that myapp requires the Qt5::Widgets library. That’s all there is to it for finding and using Qt libraries. Much of the rest of this article is about implementation details of how the above works in the CMake code and how the same principles may be applied to any library usable with CMake. My previous article contains more information for new users of CMake with Qt.
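Putting that together, the whole buildsystem for such an application can be as short as this (a sketch; the project and source file names are illustrative):

```cmake
cmake_minimum_required(VERSION 2.8.11)
project(myapp)

# Locate Qt5Widgets (and transitively Qt5Gui/Qt5Core), at least version 5.2.
find_package(Qt5Widgets 5.2 REQUIRED)

add_executable(myapp main.cpp)

# Linking to the IMPORTED Qt5::Widgets target pulls in the include
# directories, compile definitions and -fPIC requirement automatically.
target_link_libraries(myapp Qt5::Widgets)
```
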
Successful linking to the libQt5Widgets.so binary requires first compiling myapp with the required compilation flags.
Successful compilation requires specifying the correct include directories for finding the Qt headers. Qt headers are installed into an include/ directory, with the actual files divided into subdirectories corresponding to the Qt module which provides them. Users of Qt (and dependencies of myapp) may use include directives such as <QtModule/QClass>, or simply <QClass>, which means that both the include/ directory and the include/QtModule directory must be on the include path.
Successful compilation also requires specifying the correct definitions on the command line. Qt expects that compilations which use the Qt5Gui library use the -DQT_GUI_LIB define, and the other Qt libraries have similar expectations. Not adding these definitions in a consistent and minimal way can lead to problems such as the QTestlib problem I described in my Qt Developer Days talk last year. To be brief, the headers of the Qt unit test library behave differently depending on whether QT_WIDGETS_LIB, QT_GUI_LIB or QT_CORE_LIB is defined, resulting in tests using either a QApplication, QGuiApplication, or QCoreApplication respectively. Of course, if defined incorrectly, this can mean that tests which should only link to Qt5Core can be created to use a QApplication, and therefore require linking to the Qt5Widgets library. The buildsystem is responsible for getting these things correct.
One of the nice (and recent) features of CMake (in the master branch, to become CMake 3.0.0) is that it gives diagnostics if I try to use a dependency without first finding it. CMake now recognises the pattern of a double colon ‘::’ in the name of a dependency as denoting an IMPORTED target, which encodes a lot of information about how to use it.
All other dependencies can work the same way, if the project providing the dependency also provides CMake files for depender-use. These features are not specific to Qt. Such IMPORTED targets are provided for Qt 4 and Qt 5 already, and will be provided by Boost in the future. CMake 2.8.12 also ships IMPORTED targets for Gtk2, contributed by Daniele E. Domenichelli of KDE fame.
CMake is aware that using the Qt5::Widgets library involves a compilation step and a linking step. It knows that because Qt5::Widgets is a target defined in files shipped by Qt in the lib/cmake directory. These files tell CMake that Qt5::Widgets is a SHARED library, and that it has been IMPORTED for use from upstream. The files tell CMake where to find the library binary. Anything using Qt5::Widgets will link to that.
The files also tell CMake where the header files it requires are defined. Anything using Qt5::Widgets will automatically use those include directories to find the headers (since CMake 2.8.11 and Qt 5.1).
The code to do that for the Qt5Widgets library looks something like this:
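In spirit, the Qt-shipped CMake files do something like the following (a simplified sketch, not Qt’s verbatim code; the /opt/qt5 install prefix is illustrative):

```cmake
# Simplified sketch of what the Qt5Widgets CMake files effectively do.
add_library(Qt5::Widgets SHARED IMPORTED)
set_target_properties(Qt5::Widgets PROPERTIES
  IMPORTED_LOCATION "/opt/qt5/lib/libQt5Widgets.so"
  INTERFACE_INCLUDE_DIRECTORIES "/opt/qt5/include/QtWidgets"
  INTERFACE_COMPILE_DEFINITIONS "QT_WIDGETS_LIB"
  INTERFACE_LINK_LIBRARIES "Qt5::Gui;Qt5::Core"
)
```
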
and for the Qt5Core library looks something like this:
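Again in sketch form (simplified, illustrative paths), the Qt5Core target additionally carries the top-level include directory and the position-independent-code requirement:

```cmake
# Simplified sketch of what the Qt5Core CMake files effectively do.
add_library(Qt5::Core SHARED IMPORTED)
set_target_properties(Qt5::Core PROPERTIES
  IMPORTED_LOCATION "/opt/qt5/lib/libQt5Core.so"
  INTERFACE_INCLUDE_DIRECTORIES "/opt/qt5/include;/opt/qt5/include/QtCore"
  INTERFACE_COMPILE_DEFINITIONS "QT_CORE_LIB"
  INTERFACE_POSITION_INDEPENDENT_CODE "ON"
)
```
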
The Qt5::Core target specifies the ‘top level include’, and the Qt5::Widgets target depends transitively on the Qt5::Core target. That means that this essential information does not have to be repeated, but will be automatically consumed by CMake through the total dependency tree.
The INTERFACE_INCLUDE_DIRECTORIES property is a special target property built into CMake. The INTERFACE_ prefix is a convention used to specify information which is consumed by users of the target. In this case, the Qt5::Core target tells users of it the specific INCLUDE_DIRECTORIES required for successful compilation.
The IMPORTED targets provided by Qt also tell CMake what defines should be used when compiling. Anything using Qt5::Widgets will automatically use those command line defines when compiling (since CMake 2.8.11 and Qt 5.1).
There is an IMPORTED target for each of the libraries in Qt. Some of them, like Qt5::Core, take advantage of conditions when defining their interface. The information encoded there says that if the user is doing a Debug build, QT_DEBUG should be defined on the command line when compiling (since CMake 2.8.11). Similar logic defines QT_NO_DEBUG in the opposite case.
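In CMake code, such a condition is expressed with a generator expression; a sketch of the relevant property:

```cmake
# $<CONFIG:Debug> evaluates to 1 for Debug builds and 0 otherwise, so each
# define is only emitted in the matching configuration (sketch, not Qt's
# exact code).
set_property(TARGET Qt5::Core APPEND PROPERTY INTERFACE_COMPILE_DEFINITIONS
  $<$<CONFIG:Debug>:QT_DEBUG>
  $<$<NOT:$<CONFIG:Debug>>:QT_NO_DEBUG>
)
```
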
This feature is not limited to just includes and defines, but can apply to many other ‘usage requirements’. Some individual compiler features are also abstracted by CMake. For example, Qt requires that any user of Qt enables position-independent-code. It reports an error if the relevant flag is not used.
That usage requirement is built into the Qt5::Core target. It would be possible to populate the INTERFACE_COMPILE_OPTIONS of the target to attempt to encode the required compiler flag for each compiler. However, the required compiler flags vary a lot between different compilers and even vary depending on whether an executable or a library is being created. CMake provides a feature specifically for specifying the ‘position independent code’ usage requirement. CMake ensures that any user of Qt5::Core will automatically use the -fPIC or equivalent flag for compilers that need it (MSVC does not, for example), since CMake 2.8.11.

Qt-aware CMake features
All of the above are generic features of CMake which Qt interfaces with, just as any other library can. However, CMake also has some awareness of Qt built-in. All of these special features are available when using both Qt 4 and Qt 5.
Anyone who has used Qt for more than an afternoon will know that the moc tool is needed for code generation when using Qt. Many features of Qt are built on that code generation, and the user of Qt is required to run moc. There are also code generators for user interface description files, and for virtual resource description files.
CMake is aware of these file types and code generators, and can enable special handling of them. If you enable CMAKE_AUTOMOC, CMake will scan compiled files for the Q_OBJECT macro and automatically run the moc tool as needed (since CMake 2.8.6).
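Enabling the automatic generators is a one-liner each (a sketch; the AUTORCC/AUTOUIC variables require the CMake master branch at the time of writing, and the file names are illustrative):

```cmake
# Scan sources for Q_OBJECT and run moc automatically (CMake >= 2.8.6).
set(CMAKE_AUTOMOC ON)
# Run rcc on listed .qrc files and uic for 'ui_' includes automatically.
set(CMAKE_AUTORCC ON)
set(CMAKE_AUTOUIC ON)

# Resource and ui files can simply be listed among the sources.
add_executable(myapp main.cpp mainwindow.ui resources.qrc)
```
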
In the CMake master branch I have extended this feature to cover the rcc and uic tools. If you set CMAKE_AUTORCC to on, you can list Qt resource files in the sources of a target, and CMake will automatically run the rcc generator tool when needed. If you enable CMAKE_AUTOUIC, CMake will scan source files for ‘ui includes’ and automatically run the uic tool to generate them as needed.

Interface library targets
CMake master branch now supports a new type of library called an INTERFACE library. This type of library is designed to provide only INTERFACE_ properties – there is no binary to build or to link to. A consequence of that design intention is that INTERFACE libraries are suitable for header-only libraries, such as those typically provided by Boost, or Eigen, which already uses CMake.
When I showed similar code for Qt5::Widgets before, that was a shared library, so the LOCATION of the library file needed to be encoded. This INTERFACE library does not relate to any binary file, but it only describes an interface which CMake consumes when the library is used. That interface can refer to include directories, compile definitions, or any other compile-related usage requirement.
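A header-only library can therefore be described with no library file at all; a sketch for a hypothetical header-only library (the names are made up):

```cmake
# An INTERFACE library has no binary; it only carries usage requirements
# (CMake master branch at the time of writing).
add_library(headeronly INTERFACE)
target_include_directories(headeronly INTERFACE
  "${CMAKE_CURRENT_SOURCE_DIR}/include")
target_compile_definitions(headeronly INTERFACE USING_HEADERONLY)

# Consumers link as usual and inherit the whole interface:
add_executable(consumer main.cpp)
target_link_libraries(consumer headeronly)
```
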
All of these features are ‘transitive’, which means that the information about usage requirements can be carried through an arbitrary dependency graph. I implemented these features with the dependency graph of Boost as a specifically supported use case which must be performant. I can confirm that I had http://xkcd.com/276/ in mind when writing the commit message.
There are other features which are specifically inspired by Boost use cases. For example, at KDAB we have an embedded domain-specific library, SQLate, for composing SQL queries in compiled, templated C++ code. It works similarly to other such libraries, with the difference that SQLate creates results in the form of a QSqlResult instead of a query string. It uses boost::mpl, and it requires a large number to be defined as BOOST_MPL_LIMIT_VECTOR_SIZE.
Now imagine that we have a second library which also uses the MPL, and which also has a requirement for BOOST_MPL_LIMIT_VECTOR_SIZE, but a different limit from the first library.
If I use both libraries together, I require the larger of the numbers to be used during actual compilations. CMake has a way to specify that the maximum number should be used (in CMake master), and it calculates what value needs to be passed on the command line. The code for enabling CMake to calculate the correct number involves populating the COMPATIBLE_INTERFACE_NUMBER_MAX property.
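A sketch of how this could be declared (the library names and values are illustrative):

```cmake
# Each library states the MPL vector size its headers need.
set_property(TARGET sqlate PROPERTY
  INTERFACE_BOOST_MPL_LIMIT_VECTOR_SIZE 30)
set_property(TARGET otherlib PROPERTY
  INTERFACE_BOOST_MPL_LIMIT_VECTOR_SIZE 40)

# Declaring the property 'max-compatible' makes CMake compute the maximum
# (here 40) over the whole dependency graph of any consumer of both.
set_property(TARGET sqlate APPEND PROPERTY
  COMPATIBLE_INTERFACE_NUMBER_MAX BOOST_MPL_LIMIT_VECTOR_SIZE)
```
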
CMake also has a way to specify that the minimum value of a number should be calculated. For example, Qt has a way to specify the level of use of deprecated APIs. If one dependency requires interface(header) usage of Qt 5.1 deprecated features, and another one requires interface usage of Qt 5.2 deprecated features then the user needs to compile specifying the Qt 5.1 version. CMake calculates that internally and automatically. The code for enabling CMake to calculate the correct number involves populating the COMPATIBLE_INTERFACE_NUMBER_MIN property.
In the above cases, we specified that a compatible maximum or minimum number must be calculated from the interface of the used targets. The same principle is used to issue a diagnostic on an attempt to link both Qt 4 and Qt 5 into the same executable. In this case, the COMPATIBLE_INTERFACE_STRING property is populated with QT_MAJOR_VERSION. That causes INTERFACE_QT_MAJOR_VERSION to be evaluated from the Qt5::Core and Qt4::QtCore IMPORTED targets by transitively following the dependencies. The string values must then contain the same value to be compatible, or an error diagnostic is issued.

Compile feature specification
People often ask whether CMake ‘supports’ C++11. That is the wrong question to ask. What people are thinking is ‘Can it automatically add the -std=c++11 flag for me?’
Hmm, or should I use -std=c++0x for this compiler?
Or wait, is this a C++14 feature? Maybe I need -std=c++1y?
Oh, wait I’m using MSVC, no flag is needed at all.
The right questions to ask are ‘Does the compiler have the feature I need?’ and ‘Is any flag required to enable that feature?’. The version of the C++ standard that specifies the feature is then not relevant to the user and can be encoded in the implementation of CMake. Aiming for ‘C++11 support in CMake’ would not be future-proof or even past-proof.
Because the standard version which introduced the feature is irrelevant, the user does not need to care whether -std=c++11 or -std=c++98 is needed. By not requiring the user to enable the flags manually, a cross-platform trap can be avoided.
Instead, a future CMake version will allow the user to specify the features needed from the compiler. CMake will know which compiler has what feature, so it can issue a good diagnostic if, for example, I specify that I need member templates and someone then tries to use MSVC6 to compile my code. CMake will tell them that it can’t work because MSVC6 does not support that feature. This shows how the CMake API is past-proof and future-proof. Thinking in terms of features, not standards, is the right way to go, as it is future-proof for C++14 features like generic lambdas and is also extensible on the axis of compiler extensions.
Because CMake has all the information about which features are supported by which compiler, it can also generate a header file for optional use of features when building your code, and when compiling against its headers.
For example, it will generate a define for each of the features, stating whether the feature is supported by the user’s compiler. This is essentially the same kind of thing that the Boost.Config library and qcompilerdetection.h are doing.
We can also define aliases in such a header. Before MSVC supported the final keyword, it supported the sealed keyword in the same position as final. Clang even treats sealed as an alias for final in its MSVC mode. Because we’re generating a header, we can also create portable macros wrapping static_assert etc.

Conclusion
CMake is growing increasingly useful to users of Qt as a buildsystem tool. It is adopting features such as these ‘usage requirements’, which are already familiar to users of Boost.Jam and qmake. The CMake files distributed with Qt and maintained by KDAB are pushing and pioneering the way CMake is used in modern, portable, real-world projects of significant complexity.
Today I added a new version number to our bugtracker: 4.90.1. This is the version number currently used by KWin on Qt 5 and this means that I consider KWin/5 to have reached a quality level where I think it makes sense to start reporting bugs.
On my system KWin/5 has become the window manager I use for managing the windows needed for developing said window manager. The stability is already really good and today I fixed one last annoying crash when restarting KWin. So I’m already quite happy. Also the functionality looks good. Some of the problems I had seen and been annoyed by are fixed and this means that my normal workflow is working. But KWin supports more than the “Martin workflow” and this means you have to test it and report bugs! Grab the latest daily build packages for your distro and enjoy the power of a Qt 5 based window manager.
Hi guys. Just to keep it clear: since the release of the new Plasma NM applet (version 0.9.3), the networkmanagement repository is mostly deprecated. Only the NM/0.9 branch has any use (for now). That branch holds the old stable Plasma NM 0.9.0.x version – the one I have been releasing from time to time since October 2011.
Unless you have a patch to fix one of the bugs in the old Plasma NM 0.9.0.x, you should use the plasma-nm repository instead of networkmanagement. There is even a frameworks branch in plasma-nm, so it already works with Frameworks 5, which networkmanagement does not.
The Krita team is working really hard on the next release -- Krita 2.8, expected to be released end of December, early January. And it's shaping up to be a memorable release! There is a host of interesting features, and many, many bug fixes as well. Let's take a look at what's coming!

Tablet Support
Krita has relied on Qt's graphics tablet support since Krita 2.0. We consciously dropped our own X11-level code in favour of the cross-platform API that Qt offered. And apart from the lack of support for non-Wacom tablets, this was mostly enough on X11. On Windows the story was different, and we were confronted with problems: offsets, bad performance, and no support for tablets with built-in digitizers like the Lenovo Helix.
So, with leaden shoes, we decided to dive in, and do our own tablet support. This was mostly done by Dmitry Kazakov during a week-long visit to Deventer, sponsored by the Krita Foundation. We now have our own code on X11 and Windows. Drawing is much, much smoother because we can process much more information, and issues with offsets are gone.

OpenGL and Shaders
Krita was one of the first painting applications to support OpenGL to render the image. And while OpenGL gave us awesome performance when rotating, panning or zooming, rendering quality was lacking a bit.
That's because by default, OpenGL scales using some fast, but inaccurate algorithms. Basically, the user had the choice between grainy and blurry rendering.
Again, as part of his sponsored work by the Krita Foundation, Dmitry took the lead and implemented a high-quality scaling algorithm on top of the modern, shader-based architecture Boudewijn had originally implemented.
The result? Even at small zoom levels, the high-quality scaling option gives beautiful and fast results!
(Image by Timothee Giet -- view separately to see it properly)
We hit a snag here, though: on Windows, Krita didn't render anymore on Nvidia graphics cards with the latest drivers. But our awesome community of users did a whip round and pretty soon the Krita Foundation was presented with enough money to get a new Nvidia card -- and within half an hour of getting the card installed, the issue was fixed!

G'Mic
Krita 2.8 will have initial support for the G'Mic set of very nearly magic image processing algorithms, out of the box. Developed by Lukáš Tvrdý, this new plugin makes it really easy to, for instance, color line-art, a feature in huge demand by artists.

Windows
We have been making Krita builds for Windows for about a year now. Since the original OpenGL refactoring in May, Krita has supported OpenGL on Windows as well. While still not perfect, our builds are improving enough that Krita 2.8 will be the first stable Krita release for Windows.

Clones Array and Wraparound Drawing Mode
We've featured these before. The Clones Array tool is especially handy for artists working on tile-based games, while wraparound drawing mode is awesome for making textures that need to be tiled seamlessly.

And more...
Sascha Suelzer has improved the resource tagging system. Camilla Boemann has been improving the PSD import and export filter and the crop tool. Michael Martini then started polishing the crop tool even further. Dmitry rewrote the code that mirrors a layer (not to be confused with the canvas mirroring mode...). There are a huge number of bug fixes; the default brush presets have been carefully tuned and have really nice new icons; Sahil Nagpal fixed a bunch of filters for his Google Summer of Code project; masks are now grayscale-based, making it easier to paint on them -- and much, much more. I am already dreading writing the release announcement!

Give it a spin
While much of the work on Krita is done by enthusiastic volunteers, the Krita Foundation is currently sponsoring Dmitry Kazakov to work full-time on Krita. Please help the Krita Foundation by subscribing to the development fund!

Krita Development Funding
Bronze: €5,00 EUR - monthly
Silver: €10,00 EUR - monthly
Gold: €25,00 EUR - monthly
Platinum: €100,00 EUR - monthly
Diamond: €250,00 EUR - monthly
And if you are a professional Krita user, either individually or in a VFX studio setting, remember that KO GmbH offers commercial support for Krita, too. Check out the Krita Studio website for more information!
The KDE PIM sprint in Brno, Czech Republic, starts this Friday, but some KDE developers just could not wait and decided to come to Brno already on Monday to work with the Red Hat KDE team. Some of the stuff we are hacking on right now is PIM related, but we also want to use these few days to work on other projects that we are involved in but that are not strictly related to KDE PIM.
So I’m sitting right now in the office with Àlex Fiestas, David Edmundson, Vishesh Handa, Martin Klapetek and my colleagues Jan Grulich and Lukáš Tinkl. I’m waiting for Àlex to finish polishing his port of BlueDevil to BlueZ5, so that we can start hacking on KScreen – there are far too many bugs that need our attention, and we’ve been neglecting KScreen quite a lot in the past few months. We want to fix the annoying crash in our KDED module, solve a regression that my bold attempt to fix another crash in KDED caused, and discuss the future direction of KScreen – Àlex and I have different opinions on where we should go next with KScreen, so this is a great opportunity to find a common path.
Vishesh has been relentlessly working on improving the semantic technologies in KDE, and from what I’ve seen, it’s something to really look forward to.
Yesterday, Vishesh and I discussed the possibility of using Akonadi for handling tags of PIM data (emails, contacts, events, …), and I implemented the feature in Akonadi and the Akonadi client libraries – only as a proof of concept though; I have no intention of shipping it at this point, as much more work and discussion is needed. I also made further progress with implementing the IDLE extension to the Akonadi protocol. It allows the Akonadi server to send notifications about changes to clients using the Akonadi protocol instead of D-Bus (performance++).
David Edmundson and Martin Klapetek have been working on creating a Plasma theme for SDDM (a new display manager that, for example, Fedora intends to ship instead of KDM), and today they’ve been improving KPeople, the meta-contact library used by KDE Telepathy, which they will also integrate with KDE PIM.
My colleagues Lukáš Tinkl and Jan Grulich are working on plasma-nm, the new Plasma applet for network management in KDE.
More people will arrive in Brno tomorrow, and the rest of the KDE PIM sprint attendees will arrive on Friday, when the real sprint begins. Stay tuned for more news (not just) from the PIM world.
I deployed a new version of KDE Pastebin today – which should be available at paste.kde.org. Here are a few things to note:
- Even though paste.kde.org will redirect you to pastebin.kde.org, the old API is still available at paste.kde.org
- The new API will be available at pastebin.kde.org. Please refer to the API specs here: http://sayakb.github.io/sticky-notes/pages/api/
- Please update your client to use the new API at pastebin.kde.org. Users who switch to the latest version of your client app will end up using pastebin.kde.org behind the scenes; users who haven’t updated to the latest client app yet will continue using the old API.
- The old API will be deprecated on Jan 31 2014.
- After Jan 31 2014, paste.kde.org will also have the new API endpoints. However, pastebin.kde.org will also exist indefinitely as an alias to paste.kde.org – so you can continue using that URL.
It has been quite a while since my last post. My time available for KDE development has dried up since then considerably, limiting the scope of my work to basic maintenance and user support. Time for actual programming, let alone writing about it, was rather scarce.
Luckily this state has largely come to an end. And on top of that I was invited to attend this year’s KDE Edu sprint in A Coruña. The sprint was, as usual, highly productive and motivating. Many thanks go to the KDE e.V. and GPUL for sponsoring the event, to KDE España for covering my traveling expenses and especially to José Millán Soto for organizing the event.
During the sprint, many discussions were held and plans were made, and there was, of course, a decent amount of time for some serious hacking. For the most part I worked on KTouch, and the time was well spent. I was able to complete my main goal for the next version of KTouch, 2.2, to be released with KDE SC 4.12.
So let’s see what 4.12 will bring for KTouch.

Custom Lesson Support
Probably the most-missed feature from the old 1.x days returns to KTouch, in a much improved and extended fashion: an easy way to train on arbitrary text.
Technically speaking, this has been possible since 2.0, because users could create their own courses and train on them. But that process is cumbersome and too complex for the task at hand.1 In the old days one could just open a text file and get going. A feature like that is clearly missing.
This is solved in the next version by introducing a new special course, the Custom Lessons course, always available right next to the normal built-in courses.
This new special course mostly acts like any other: training and statistics gathering work as usual. Clicking on New Custom Lesson brings up a stripped-down version of the normal lesson editor.
This has the nice side effect that the usual quality checks are also performed for custom lessons. The characters of the lesson text are matched against the current keyboard layout, so the user has a realistic chance to spot special or foreign characters he can’t type before hitting them during training.2
During my work on the custom lesson editor I also took some time to improve the actual lesson text editing experience. The editor now has a toolbar for loading text files and for reformatting lesson texts with long lines so they use the recommended line length.
The custom lessons are stored per user profile and keyboard layout, so one can train on different lessons depending on the specific situation.
One nice side effect of the re-introduction of this feature is that KTouch is now finally at least basically usable under every keyboard layout, even those it offers no courses for.

New Courses
Thanks to some generous contributions, KTouch will ship a few new courses:
- A new Farsi course including the corresponding keyboard layout data. Contributed by Seyed Ali Akbar Najafian.
- For the Spanish keyboard layout, a new course in Basque. Contributed by Alexander Gabilondo.
- And a course for the rather exotic Workman layout. Contributed by Peter Feigl.
Many thanks go to the respective authors!
To do so, one has to (1) discover the editor, (2) create a new course, (3) set the keyboard layout for this course to his own and (4) finally fill it with one or more lessons. Step (3) will scare off any not-so-tech-savvy user, since it requires technical knowledge, and failing to do it will render the course inaccessible in the trainer. Also, courses are constructed around the assumption that lessons build on each other. This doesn’t hold true for a collection of random training texts. ↩
This feature depends on the presence of the keyboard layout data also used by the keyboard visualization. If it is missing, the checks won’t be performed and all characters will be considered valid. ↩
Recently, there was a dot story about Frameworks 5: Started in spring of 2011, the KDE software stack is undergoing a heavy split. The idea is to modularize the KDE libraries into lots of rather small units. Each unit has well-defined dependencies, depending on whether it’s in the tier 1, tier 2, or tier 3 layer, and depending on whether it provides plain functionality, integration, or a solution. If you haven’t yet, please read the article on the dot for a better understanding.
With this modularization, the question arises about what will happen to the KTextEditor interfaces and its implementation Kate Part – and of course all the applications using Kate Part through the KTextEditor interfaces, namely Kate, KWrite, KDevelop, Kile, RKWard, and all the others… The purpose of this blog is to give an answer to these questions, and to start a discussion to get feedback.

A Bit of History (Funny Read Here)
Since not everyone is familiar with “Kate & friends” and its sources, let’s have a quick look at its software architecture:
From this figure, we can see that the software stack has three layers: the application layer, the interfaces layer and the backend layer. KTextEditor provides an interface to all functions of an advanced editor component: documents, views, etc. (see also the API documentation). However, KTextEditor itself does not implement the functionality; it is just a thin layer that is guaranteed to stay compatible for a long time (in the KDE 4 line, KTextEditor has been compatible since 2007, so as of now 7 years). This compatibility, or better yet interface continuity, is required, since applications in the application layer use the KTextEditor interfaces to embed a text editor component.

The implementation of the text editor itself is completely hidden in Kate Part. Think of a backend library implementing all the KTextEditor interfaces. When an application queries KDE for a KTextEditor component, KDE loads a Kate Part behind the scenes and returns a pointer to the respective KTextEditor classes. This has the advantage that Kate Part itself can be changed without breaking any applications, as long as the interfaces stay the same.

Now, with KDE Frameworks 5, this principle will not change. However, the interfaces will undergo a huge cleanup, as will be explained now. As a consequence, all nodes that point to or from the KTextEditor node, namely Kate Part on the backend layer as well as the applications, will need to adapt to these interfaces.

Milestone 1: KTextEditor and Kate Part on 5
KTextEditor will be a separate unit in the frameworks split. Therefore, the KTextEditor interfaces will no longer come bundled with one monolithic ‘kdelibs’ as was the case for the last 10 years. Instead, the KTextEditor interfaces are developed and provided in a separate git repository. This is already the case now: the KTextEditor interfaces exist as a copy in Kate’s git repository, and relevant changes were merged into kdelibs/interfaces/ktexteditor in the KDE 4.x line. For “KTextEditor on 5,” the first milestone will be to get KTextEditor to compile with the libraries and tools from the frameworks 5 branch. Along with this port, the KTextEditor interfaces have a lot of places that are annotated with “KDE 5 todos.” That is, the KTextEditor interfaces will undergo a huge cleanup, providing an even better API for developers than before.
Currently, the KTextEditor interfaces and therewith also their implementation Kate Part use the KParts component model. The KParts model makes it easy to embed Kate Part in other applications along with Kate Part’s actions and menus. Further, Kate Part internally uses KIO to load and save files, to support network-transparent text editing. KParts itself and KIO are both Tier 3 solutions. This implies that KTextEditor along with its implementation Kate Part is a Tier 3 solution.
In other words, a straight port and cleanup of KTextEditor and Kate Part will depend on a lot of high-level frameworks. This solution will provide all the features the KTextEditor interfaces provide right now in the KDE SC 4.x line.
Currently, we plan one major change in KTextEditor on 5: we will remove KTextEditor plugins. Over the last 10 years, we got close to no contributions to the KTextEditor plugins. Existing KTextEditor plugins partly clash with code in Kate Part (for instance the Auto Brackets feature with the Autobrace plugin), and merging a plugin’s XML GUI into the KTextEditor::Views always requires some hacks to avoid flickering and make it work correctly. Besides, if the KTextEditor plugins are removed, the Kate config dialog, for instance, only shows one “Plugins” item instead of two. This is much cleaner for the user. Existing functionality, like for instance the “Highlight Selected Text” plugin, will be included in Kate Part directly. The same holds true for the HTML export feature. This is a bold change. So if you want to discuss this, please write to our mailing list email@example.com.
The time frame for the KTextEditor port & cleanup is rather short: We want to provide rather stable KTextEditor interfaces so that other applications can rely on it. Therefore, we will probably create a frameworks branch in the Kate git repository in December (current proposal on kwrite-devel). Binary and source incompatible changes will be allowed until other applications like KDevelop or Kile are ported to Frameworks 5. Then, the KTextEditor interfaces will again stay binary compatible for years.

Milestone 2: KWrite and Kate on 5
KWrite is just a thin wrapper around the KTextEditor interfaces and therewith Kate Part. Therefore, KWrite will mostly support just the same functionality as it provides now. The same holds true for Kate. However, Kate itself provides quite a lot of advanced features, for instance multiple main windows (View > New Window), sessions, and a good plugin infrastructure. Of course, Kate itself will also undergo cleanups: i) cleanups due to changes in the KTextEditor interfaces, and ii) cleanups like moving the Projects plugin into Kate itself, making it more easily accessible to other plugins like the Search & Replace or Build plugins. We will also remove support for multiple main windows through “View > New Window.” This is due to the fact that many Kate plugin developers were not aware of this feature and therefore completely messed up their code by not separating the logic from the view, resulting in crashes or broken behavior when using multiple main windows. By removing the support for multiple main windows, we lose this feature; however, we get simpler and more maintainable code.
There are other small details that will change. For instance, as it looks right now, the Python pate host plugin in Kate on 5 will only support Python 3 (current discussion on kwrite-devel). Python developers, you are welcome to contribute here, as always!

Milestone 3: More Modularization in the KTextEditor Interfaces?
Milestone 1 & milestone 2 will happen rather sooner than later (fixed dates will follow once we’re sure we can satisfy them). Since the transition to Frameworks 5 allows us to change the KTextEditor interfaces, it is the right time to think about how we can improve the KTextEditor interfaces and their implementation Kate Part even further. For instance, on the mailing list, the idea was raised to make the KParts model optional. This could be achieved, for instance, by deriving KTextEditor::Document from QObject directly and creating a thin KParts wrapper, say KTextEditor::DocumentPart, that wraps KTextEditor::Document. This would be a major change, though, and possibly require a lot of changes in applications using the KTextEditor interfaces. As of now, it is unclear whether such a solution is feasible.
Another idea was raised at this year’s Akademy in Bilbao: Split Kate Part’s highlighting into a separate library. This way, other applications could use Kate Part’s highlighting system. Think of a command line tool to create highlighted html pages, or a syntax highlighter for QTextEdits. The highlighting engine right now is mostly internal to Kate Part, so such a split could also happen later, after the initial release of KTextEditor on 5.

Join Us!
The Kate text editor only exists thanks to all its contributors. Moving to frameworks, it is the perfect time to follow and contribute to the development of Kate. In fact, you can learn a lot (!) in contributing. In case you are interested, have ideas or want to discuss with us, please join our mailing list firstname.lastname@example.org.
I just announced KDevelop 4.6 Beta 1 on the KDevelop website. Go read the announcement and test the hell out of this release :) I’m pretty confident that it’s already a very solid release though!
Cheers, happy hacking!
With the release of Android 4.4, called KitKat, Google made some interesting changes to their ActiveSync implementation: the code is now set up to sync more than one calendar, and the first KitKat user has already confirmed the new feature.
In February I described in a blogpost why Android cannot sync multiple calendars via ActiveSync. The problem was that Google did not implement the necessary parts of the ActiveSync specification in Android.
However, that seems to have changed: if you look at the current ActiveSync implementation of Android 4.4 KitKat, the source code (tag 4.4rc1) does list support for multiple calendars – and also for multiple address books:

MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_USER_CALENDAR, Mailbox.TYPE_CALENDAR);
MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_USER_CONTACTS, Mailbox.TYPE_CONTACTS);
I had no chance yet to test that on my own, but there are reports that it is indeed working:
Today i flashed a Android 4.4 Rom on my smartphone. After adding the Exchange Profile all my Calendars are there [...]
I’ve uploaded a screenshot here:
Looks like Google actually listened to… erm, corporate users? At least to someone, though.
But: since I have no first-hand experience in this regard, I would like to ask all of my nine readers out there if anyone has stock KitKat running and could check this feature. Please test this and leave a report about your experiences in the comments. I will include it in the article.
By the way, the above-mentioned source code snippet also tells quite exactly which other ActiveSync functions are not yet supported in Android:

//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_TASKS, Mailbox.TYPE_TASKS);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_NOTES, Mailbox.TYPE_NONE);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_JOURNAL, Mailbox.TYPE_NONE);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_USER_TASKS, Mailbox.TYPE_TASKS);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_USER_JOURNAL, Mailbox.TYPE_NONE);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_USER_NOTES, Mailbox.TYPE_NONE);
//MAILBOX_TYPE_MAP.put(Eas.MAILBOX_TYPE_UNKNOWN, Mailbox.TYPE_NONE);
//MAILBOX_TYPE_MAP.put(MAILBOX_TYPE_RECIPIENT_INFORMATION_CACHE, Mailbox.TYPE_NONE);
I guess syncing tasks could come in handy in corporate environments. Combined with support for multiple task folders you could even design your own Kanban “board” that way.
Nevertheless I’d like to add that ActiveSync is no big deal for me anymore, because I am very happy with an – albeit third-party and not yet open source – CalDAV implementation, which can even sync multiple task folders.
Yes, openSUSE 13.1 isn't out yet, but people who've installed RC1 and upgraded via the repositories are very close... So let's call that 13.1_almost, for now. And 13.1_almost has a bit of an issue if you are a Skype user. When starting up Skype you might very well be greeted with a very loud and unpleasant sound coming out of the speakers.
Solve it

This is fortunately not hard to solve: start Skype from a command line with the following command:
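The command itself appears to have been lost from this post. Piecing it together from the menu-entry fix below, it was presumably the PulseAudio latency override (the variable name comes from the post itself; the plain `skype` binary name is an assumption):

```shell
# Work-around: raise PulseAudio's latency to 60 ms for this Skype session only
PULSE_LATENCY_MSEC=60 skype
```

The variable only affects the launched process, so the rest of your desktop audio is untouched.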
See this blog for some background information. I am hoping we can get an update in to fix this, but for now - if you suffer from it, use this as a work-around.
Make it permanent

You can change the menu item for Skype so you don't have to start it from the command line, as follows:
Left-click the menu button and choose "Edit Applications..."
Then, locate the Skype application and add "PULSE_LATENCY_MSEC=60" in front of the command.
Save, done. Easy-peasy, yes?
I'm going to be at SUSECon this week, ending in the awesomeness that will be the openSUSE Summit! If you're there, surprise-hug me if you can ;-)
Have a lot of fun!
(and think of the geekos)
I joined SUSE in June 2012, almost 18 months ago. I haven't written much about anything during this time – not because I didn't have the time, but because I haven't had enough energy. Over these past months, though, I have received several requests to write a little about what I do at SUSE, so here we go....
The openSUSE team is a good mix of long-term SUSE employees and fresh blood, youth and experience, openSUSE and other distro backgrounds, on-site and remote workers, people with management or commercial/customer support experience together with integrators and developers, people coming from R&D or product-focused companies together with people with a strong community profile.... a very diverse (1 Taiwanese, 2 Czechs, 1 Dutch, 1 Serbian, 4 Spaniards and 3 Germans) and talented group. We also have trainees in the team. Having students is something I like, because it helps any team to develop engagement skills.
The team's major focus is the openSUSE distribution. It is the element around which the whole project circles, the key point that sustains everything else in openSUSE. Obviously we put effort into other actions, but we try to ensure that everything we do is directly related to, and has its roots in, the distribution – in the software. Obviously we are not the only force in openSUSE, not even the most numerous. There are hundreds (literally) of people participating in this collective effort.
The team has a big impact, since we are dedicated full-time to working on the project, we have focused our activity on limited areas, and we are fairly well organized. But in terms of effort, the rest of the community has a much bigger weight than my team... fortunately :-).
Those familiar with KDE will understand what I mean if I mention Blue Systems' work in the project today.
From the community perspective, we have focused our action on two major areas:
* The openSUSE Conference.
* openSUSE news portal (marketing).
In 2012, as had happened before, SUSE took the lead in organizing the conference. This changed in 2013. A group of contributors led by Kostas and Stella, respected community members, organized it, opening the door for a new model within openSUSE.
SUSE's role in the organization changed. Now we support the organizers in different tasks instead of leading the organization. For me, this is a relevant success story that should serve as an example for many in the future. I feel very comfortable in this new role, because the efficiency of our contribution has increased significantly. Organizing an event FOR a community is different from supporting the community in organizing THEIR event, right?
In marketing my team makes a significant impact by keeping the News portal as a reference point of information about openSUSE. We focus most of our action around the openSUSE Releases. We also link the innovation brought by SUSE into openSUSE with our community. We help SUSE Teams in marketing their work when it makes sense.
These two actions leave us little time for supporting further initiatives in the news portal. We do it once in a while, though not on a regular basis. The situation in this regard is not much different from other communities I know. Keeping the main news portal up and healthy requires more people than are usually available. A more collective approach is what we all want. It is not an easy goal to achieve in any case.
So we basically have concentrated our effort in three main areas:
- What we call "the future". You will know more about it soon.
- The openSUSE Development and Release process, that will have openSUSE 13.1 as the main result, coming in a few days (November 19th).
- Community work, especially around the openSUSE Conference and the news portal.
I hope this overview provides some answers to those of you interested in what I am up to lately. You can follow our actions closely through our team blog, which has Jos Poortvliet as main editor and the whole team as authors.
This week I am participating together with other colleagues in SUSECon'13 and openSUSE Summit 2013. If you are in the Orlando area, FL, US, consider coming. You won't regret it.

Agustin Benito Bethencourt (Toscalix)
KDE e.V. and KDE Spain member
Spanish blog: http://abenitobethencourt.blogspot.com
LinkedIn profile: http://es.linkedin.com/in/toscalix
Tomorrow David Edmundson, Vishesh Handa and I will be taking a plane from Barcelona with final destination Brno, to meet for a pre-PIM-sprint hackathon with the Czech KDE hackers (Lukáš Tinkl, Jan Grulich, Dan Vrátil, Martin Bříza, Martin Klapetek). It is on occasions like this that one comes to appreciate that we can do our jobs from anywhere.
We decided to go this early (4 days before the actual sprint starts) because we all have things to work on, on a variety of topics, with somebody close to the Brno Red Hat office:
- Working on PowerDevil for Plasma2 (at least Lukáš and I)
- Moving KScreen forward, fixing bugs etc (Dan and I)
- KDE Startup (Martin K, David, Martin B and I)
- KDE Telepathy (David and Martin K)
- KPeople (Vishesh, Martin K, David)
- Metadata system (Vishesh and Dan)
- Login manager (David and Martin B)
- And more
If you haven’t noticed, all those things have nothing to do with PIM! So it is a perfect time to introduce a new concept in KDE: the pre-sprint hackathon!
Even though I hate being on the road, I can’t wait to arrive in Brno and start this week. I’m sure it is going to be a ton of fun, and a lot of new things will come out of it.
Apologies for the non-KDE post on PlanetKDE. I've recently read so much stuff about Ubuntu's "spying" that I feel it's worth clearing the air.

Ubuntu's File Search
This is Ubuntu's file search. It searches files.

So does this send data to the internet?
No, not at all. It searches your files.

So what's the fuss?
The fuss lies in something else, the Ubuntu Dash. There is a lot of confusion about this.
Ubuntu Lenses are a way of searching multiple sources. If we look at the list of available sources it includes web searches such as
Google Books, Reddit, Wikipedia, Youtube, Amazon as well as combining local sources such as applications, local files and menubars. The idea behind it seems to be to create a single unified search bar, abstracting sources from the user. You can search for a song, and not care if the results are local or remote. Pretty neat.
It's quite hard to combine results from the internet without using the internet, so your search ends up going online. Whilst the query is encrypted, the results coming back are not. This is no worse than a search query with Google or Yahoo or any other search engine, and arguably considerably better, as you are not later tracked around the web.
Adding Amazon searches by default
In all the search lenses, Amazon is added by default; this gives Canonical money, which is fed back into Ubuntu. This is akin to how Mozilla Firefox set the default search provider to Google.
Mozilla earns over $96 million per year for this. KDE has a similar partnership, enabling DuckDuckGo searches to be manually activated from KRunner, for far, far less.
It's not unheard of in open source communities to make money this way, and whilst as a user I don't think I would like these ads, I can't really hold it against them.
So how bad is it?
Canonical does not have your file contents; they don't even have a list of your files, nor do they track your key presses.
At best, there is a record of a search term linked to an IP address, which may or may not be part of a file name. It's not a lot of private data, and it's not linked to you as a named individual.
The claim by the EFF is not about the possibility of Canonical 'spying' on you. The claim is that a hacker sniffing your network traffic could infer what you are searching for from the images returned from Amazon.
Personally I consider this a very weak claim: if someone is sniffing your network traffic, you are more likely to give away personal information in other ways, such as through ordinary browsing. It's the EFF's job to err on the side of extreme caution and to provide information. It's up to us as the wider community to balance this with pragmatism and to keep things in proportion.
Edit: This potential issue has since been addressed for 13.10; all data coming back is also encrypted, addressing the EFF's main point. Thanks to Michael Hall for the updated information.
So why is it called spyware by some people?
There is a traditional gap between web and local applications, and people unaware of what the dash search does mistakenly take it for a simple file search. For a file search to use the web would clearly be wrong. The majority of the complaints and criticisms I have read do not come from Ubuntu users who have seen the Ubuntu dash. To any user of the Ubuntu search bar, it should be obvious that it includes internet results, given how visible those results are within moments of usage.
If we always pander to the notion of treating web and local data as two completely separate, distinct entities, desktop Linux will always be held behind web applications, which are able to employ much richer content. I don't want to get to a point where Firefox has to show a prompt explicitly stating that it will use a network connection.
Spies (with the exception of James Bond) are also secretive. The Ubuntu dash makes no effort to hide what it is doing. Whilst it may not be the world's greatest or most useful feature, it isn't something that spies on you.
To call it spyware is a blatant lie; to call it a privacy invasion is, I think, a massive exaggeration of a rather minor concern that misunderstands the goals of the dash.
A few weeks ago, during SUSE Hack Week 10 and the Berlin Qt Dev Days 2013, I set myself the goal of creating one place to collect all Qt-based libraries, and made some good progress. The idea of Inqlude, the Qt library archive, was born when a couple of KDE people came together in the Swiss mountains for some intensive hacking; we were thinking of something like CPAN for Qt back then. Since then there had been a little bit of progress here and there, but my goal for the Hack Week was to complete the data to cover all relevant Qt-based libraries out there.
The mission is accomplished so far. Thanks to the help of lots of people who contributed pointers, metadata, feedback, and help, we now have a pretty comprehensive list of Qt libraries. Some nuts and bolts required to put everything on the web site are still missing in the infrastructure, and I'm sure we'll discover some hidden gems of Qt libraries later, but what is there is useful and up to date. Where some pieces are not yet, contributions are more than welcome.
Many thanks as well to the people at the Qt Dev Days, who gave me the opportunity to present the project to the awesome audience of the Qt user and developer community.
The first key component of the project is the format for describing a Qt-based library. It's a quite straightforward JSON format. That makes it easy to handle programmatically in tools and other software, while still being quite friendly to the human eye and a text editor.
The schema describes the metadata of a library and its releases: name, description, release date and version, links to documentation and packages, etc. The data for Inqlude is collected centrally in a git repository using this schema, and the tools and the web site make use of it to provide nice and easy access for users.
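To give a flavor of the format, an entry might look roughly like the sketch below. The field names and values here are illustrative only; the authoritative schema lives in the Inqlude data repository.

```json
{
  "name": "libexample",
  "summary": "A hypothetical Qt-based example library",
  "version": "1.0.0",
  "release_date": "2013-11-01",
  "licenses": ["LGPLv2.1+"],
  "urls": {
    "homepage": "http://example.org",
    "download": "http://example.org/download"
  }
}
```

Because the structure is this plain, a patch adding or updating a library is usually just a small, reviewable JSON file.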
The second key component is the tooling around the format. The big advantage of having a structured format to describe the data is that it makes it easy to write tools that deal with the data. We have a command line client, which is currently mostly used to validate and process the data, for example to generate the web site, but it is also meant to help users with downloading and installing libraries. It's not meant to replace a native package manager, but to integrate with whatever your platform provides. This area needs some more work, though.
In the future it would be nice to have some more tools. I would like to see a graphical client for managing libraries, and integration with IDEs such as Qt Creator or KDevelop would also be awesome.
The third key component is the web site. This is the central place for users to find and browse libraries, to read about details, and to have all links to what you need to use them in one place.
The web site currently is a simple static site with all its HTML and CSS generated from the meta data by the inqlude command line tool. Contributing data is still quite easy by providing patches to the data in the git repository. With GitHub's web interface you can even do that just using your web browser.
There are a few things worth pointing out explicitly as I got similar questions about these from various people.
The first thing is that Inqlude is meant to be a collection of pointers to releases, web sites, documentation, and packages. It's not meant to host the actual code, tarballs, or any other content belonging to the libraries. There are plenty of better ways to do that, and all the projects out there already have something. Inqlude is just meant to be the central hub where you find them all.
Another thing which came up from time to time is the question of dependencies. We don't want to implement yet another package management system, or another dependency resolver, so we rely on integration with the native tools and mechanisms of the platforms we run on. Still, it would be nice to express dependencies in the metadata somehow, so that you have an easy way to judge what you will need to run a given library. We will need to find the best way to do that; maybe a tier concept, like the one KDE Frameworks 5 uses, would do the trick.
Finally, I would like to stress that Inqlude is open to proprietary libraries as well. The infrastructure and the tooling are all free software, but Inqlude is meant as an open project that collects all libraries which are valuable to users, on the same terms. The license is part of the metadata, so it's transparent to users under which terms a library can be used, and this also allows libraries to be categorized on the web site accordingly. There is still a little bit of work missing to do that in a nice way, but that will be done soon. Free software libraries of course have the advantage that all information, code, and packages are directly available and can be accessed immediately.
There are a couple of short term goals I have for Inqlude, mostly to clean up loose ends from the work which happened during the last couple of weeks:
- Collect and accurately present generic information about libraries, which is not tied to a release. This is particularly relevant for providing a place for libraries, which are under development and haven't seen a formal release yet.
- As said above, the listing of proprietary libraries needs some work to categorize the data according to the license. Then we can display libraries of all licenses nicely on the web site.
- Currently we have one big entry for the Qt base libraries. It would be nice to split this up and list the main modules of Qt separately, so it's easier to get an overview of their functionality, and use them in a modular way.
There also are a number of longer term goals. Some of them include:
- Integration with Qt Designer, so that available libraries can be listed from within the IDE and used in your own development without having to deal with external tools, separate downloads, or stuff like that.
- Build packages in the Open Build Service, so that ready-to-use binary packages are available for all the major Linux distributions. This could possibly be automated, so that ideally putting the metadata up on Inqlude would be all it takes to generate native packages for openSUSE, Fedora, Ubuntu, etc.
- Integration with distributions, so that libraries can be installed from Inqlude via the native package management systems. This already works for openSUSE, provided the metadata is there, but it would be nice to expand this support to other systems as well.
- Upstream the metadata, so that it can be maintained where it's most natural. To keep the data up to date, it would be best if the upstream developers maintained it in the same place and in the same way as they maintain the library itself. This needs a little bit of thought and tooling to make it convenient and practical, and it's probably something we only want to do once the format has settled down and is stable enough to not change frequently anymore.
There might be more things you would like to see to happen in Inqlude. I'm always happy about feedback, so let me know.
This was and is a fun side project for me. It's amazing what you can achieve with the help of the community and by putting together mostly existing bits and pieces.
Most of you will probably know that as my "day job", I am a student currently pursuing my master's degree in computer science. This, of course, also entails some original research.
In this blog post, I will describe both one of these efforts and a practical use case of Simon's upcoming dictation features, all conveniently rolled up into one project: ReComment.
A recommender system tries to aid users in selecting e.g., the best product, the optimal flight, or, in the case of a dating website, even the ideal partner - all specifically tailored to the user's needs. Most of you have probably already used a recommender system at some point: Who hasn't ever clicked on one of the products in the "Customers Who Bought This Item Also Bought..." section on Amazon?
The example from Amazon uses what is conventionally called a "single-shot" approach: based on the system's information about the user, a set of products is suggested. In contrast, "conversational" recommender systems actively interact with the user, thereby refining their understanding of the user's preferences incrementally.
Such conversational recommender systems have been shown to work really well in finding great items, but obviously require more effort from any single user than a single-shot system. Many different interaction methods have been proposed to keep this effort to a minimum while still finding optimal products in reasonable time. However, these two goals (user effort and convergence performance) are often contradictory, as the less information the user provides, the less information is available to the recommendation strategy.
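The core loop of such a system can be sketched very compactly. The toy below is not taken from the ReComment paper; the products, attributes, and update rule are invented to illustrate the general critique-based idea: show an item, take a critique, adjust the preference model, repeat.

```python
# Toy critique-based conversational recommender. Items, attributes, and
# the weight-update rule are invented for illustration.

products = {
    "camera A": {"price": 300, "weight": 450},
    "camera B": {"price": 500, "weight": 300},
    "camera C": {"price": 200, "weight": 600},
}

def score(item, weights):
    # Lower price and weight are better; weights encode user priorities.
    return -(weights["price"] * item["price"] + weights["weight"] * item["weight"])

def recommend(weights):
    """Return the best-scoring product under the current preference model."""
    return max(products, key=lambda name: score(products[name], weights))

weights = {"price": 1.0, "weight": 1.0}
print(recommend(weights))   # initial, uninformed suggestion

# The user critiques the suggestion ("cheaper!"), so the system boosts
# the importance of price and re-ranks.
weights["price"] *= 3.0
print(recommend(weights))   # refined suggestion favors the cheap option
```

Each interaction cycle adds a little information to the preference model, which is exactly where the effort-versus-convergence trade-off mentioned above comes from.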
In our research we intend to slightly sidestep this problem, which is traditionally combated with increasingly complex recommendation strategies, and instead make it easier for the user to provide complex information to the system: ReComment is a speech-based approach to building a more efficient conversational recommender system.
What this means exactly is probably best explained with a short video demonstration.
(The experiment was conducted in German to find more native-speaking testers in Austria; be sure to turn on subtitles!)
Implementation
Powering ReComment is Simond with the SPHINX backend, using a custom-built German speech model. The NLP layer uses relatively straightforward keyword spotting to extract meaning from user feedback.
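Keyword spotting in this spirit amounts to scanning the recognized utterance for a known vocabulary and mapping each hit to a preference signal. The sketch below is illustrative only; the keyword list is invented (ReComment's actual vocabulary was German and derived from the pilot study described next).

```python
# Illustrative keyword spotting over recognized speech. The keyword
# vocabulary here is an invented English example, not ReComment's.

KEYWORDS = {
    "cheaper":   ("price",  -1),   # attribute, desired direction
    "expensive": ("price",  +1),
    "lighter":   ("weight", -1),
    "bigger":    ("screen", +1),
}

def extract_critiques(utterance):
    """Scan an utterance for known keywords and map each one to an
    (attribute, direction) preference signal."""
    critiques = []
    for token in utterance.lower().split():
        token = token.strip(",.!?")        # drop trailing punctuation
        if token in KEYWORDS:
            critiques.append(KEYWORDS[token])
    return critiques

print(extract_critiques("I'd like something cheaper and lighter, please"))
# -> [('price', -1), ('weight', -1)]
```

The recommender can then feed these signals straight into its preference model, which is what makes spoken feedback so information-dense compared to clicking.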
A pilot study was conducted with 11 users to confirm and extend the choice of recognized keywords and grammar structures. The language model was modified to heavily favor keywords recognized by ReComment during decoding. Recordings of the users from the pilot study were manually annotated and used to adapt the acoustic model to the local dialect.
ReComment itself is built in pure Qt to run on Linux and the BlackBerry PlayBook.
Results and Further Information
To evaluate the performance of ReComment, we conducted an empirical study with 80 participants, comparing the speech-based interface to a traditional mouse-based interface. We found that users not only reported higher overall satisfaction with the speech-based system, but also found better products in significantly fewer interaction cycles.
The research was published at this year's ACM Recommender Systems conference. You can find the presentation and the full paper as a PDF in the publications section on my homepage.
The code for the developed prototype, including both the speech-based and the mouse-based interface, has been released as well.