Planet KDE

Planet KDE - http://planetKDE.org/

Monday Report: Application Design

Mon, 2014-08-04 06:41
For the planet readers: This post is written by Philipp Stefan
Last week we saw the release of our design pattern guidelines; this week we focused on using them to prototype application designs. Some of our work has already been made public, like Andrew's calendar prototype, which saw some amazingly detailed feedback. We really love top-notch input like that, keep it coming, people! Andrew has also been working on a design for a music player, and we already have a developer interested in helping us make it a reality. As always, if you have ideas, bring them on! In another thread, EraX has released a few more excellent mockups of how he imagines a future Muon Discover could look. If you have some time, consider giving him feedback: the more feedback he gets, the better the design proposal will eventually turn out, so don't shy away from involving yourself :).

A few users have also kickstarted the work on a better tags GUI for Dolphin. Unfortunately we, besides Thomas, didn't have much time to respond to this proposal; we hope to reach out more throughout this week. Additionally, the work on redesigning the desktop configuration dialogue has started. Some rough ideas have been sketched out, but nothing is final yet, and we're still arguing about what has to be included where. Currently there is no winning proposal in sight, though I'm sure this will change over the course of the week.
One project I'm personally very excited about is the work on designing an API that makes *PIM's mail functionality available via QML. This will enable us to write e-mail clients in QML or use KMail functionality in e.g. plasmoids. The developer behind this task wants to make a prototype client to see what the API needs in order to work best for developers. We come into play by designing the prototype client. However, the VDG hasn't quite finished the mockups yet, so it's your time to shine. A few users have already responded with feedback and mockups of their own. Currently the discussion is focused on interaction patterns and the general layout of such an e-mail client, so it is in its very early stages.
Besides these applications, the VDG is also working with developers (or without) on an image viewer and a video player. Beyond that, we want to make slight improvements to key areas of Plasma 5, e.g. the system tray. As you can see there's still much to do, but we're pleased with the progress made so far.
*I falsely wrote that the API would expose KMail functionality, when it in fact makes KDE PIM's mail functionality accessible via QML, my apologies. 

The votes are in!

Mon, 2014-08-04 04:51

Every backer who pledged 25 euros or more had a chance to vote for their favorite feature -- and now the votes are in and have been tallied up! Here are the twelve features that Dmitry will be working on for Krita 2.9:

However, we have one backer at the 750 euro level, and he has the override button! His choice is 4: Improved anti-aliasing in the transform tool, which was itself already a popular choice!

These are the twelve features that didn't make the cut because we didn't manage to reach the super-stretch goal and get funding for Sven as well:

But we already implemented a bunch of features from the list: perspective transformation, transforming selections, mask layer view, mask display, improved straight line tool...

And the resource manager is nearly done, and we've had a special request outside the kickstarter for an improved overview docker. Python scripting might make it to 2.9, we've got a prototype running. Multi-image, multi-view, multi-window looks set to go in soonish.

Thanks to the support of nearly 700 people, Krita 2.9 will be a great release!


A Wallpaper Plugin Demo For Plasma 5.

Mon, 2014-08-04 04:07

As part of the core Plasma team I have spent a long time helping with the migration to make everything QtQuick 2.0 based, making sure we get the most out of the OpenGL backing.

This weekend I wanted to make some sort of demo which shows the power of this in the form of an interactive wallpaper.


Clicking on the desktop simulates a firework at that location. Frantic clicking will simulate the entirety of November the 5th (or July the 4th for people across the pond). When not active, the whole thing uses no more resources than any regular static wallpaper.

Installation

Download the complete zipfile here.

Install it from the command line:

plasmapkg2 -t wallpaperplugin -i fireworks.zip

Then under "Desktop Settings" change the wallpaper type to "fireworks".

Why on Earth would I want this nonsense on my desktop?

Realistically this isn't something that would ever go into the default desktop installation. However, by getting good at silly projects like this, we build and optimise for useful things: smooth, subtle animations and shadows where they're actually useful.

I want to make something like that!

The actual source code is fairly straightforward.
The QML Particle system is very much like the particle system found in Blender (The main similarity being that I don't understand how to use the particle system in Blender) with emitters and affectors.

In this case a circular emitter forms the base of the firework, emitting a burst of coloured particles in all directions; a trail emitter then follows each coloured particle, emitting a spread of small white particles to create a sparkly trail effect.

With the power of particles, sprites and even embedding other OpenGL framebuffers, I encourage more people to write some interesting interactive wallpapers.


Taking advantage of OpenGL from Plasma

Sun, 2014-08-03 17:17

I’m excited, and I hope you’ll be too.

David Edmundson and I have been working hard the last weeks. It’s not that we don’t usually work hard, but this time I’m really excited about it.

A bit of context: in Plasma an important part of the system drawing is painting frames (others are icons, images and the like). Those are in general the elements that are specified in the Plasma themes. These will be buttons, dialog backgrounds, line edit decorations, etc.

So far, to paint those we were assembling the full image on the CPU and then sending it over to OpenGL as one big texture; we would then compose all the different frames, according to the information provided by QtQuick, through the Qt Scene Graph. There are 2 main problems with this approach.

  1. We were maintaining huge textures in memory. Every frame was completely stored in both main memory and GPU memory, which means that the bigger the dialogs are, the more memory we consume, even though the texture is flat.
  2. Every time we resize the frame, we have to re-assemble the frame in CPU memory and upload it again.
First: The 9-patch approach

First we made it possible for the frames to be rendered as separate parts and assembled by the GPU. This isn't always possible, because Plasma themes are quite complex, so now we have 2 different paths: if the theme element can take advantage of the optimization, it will use the new code; otherwise it will keep working beautifully on the former, thorough implementation [1].

Therefore, instead of rendering the whole frame, we now upload 9 textures to the GPU and let it either tile or stretch them, depending on the settings in the theme (a small sketch of the geometry follows the list below). This way:

  1. we are uploading 9 tiny textures rather than a big texture.
  2. when the frame is resized, we tell the nodes to resize and the GPU does the job [2].
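To picture the 9-patch split, here is a minimal, hypothetical sketch of the geometry; the struct and function names are made up for illustration and are not the actual Plasma Framework API:

// Hypothetical sketch: splitting a frame of size w x h into the nine source
// rectangles of a 9-patch, given the border sizes from the theme.
#include <QRect>
#include <QVector>

struct Borders { int left, right, top, bottom; };

QVector<QRect> nineRects(int w, int h, const Borders &b)
{
    const int xs[3] = { 0, b.left, w - b.right };                // column origins
    const int ws[3] = { b.left, w - b.left - b.right, b.right }; // column widths
    const int ys[3] = { 0, b.top, h - b.bottom };                // row origins
    const int hs[3] = { b.top, h - b.top - b.bottom, b.bottom }; // row heights

    QVector<QRect> rects;
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            rects << QRect(xs[col], ys[row], ws[col], hs[row]);
    return rects;
}

The corners keep a fixed size, the edge pieces tile or stretch along one axis and the centre along both, so on resize only the target rectangles change while the nine small textures stay untouched.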
Second: Caching the textures

Now that everything was in place, we'd have the same small 9 elements many times over, but we kept uploading them to the GPU again and again. They are little textures, but it's still better to re-use what we already have. To do so, we've added a little hash table that keeps track of the already created textures so we can re-use them. This way, we get to tell the Qt Scene Graph to use a texture that has already been uploaded rather than a new one (a rough sketch of the idea follows the numbers below). We've run some tests; here are the results:

  • In PlasmaShell we get 342 misses and 126 hits, so roughly a 25% bandwidth and memory improvement
  • In KRunner we get 108 misses and 369 hits, so roughly a 350% memory and bandwidth improvement.
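The caching idea itself can be pictured with a very small sketch. This is an illustrative, hypothetical class built on Qt Quick's public scene-graph API (QQuickWindow::createTextureFromImage), not the actual Plasma Framework code, and the cache key is simplified to a plain string:

// Hypothetical sketch of a shared texture cache for the scene graph.
#include <QHash>
#include <QImage>
#include <QQuickWindow>
#include <QSGTexture>
#include <QSharedPointer>
#include <QString>
#include <QWeakPointer>

class TextureCache
{
public:
    QSharedPointer<QSGTexture> texture(QQuickWindow *window, const QString &key, const QImage &image)
    {
        QSharedPointer<QSGTexture> tex = m_cache.value(key).toStrongRef();
        if (!tex) { // miss: upload the image once and remember it weakly
            tex = QSharedPointer<QSGTexture>(window->createTextureFromImage(image));
            m_cache.insert(key, tex.toWeakRef());
        }
        return tex; // hit: the already uploaded texture is re-used
    }

private:
    QHash<QString, QWeakPointer<QSGTexture>> m_cache;
};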
Future, further work

Sadly enough, raw memory usage is still quite high. When running Plasma Shell under massif, most of the memory usage is still reported as being in the GPU driver (or rather i965_dri.so), so we'll have to dig into it [3]. We've found some ways to improve this, for example by enforcing OpenGL ES 2, but this requires Qt 5.4, which is due in October. I implemented it nevertheless, and it works fine.

Being more precise, a big offender is using a wallpaper image. We've looked into it and the code looks fine, yet it makes a big difference, so big that I still don't understand how it can be. A good suggestion if you're testing Plasma 5 on a system low on memory is to run it with the plain color wallpaper. We can save up to 30% of memory consumption, no kidding (the exact figure depends on whether you ask massif, htop or ksysguard, but they all agree it's a big deal). We've investigated a bit and found ways to improve the situation there, but if you are interested, feel free to join!

Finally, another problem with regard to memory consumption is QML. We make heavy use of it and it shows memory-wise. We should see if we can adopt any optimizations to streamline our usage, but admittedly it's much better than one would have expected.

Testing

If you want to give it a try, you can already find most of this in master, and it will be part of the next KDE Frameworks 5.1 release, which will be available by the second week of August.

Hope you liked it, it was a great exercise to investigate all this! I learned a lot and gained quite some respect for the Qt Scene Graph and QML development teams. Keep rocking!

[1] More precisely, as of today, when there are no hint-compose-over-border or mask-*-overlay elements.

[2] The exceptions being the hint-stretch-borders and hint-tile-center hints, where we still have to re-render on resize.

[3] David, Vishesh and I all have Intel drivers, but I guess it's a good card to test on, given how mainstream it currently is.


NGC6992 with Ekos

Sun, 2014-08-03 15:28
A lot has changed in the last few months in Ekos, KStars' advanced astrophotography tool. The powerful built-in sequence queue is more robust now and supports in-sequence autofocusing, autoguiding limits with dither support, and autopark functionality. The astrometry.net based alignment module has been improved to support the online astrometry solver using web services, thereby eliminating the need for an offline astrometry solver that requires gigabytes of star indexes in order to solve.

Coupled with ever-improving INDI drivers, Ekos provides users with a complete astrophotography stack on Linux. While the majority of Ekos development takes place indoors with the help of INDI's powerful device simulator, nothing beats on-site testing with all the hardware connected and ready to go. So a couple of days ago, I decided to put Ekos to the test!

Since I live in a heavily light-polluted area about 30 km south of Kuwait City, my friend and I decided to conduct the astrophotography session some 100 km away in the northwest of Kuwait, in the AlSalmy desert. It's a quite desolate desert, but it is more rocky than sandy, so that would help a lot in case some wind came our way. After setting up the equipment (Orion Sirius EON 120mm APO, QSI 583 CCD, Lodestar autoguider, and Moonlite focuser), I performed the initial autofocus routine, followed by plate-solving a frame in order to establish the telescope's actual position in the night sky. Before the Ekos alignment module, this process would take anywhere between 10-20 minutes to get the scope properly aligned using 2- or 3-star alignment, and even after that, the GOTO might not be accurate. With the Ekos alignment module, GOTO is highly accurate, and it increases in accuracy with each subsequent frame captured and solved.

Using the Ekos Sequence Queue, I added 4 jobs, each consisting of 6x300s exposures in one filter (Hydrogen Alpha, Red, Green, and Blue). Then I used the Guide module to calibrate the guider and start the autoguiding process after slewing to NGC6992 and engaging tracking.

After the light frames were completed, flats and darks were taken. The flats were captured using an artificial uniform light source. Due to a temperature of 39 degrees Celsius in the desert (around 8:15 PM),  the CCD was only cooled to zero degrees.

The next day I downloaded all the images to my desktop and used PixInsight to process them. PixInsight is a really powerful tool if mastered well, and there is still a lot to learn about this great tool. I ended up with a decent image that was captured and processed 100% on Linux!



Krita: illustrated beginners guide in Russian

Sat, 2014-08-02 15:46
Some time ago our user Tyson Tan (creator of Krita's mascot Kiki) published his beginners guide for Krita. Now this tutorial is also available in Russian!

If you happen to know Russian, please follow the link :)



OpenStack Summit Paris 2014 - CFS

Sat, 2014-08-02 12:04
The Call for Speakers period for the OpenStack Summit, taking place November 3-7, 2014 in Paris, ended this week. Now the voting for the submitted talks has started; it ends at 11:59 pm CDT on August 6 (6:59 am CEST on August 7).
I've submitted a talk to the storage track. The title is "Ceph Performance Analysis on Live OpenStack Platform". If you are interested in seeing my talk in Paris, I would appreciate it if you voted for the talk here. You can also find the abstract of the talk there.
Two of my colleagues also submitted interesting talks. Marc Koderer submitted "OpenStack QA in a nutshell" and Stefan Heuser submitted "Telekom Deutschlands Approach to NFV" to the "Telco Strategies" track.

Porting to KDE Frameworks 5

Fri, 2014-08-01 07:43

Porting to KDE Frameworks 5 is so easy even I can do it.

Almost all Kubuntu software is ported already. Some of the applications even managed to go qt-only because of all the awesome bits that moved from kdelibs into Qt5. It is all really very awesome I have to say.


GSoC Project Progress – Visual Effects and Placemarks

Fri, 2014-08-01 07:13

Hello everyone!

I think it’s time to talk again about my progress with my Google Summer of Code project.

But first, since there are always people who read a blog post of mine for the first time, it is important to make a short introduction about the intent of my project. This is also a way of reminding the others about what I'm working on. The application to which I contribute is Marble, a virtual globe and world atlas. What I'm specifically working on is the Annotate Plugin, which deals with controlling and rendering a bunch of graphic items on Earth's surface. These graphic items are either placemarks, image overlays or polygons. I also recently started working on adding polylines, a new graphic element which will allow users to draw paths, but this will be the subject of a further blog post, hopefully the next one.

So, what have I been working on since my last blog post? Well, there are two main things: improving user experience for the whole editing mode and adding customization option for placemarks.

The first one, regarding the user experience improvements, includes two new features: node highlighting and a small animation when merging nodes. My mentors and I decided that node highlighting is an important visual effect when editing polygons, since it gives the user a clue that clicking the nodes actually does something: it marks them as selected if left-clicked and opens an RMB (right mouse button) menu if right-clicked. This has also been the purpose of managing the cursor shape in certain situations. You can watch these effects in action in the screencast below (watch in HD for much higher quality).

I will also add some screenshots for those who want a quicker impression of the features (Click on photos for higher resolution!).

Note node highlighting and cursor shape

As for the second visual effect which improves the user experience, the node-merging animation, it was entirely my idea; what made me think of it was the not very intuitive way node merging was performed before. I thought that we needed something which shows explicitly what happens when merging two nodes. You can see this animation in the following screencast:

I think it is pretty nice, what do you think? :)

The next major feature I worked on is adding customization options for placemarks. This part of the Annotate Plugin was totally new to me, since I hadn't made any changes to placemarks before. As you may recall, the first thing I started my work on this plugin with was adapting an old implementation of the ground overlay editing mode, and then I continued with polygon editing, but I hadn't dealt with the text annotation implementation. However, my experience with the other two graphic elements and the similarities between them led to faster development. Watch the video which highlights the placemark customization flow right below.

I will also add some screenshots for those who want a quicker impression of the features:

Note unavailable fields in the dialog


As you can see in both the screencast and these two screenshots, the edit dialog for placemarks has a couple of fields which are unavailable at the moment (label scale, icon color/scale). This is because Marble is designed so that the data (such as coordinates, name, description, label/icon scale, etc.) is kept apart from the objects which deal with the rendering, and these objects don't have an implementation for the unavailable options I mentioned above. However, they will be implemented soon. Also, I'm planning to implement another way of managing icons in the near future.

I hope you all like the new way to mark and describe particular places as well as the enhanced visual effects. Stay tuned for more new features to come!

Călin Cruceru



Utopic Alpha 2 Released

Fri, 2014-08-01 04:21

Alpha 2 of Utopic is out now for testing. Download it or upgrade to it to test what will become 14.10 in October.


Plasma Media Center – DVB Settings Interface

Fri, 2014-08-01 03:12

Hello folks \o/

It's been a while since my last update, but the DVB implementation for Plasma Media Center (PMC) is fully functional, so from now on I am polishing the user interfaces. Up until now I've been working on the settings panel (I uploaded some early snapshots in a previous post), and after a lot of playing I think that the UI is quite mature now. Below you'll see how the settings panel looks. LibVLC automates most of the settings, so this gives me the freedom not to bloat the UI with too many features for the time being. I would really appreciate any feedback and thoughts about the UI. Next week I'll upload some images (maybe a video too) of the final UI of the main DVB-T player in action!

Last but not least, I would really like to thank you all for giving a helping hand with my "call for help". I got a lot of e-mails from you, and they helped me a lot. Kudos.

 


fonts in the current era

Thu, 2014-07-31 13:39

While our CPU clock speeds may not be increasing as they once did and transistor density scaling is slowing, other elements of our computing experience are undergoing impressive evolution. One such area is the display: screen pixel density is jumping and GPUs have become remarkable systems. We are currently in a transition between "low density" and "high density" displays, and the various screens in a house are likely to have a widely varying mix of resolutions, pixel densities and graphics compute power.
In the house here we have a TV with 110 dpi, a laptop with 125 dpi, another laptop with 277 dpi and phones that top out at over 400 dpi! 
Now let's consider a very simple question: what font size should one use? Obviously, that depends on the screen. Now, if applications are welded to a single screen, this wouldn't be too much of a problem: pick a font size and stick with it. Unfortunately for us reality is more complex than that. I routinely plug the laptops into the TV for family viewing. If I get my way, in the near future our applications will also migrate between systems so even in cases where the screen is welded to the device the application will be mobile.
The answer is still not difficult, however: pick a nice base font size and then scale it to the DPI. Both of the major proprietary desktop operating systems have built this concept into their current offerings and it works pretty well. In fact, you can scale the whole UI to the DPI of the system.
This still leaves us with how to get a "reasonable font size". Right now we ship a reasonable default and allow the user to tweak this. How hard can it be, right?
Well, currently KDE software is doing two things very, very wrong when it comes to handling fonts in this modern era. Both of these can be fixed in the Frameworks 5 era if the application developers band together and take sensible action. Let's look at these two issues.

Application specific font settings

What do Kate, Konqueror, Akregator, Dolphin, KMail, Rekonq, KCalc, Amarok, Korganizer, Konsole and, I presume, many other KDE applications have in common? They all allow the user to set custom fonts. Some of these applications default to using the system fonts but still allow the user to modify them; some always use their own fonts. Neither is good, and the latter is just plain evil.
Kontact, being made up of several of the above applications, is a real pit of font sadness since each of its components manages its own font settings.
This wasn't such a big deal in the "old days" when everyone's screen was equally good (or equally crappy, depending on how you wish to look at it ;). In that world, the user could wade through these settings once and then never touch them again.
Today with screens of such radically different pixel densities and resolutions, the need to quickly change fonts depending on the configuration of the system is a must-have. When you have to change those settings in N different applications, it quickly becomes a blocker.
Moreover, if every application is left to its own devices with fonts, at least some will get it wrong when it comes to properly scaling between different screens. When applications start moving between devices this will become even more so the case than it is now.
The solution is simple: applications must drop all custom font settings.
Before anyone starts hollering, I'm not suggesting that there should be no difference between the carefully selected font in your konsole window and the lovingly chosen font in your email application. (I would suggest that, but I have a sense of the limits of possibilities ... ;) What I am suggesting is that every font use case should be handled by the central font settings system in KDE Frameworks. Yes, there should be a "terminal" font and perhaps one for calendars and email, too. A categorization system that doesn't result in dozens of settings but which serves the reasonable needs of all desktop applications could be arrived at with a bit of discipline.
With that in place, applications could rely on the one central system getting the font scaling right, so that when the user changes screens (either the screen connected to the device or the screen the application is running on) the application's fonts will "magically" adjust.

Scaling the user interface to font size

The idea of scaling the user interface to the font size is one that the current Plasma team has recently decided to adopt. I cannot state strongly enough just how broken this is. Watching the results when plugging that laptop with the 3300x1800 @ 277 dpi screen into the 110 dpi television is enough to make baby kittens cry tears of sadness. The reason is simple: the font sizes need to scale to the screen. When they aren't scaled, the UI becomes comically large (or tragically small, depending on the direction you are going).
.. and that's ignoring accessibility use cases where people may need bigger fonts but really don't need bigger window shadows to go with it, thank you very much.
The answer here is also very simple, so simple that both Apple and Microsoft are doing it: scale the fonts and the UI to the DPI of the screen. Auto-adjust when you can, let the user adjust the scaling to their preference.
The reason Plasma 5 is not doing this is that Qt doesn't report useful DPI information. Neither does xdpyinfo. So now one might ask where I got all those DPI figures at the start of this blog entry. Did I look them up online? Nope. I used the EDID information from each of those screens as reported by xrandr or similar tools. With the help of the monitor-parse-edid tool, which is 640 lines of Perl, I was able to accurately determine the DPI of every screen in this house. Not exactly rocket science.
With DPI in hand, one can start to think about scaling font sizes and the UI as well, independently. It doesn't even require waiting for Qt to get this right. All the information needed is right there in the EDID block.
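To make the arithmetic explicit, here is a tiny hedged sketch; the EDID block carries the physical screen size in millimetres, and the rest is one division (the function names and the 96 dpi baseline are illustrative assumptions, not any KDE API):

// Hypothetical sketch: deriving DPI from the physical size in the EDID block
// and turning it into a scale factor relative to a 96 dpi baseline.
double dotsPerInch(int pixels, int physicalSizeMm)
{
    const double inches = physicalSizeMm / 25.4; // EDID reports millimetres
    return pixels / inches;
}

double uiScaleFactor(int widthPixels, int widthMm)
{
    return dotsPerInch(widthPixels, widthMm) / 96.0;
}

// Example: a panel 294 mm wide with 3200 horizontal pixels works out to
// roughly 276 dpi, i.e. a scale factor of about 2.9, while a screen around
// 110 dpi stays close to 1.0.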
There is a caveat: some KVMs strip the EDID information (bad KVM, bad!), older monitors may not have useful EDID info (the last version of the EDID standard was published eight years ago, so this isn't new technology) and occasionally a vendor goofs and gets the EDID block wrong in a monitor. These are edge cases, however, and should not be the reason to hold everyone else back from living la vida loca, at least when it comes to font and UI proportions.
In those edge cases, allowing the user to manually tweak the scaling factor is an easy answer. In fact, this alone would be an improvement over the current situation! Instead of tweaking N fonts in N places, you could just tweak the scaling factor and Be Done With It(tm). There is even a natural place for this to happen: kscreen. It already responds to screen hotplug events, allows user tweaking and restores screen-based configurations automagically, it could add scaling to the things it tracks.
If people really wanted to get super fancy, creating a web service that accepts monitors, EDID blocks and the correct scaling factors according to the user and spits out recommendations by tallying up the input would take an afternoon to write. This would allow the creation of a "known bad" database with "known good" settings to match over time. That's probably overkill, however.
The other area of edge cases is when you have two screens with different DPI connected to the system at the same time. This, too, is manageable. One option is to simply recognize it as a poorly-supported edge case and keep to one global scaling value. This is the "user broke it, user gets to keep both halves" type of approach. It's also the easiest. The more complex option, and certainly the one that would give better results, is to have per-screen scaling. To make this work, scaling needs to change on a per-window basis depending on which screen the window is on. This would be manageable in the desktop shell and the window manager, though a bigger ask to support in every single application. It would mean applications would need not only to drop their in-application font settings (which they ought to anyway) but to make fonts and UI layout factors a per-window thing.
If you are running right now to your keyboard to ask about windows that are on more than one screen at a time, don't bother: that's a true edge case that really doesn't need to be supported at all. Pick a screen that the window is "on" and scale it appropriately. Multi-screen windows can take over the crying for the kittens, who will now be bouncing around with happy delight on the other 99.999% of systems in use.

Build a future so bright, we gotta wear shades

These two issues are really not that big. They are the result of some unfortunate decision making in the past, but they can be easily rectified. As it stands, the way fonts are handled completely and without question ruins the user experience, so hopefully as applications begin (or complete) the process of porting to Frameworks 5 they can pay attention to this.
.. and just so nobody feels too badly about it all: all the Linux desktop software I've tested so far for these issues has failed to make the grade. So the good news is that everything sucks just as much .. and the great news is that KDE is perfectly poised to rise above the rest and really stand out by getting it right in the Frameworks 5 era.


Of vectors and scalable things.

Thu, 2014-07-31 10:18
Moving away from my original plan, today we will be talking about Vectors.

To start this series of posts I had a main motivator, SVG. It is a great file format; it's the file type I use day in, day out and the format I use the most to create all of my images…
But every so often the question about scalable UIs and vectors pops up, and someone will say something like "we should just use vectors to scale things". To that, I will usually say something like "Scalable Vector Graphics are scalable, but your screen is not", hoping it will close the conversation right there, and it usually does.

However the above statement is only partly correct and is not the definitive reason why we should avoid off-the-shelf vectors as the source image format for our UI assets.

 

The definition of "scalable" is a bit like being "impassioned".

The way we define "scalable" UIs, as we have seen in the past posts, is very peculiar, and we tend to use it the way it suits us best, ignoring the practical differences between the different meanings of the concept. Ergo, like being impassioned, the target of our focus is more what we want it to be rather than what it really is.
This tends to produce the confusions and misunderstandings that are so common in this area, precisely because the Scalable part in SVG refers to one particular type of that concept, and most of the time not the type of scalable we need in a UI.

So what does scalable mean for a Scalable Vector Graphic?

An SVG, or any other mainstream vector format, is formed (mostly) of mathematical information about paths and their control points (Bézier curves); its visual size is only relevant in regard to the render size of the canvas it's on, and as a result you can zoom an image almost infinitely and never see pixels (the pixels are just rendered for a given view area and change according to the section of the vectors in that area).
This makes it a great format for scaling images to really huge sizes. An image rendered at 40000×40000 px and scaled down to 1000×1000 will look exactly like the image originally rendered at 1000×1000.
Now as we have seen so far, this is often not the type of scalable we want.

  • Firstly, it is X/Y dependent, whereas we often want X/Y-independently scalable elements (think of scaling a rounded rectangle), and for those we will need something like BorderImage components.
  • Secondly, as you zoom out many elements become sub-pixel and difficult to see, though they are still there. For example, in maps we may want vectors that render differently depending on the zoom level, like making roads disappear/appear as you zoom out/in, as well as simplifying aspects of the path itself.
  • Thirdly, it ignores pixels. As we mentioned, pixels are still very important: on lower-definition screens we can make use of the pixel to create sharp contrasts, which makes your visual element "pop out" against the background. However, SVG, with its mostly perfect mathematical description of path positions, completely ignores the rendering grid and as a result can produce unsharp elements that are off the pixel grid by fractions of a pixel (this is only problematic for rectangular elements that align with the pixel grid, but we do use it as an advantage in designs).
SVGs in QML.

You can use SVG in QML right now as a source format, but beware: it won't be re-rendered unless you tell it to, so if you scale the item or even change its width and height you will end up seeing pixels. You can create bigger renders that provide higher definition and are more zoomable, but at the cost of more time to render and a lot of memory for the cached image.
Also, SVGs can be very complex; I have created SVGs that take several hours to render. Many of my past wallpapers for KDE were done in outline mode and I would only occasionally look at them with filters and colors on, and I do have a powerful desktop to render those; trying similar things on a mobile device is not a great idea.
SVG support in Qt is limited and many things won't work. The filters are mostly not working, so the look will dramatically change: if you expect blur-based drop shadows to work, you will not see those, and the same goes for multiply filters, opacity masks, etc., etc…
So, in a nutshell, don't use SVG as a base source image unless you know its limitations and how it works. It's a wonderful format if you understand its limitations and strengths, and it is sometimes useful in QML.

Vector sunset wallpaper crop; several hours to render on my old Linux PC.

What about other vector formats, like fonts?

There is a special vector format that we use all the time and that is also scalable, a format that dealt with these problems many years ago: fonts…

Fonts are special types of monochromatic vector paths that can have special hints to cater for pixel-scaling issues. They do that via 2 methods:

  • Font hinting (also known as instructing) is the use of mathematical instructions to adjust the font’s visual appearance so that it lines up with a virtual grid.
  • Font kerning is moving the glyph in the relative x direction to achieve a more pleasing visual result and sometimes to help with the grid alignment issues…

All of this magic is done by your local font render engine, and the extent to which these operations are applied depends on your local font rendering settings…

Now, since the introduction of QML 2 the way fonts are rendered has changed, and by default font hinting is ignored. As a result, if you zoom into a font by animating the pixel size you get a nice smooth zoom effect, but slightly blurrier fonts all around, since they are not hinted.
QML does allow you to have native-looking fonts by doing:

Text { text: "Native here!"; renderType: Text.NativeRendering }

However, if you try to do a zoom effect here by animating the pixel size, you will get a "jumpy" text feeling because, as the size increases, the hinting instructions of the font will keep on trying to adjust to an ever-changing pixel grid.
Also, the default method does a vector-like scaling via the distance-field method when changing the scale property, whereas when using native rendering you see the pixels of the 1:1 scale ratio being scaled.
Another side effect of the distance-field method used by default is that if the scale or font size is very large you start to see the inaccuracies of the method. This is valid for any font size you pick: the method generates the distance-field glyph from a small image based on the font, and it is not updated as you scale it.


Also if the font is not well formatted (glyph bounding box smaller than the glyph itself) it might clip the font in some border areas with weird visual results.

So, my advice here is: if you need readable text and you don't want pinch-to-zoom or any other effect that changes the font size in an animated way, use the native rendering. If you want something dynamic, then go for the default mode. Or you can even try a compromise solution where, while animating, you use the default and at the end turn native rendering on. It's mostly a matter of what you consider most important and more polished at the end of the day.
Also noteworthy is that on the higher DPI screens the hinting instructions lose a lot of their importance since the pixel size and respective sub-pixel antialiasing ‘grays’ become much smaller and relatively less important in relation to the font body. The same is true for non square pixels like many (but not all) AMOLED screens have.

Next!

Next post will return to the subject of making scalable X/Y independent elements that work well with DPI metrics…
By the way, we will be discussing these subjects at the training days of Qt Developer Days 2014 Berlin; if you are interested in these subjects, registration is here.

So see you soon, here on my next post or at DevDays.



Text Splitting and Indexing

Thu, 2014-07-31 08:28

Over the last week, we have been working on improving the file searching experience in Plasma. We were mostly doing a decent job, but we were lacking in terms of proper Unicode support and making it simpler to search in non-English languages. This blog post is a simplified explanation of what goes on internally.

For the purpose of this discussion, I’m going to treat all files as blobs of text.

How does indexing work?

When we’re indexing a file we typically have to take all the text and split it into words. This process is called Text Segmentation or Tokenization.

The most trivial implementation of this is just splitting on any white space. However, in practice it gets way more complex, as punctuation needs to be taken into account. Fortunately, there is an existing standard for this.
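That standard is Unicode's text segmentation rules, and Qt exposes them through QTextBoundaryFinder. The sketch below is only an illustration of splitting along those word boundaries, not necessarily the implementation used by the indexer:

// Hedged sketch: splitting text into words along Unicode word boundaries.
#include <QString>
#include <QStringList>
#include <QTextBoundaryFinder>

QStringList splitIntoWords(const QString &text)
{
    QStringList words;
    QTextBoundaryFinder finder(QTextBoundaryFinder::Word, text);

    int start = 0;
    while (finder.toNextBoundary() != -1) {
        const int end = finder.position();
        // Keep only runs that end a word item, skipping whitespace and punctuation.
        if (finder.boundaryReasons() & QTextBoundaryFinder::EndOfItem)
            words << text.mid(start, end - start).toLower();
        start = end;
    }
    return words;
}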

After obtaining the words, one needs to simplify them. Since Plasma is not tied to one language, we need to do this in a language-independent manner.

Currently we do the following:

  • Strip all diacritic marks. So words such as ‘mañana’ become ‘manana’.

  • Simplify all the characters. The moment you move away from simple English characters, things get complex. The same word can be represented in multiple ways in memory: for example, the letter ñ can be written as ñ or as n + ◌̃. We want to use the minimum number of characters to represent it (see the sketch after this list).
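A hedged sketch of both steps, assuming Qt's Unicode support: decomposing to normalization form D splits a character like ñ into a base letter plus a combining mark, the marks are dropped, and the remainder is recomposed into the shortest (composed) form. This is illustrative, not the indexer's actual code:

// Hypothetical sketch: strip diacritics and keep the shortest representation.
#include <QString>

QString simplifyWord(const QString &word)
{
    // Decompose: 'ñ' becomes 'n' followed by the combining tilde U+0303.
    const QString decomposed = word.normalized(QString::NormalizationForm_D);

    QString stripped;
    for (const QChar &c : decomposed) {
        if (!c.isMark()) // drop combining diacritic marks
            stripped.append(c);
    }

    // Recompose what is left so each character uses as few code units as possible.
    return stripped.normalized(QString::NormalizationForm_C);
}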

Finally, we're ready to store the words. We generally store them in a big table where every word corresponds to the files it was found in.

Here each file is represented by a number in order to save space.

We additionally store where in the file every word was found. This comes at a high cost, as with Xapian storing positional information doubles your database size. This means slower indexing and more IO consumption.

How does searching work?

The initial part of the search process is quite similar to the indexing process. When we get a string to search for, we split it up into words and then simplify each word in the exact fashion we did when indexing those words.

After this we simply look up each of the words in the table and return the set of files which matched every word.

For example, if we were searching for the words árk Zombie in the above table, it would look as follows:

ark AND zombie -> (1, 3, 8) AND (6, 8) -> 8.

Phrases

The explanation above works for simple words, but the moment you bring in more complex words, stuff starts to get a little messy.

Imagine searching for an email address vhanda@kde.org. This would be split into 3 words: vhanda, kde and org. We could just search for these 3 words, but that's not exactly what the user expects: they expect these words to appear in that exact order. This is where the positional information that we stored during indexing is used. We now search for those 3 words, but we make sure they appear consecutively.

This does give some minor false positives such as a document containing the text “vhanda kde org”. But in general, it gives us what we want. It also allows users to explicitly search for words appearing consecutively.
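To make the positional check concrete, here is a small hedged sketch of the idea in plain C++ (not the Xapian machinery actually used): given the stored position lists of each query word within one file, the phrase matches when the positions line up consecutively.

// Hypothetical sketch: phrase matching with positional information.
// positions[i] holds the sorted positions of the i-th query word in one file.
#include <algorithm>
#include <vector>

bool phraseMatches(const std::vector<std::vector<int>> &positions)
{
    if (positions.empty())
        return false;

    // Try every occurrence of the first word as a potential phrase start.
    for (int start : positions[0]) {
        bool consecutive = true;
        for (std::size_t i = 1; i < positions.size() && consecutive; ++i) {
            const auto &p = positions[i];
            // The i-th word must appear exactly i positions after the start.
            consecutive = std::binary_search(p.begin(), p.end(), start + static_cast<int>(i));
        }
        if (consecutive)
            return true; // a "vhanda kde org" style consecutive run was found
    }
    return false;
}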

Filtering vs Searching

Searching on the desktop is quite different from searching on the web. Not only are we expected to be much faster, but the wealth of information available is much smaller. This results in users expecting searches to work as a filter.

When searching on the web, one generally types the full word. On the desktop, however, depending on the feedback, one will only type a part of the word.

Example: Say we are searching for a file with the name Dominion - The Flood. One can expect the user to start typing Dom, see many other results pop up, and then type flood in order to get the desired file. They might never actually type the full word dominion.

Searching by typing only parts of the word gets more complex from an implementation point of view. We only have a mapping from (word) -> (file). So in order to search for a part of the word, we need to iterate over the table and look for every word which starts with that prefix. This makes the query quite long.

Example: Searching for Fi rol might expand to (fi OR fight OR fill OR finger OR fire) AND (rol OR role OR roller OR rollex)

This whole method of expanding the prefix to every word breaks down when the word is extremely small. Depending on your index, expanding one word could result in over 10000 words. In practice it results in far more than 10000, and that makes the query slower and consumes a crazy amount of memory just to represent the query. In these cases we typically try to guess which words occur more frequently than others and only expand the word to the most frequently occurring ones.
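The expansion can be pictured with an ordinary sorted dictionary of terms and their document frequencies. The sketch below is a simplified, hypothetical illustration of "expand the prefix, but keep only the most frequent terms", not the actual Xapian-based implementation:

// Hypothetical sketch: expand a prefix to its most frequent matching terms.
// 'dictionary' maps each indexed word to the number of files containing it.
#include <algorithm>
#include <map>
#include <string>
#include <utility>
#include <vector>

std::vector<std::string> expandPrefix(const std::map<std::string, int> &dictionary,
                                      const std::string &prefix, std::size_t limit)
{
    std::vector<std::pair<int, std::string>> matches;

    // A std::map is sorted, so all terms sharing the prefix form one contiguous range.
    for (auto it = dictionary.lower_bound(prefix);
         it != dictionary.end() && it->first.compare(0, prefix.size(), prefix) == 0; ++it) {
        matches.emplace_back(it->second, it->first);
    }

    // Sort descending by frequency and keep only the 'limit' most common terms.
    std::sort(matches.rbegin(), matches.rend());
    if (matches.size() > limit)
        matches.resize(limit);

    std::vector<std::string> terms;
    for (const auto &m : matches)
        terms.push_back(m.second); // these get OR-ed together in the final query
    return terms;
}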

So, what’s changed?

With Plasma 5.1, we’ve moved away from using Xapian’s internal Query Parser and word segmentation engine. We’re using our own custom implementation in Qt.

This gives us more control over the entire process, it makes it more testable as we have unit tests for every condition, and lets us modify it in custom ways such as splitting on _, removing diacritic marks and expanding every word when searching for queries.


GSoC Status Report: Code Completion features

Wed, 2014-07-30 12:54

Context: I'm currently working on getting Clang integration in KDevelop into usable shape as part of this year's Google Summer of Code. See the initial blog post for my road map.

While we had basic support for code completion provided by Clang from the beginning (thanks to David Stevens for the initial work), it still didn't really feel that useful in most cases. In the last two weeks I've spent my time streamlining the code completion features provided by Clang.

This blog post is going to be full of screenshots showing the various features we've been working on lately.

Task recap: Code completion
  • Code completion: Implement “virtual override completion”. Also the automatic replacement of . to -> and vice-versa in case of pointer/non-pointer types is still missing.

  • “Implement function” feature: If a function is only declared but not defined, we offer an "implement function" helper item in the code completion. This is currently not yet ported to clangcpp.

  • "Switch to Definition/Declaration” feature: If the cursor is at some declaration in a header file, KDevelop offers a shortcut to automatically switch to its definition in the source file (opening the corresponding file in the active view). This is not yet possible in clangcpp.

  • Show viable expressions for current context: When inside a function call or constructor call, show viable expressions which fit the signature of the current argument. Example: int i = 1; char* str = “c”; strlen( – this should show variable str in the completion popup as best match.

  • Include completion: Oldcpp offers completion hints when attempting to #include some file, port this to clangcpp.

Achievements

Virtual override completion

Simple case

When in a class context, we can now show completion items for methods that are declared virtual in the base class.

KDevelop showing the "virtual override helper". By pressing Ctrl+Space inside the derived class, KDevelop will propose overriding virtual functions from the base class

By pressing Enter now, KDevelop automatically inserts the following code at the current cursor position:

virtual void foo()

Oh, no! Templates!

We've put a bit of work into making this feature work with templated base classes, too. Have a look at this:

KDevelop showing the "virtual override helper". KDevelop knows the specialized version of the virtual method in the base-class and proposes to reimplement it

Nice, right?

Implement function helper

When encountering an undefined method which is reachable from within the current context, KDevelop offers to implement it via a tooltip.

KDevelop showing the "implement function helper". By pressing Ctrl+Space in an appropriate place, KDevelop offers to implement undefined functions (this also works for free functions, of course)

By pressing Enter now, KDevelop automatically inserts the following code at the current cursor position:

void Foo::foo() { }

This works for all types of functions, be they class member functions, free functions and/or functions in namespaces. Since this is mostly the same code path as the "virtual override helper" feature, this plays nicely with templated functions, too.

"Switch to Definition/Declaration” feature

Sorry, no pictures here, but be assured: It works!

Pressing Ctrl+, ("Jump to Definition") while having the cursor on some declaration will bring you to the definition. Conversely, pressing Ctrl+. ("Jump to Declaration") on some definition will bring you to the declaration of that definition.

Show viable expressions for current context

Best matches

KDevelop showing completion items when calling a function. KDevelop offers all declarations that are reachable from and useful for the current context. In addition to that, best matching results are put to the front. As you can see variable str gets a higher "match" than variable i.

These are some of the features we actually get for free when using Clang. We get the completion results by invoking clang_codeCompleteAt(...) on the current translation unit and iterating through the results libclang is offering us. Clang gives highly useful completion results; the LLVM team did an amazing job here.

Another example: Enum-case completion

KDevelop showing completion items when in a switch-context and after a 'case' token. KDevelop is just offering declarations that match the current context. Only enumerators from SomeEnum are shown here.

You can play around with Clang's code completion ability from the command-line. Consider the following code in some file test.cpp:

enum SomeEnum { aaa, bbb };

int main()
{
    SomeEnum e;
    switch (e) {
    case
//      ^- cursor here
    }
}

Now do clang++ -cc1 -x c++ -fsyntax-only -code-completion-at -:7:9 - < test.cpp and you'll get:

COMPLETION: aaa : [#SomeEnum#]aaa
COMPLETION: bbb : [#SomeEnum#]bbb

Awesome, right?

Issues: Too many code completion results from Clang

One thing I've found a bit annoying about the results we're getting is that Clang also proposes to explicitly call the constructors/destructors or assignment operators in some cases. Or, in other words: it proposes too many items.

Consider the following code snippet:

struct S {
    void foo();
};

int main()
{
    S s;
    s.
//    ^- cursor here
}

Now doing clang++ -cc1 -x c++ -fsyntax-only -code-completion-at -:8:7 - < test.cc results in:

COMPLETION: foo : [#void#]foo()
COMPLETION: operator= : [#S &#]operator=(<#const S &#>)
COMPLETION: S : S::
COMPLETION: ~S : [#void#]~S()

Using one of the last three completion results would encourage Clang to generate code such as s.S, s.~S or s.operator=. While these constructs point to valid symbols, this is likely undesired.
Solution: We filter out everything that looks like a constructor, destructor or operator declaration by hand.
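As a hedged illustration of that filtering (a sketch against libclang's C API, not necessarily the code in kdev-clang), one can inspect each completion result's cursor kind and typed text and drop the unwanted entries:

// Hypothetical sketch: hiding constructor, destructor and operator completions
// from the results returned by clang_codeCompleteAt().
#include <clang-c/Index.h>
#include <string>

static std::string typedText(CXCompletionString completion)
{
    const unsigned chunks = clang_getNumCompletionChunks(completion);
    for (unsigned i = 0; i < chunks; ++i) {
        if (clang_getCompletionChunkKind(completion, i) != CXCompletionChunk_TypedText)
            continue;
        CXString text = clang_getCompletionChunkText(completion, i);
        std::string result = clang_getCString(text);
        clang_disposeString(text);
        return result;
    }
    return std::string();
}

static bool shouldHide(const CXCompletionResult &result)
{
    if (result.CursorKind == CXCursor_Constructor
        || result.CursorKind == CXCursor_Destructor
        || result.CursorKind == CXCursor_ConversionFunction) {
        return true;
    }
    // Assignment and other operators show up as ordinary methods or functions,
    // so fall back to checking the completion's typed text.
    return typedText(result.CompletionString).compare(0, 8, "operator") == 0;
}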

So, in fact, what we end up showing the user inside KDevelop is:

KDevelop showing completion items after a dot member access on the variable s. KDevelop is just offering useful declarations, hiding all undesired results from Clang.

Just what you'd expect.

Wrap-Up

Code completion features are mostly done (at least from the point-of-view of what Clang can give us here).

Still, there are other interesting completion helpers that could^Wshould be ported over from oldcpp to kdev-clang, such as Olivier's lookahead-completion feature (which I find quite handy). This is not yet done.

I'm writing up yet another blog post which is going to highlight some of the other bits and pieces I've been busy with during the last weeks.

Thanks!


Logging in to Picasa 3.9 under Linux

Tue, 2014-07-29 16:50

A few years ago I showed my father Picasa under Linux. He liked it and started using it to upload his photos, and has been using it for almost 6 years, even though Google discontinued Picasa for Linux at version 3.0 (Picasa is at 3.9 now).

Unfortunately, a few weeks ago it seems Google decided to kill support for old APIs on the server side, and Picasa 3.0 for Linux started giving back an error when trying to upload an image ("Could not find POST url" or similar). I suggested waiting to see if they would come back, but it seems they haven't, so I've had to fix it for him.

Since he's heavily invested in Picasa, I've had to install Picasa for Windows under Wine to make it work. It has not been trivial to get working, so I'll share it here for others who committed the error of trusting proprietary software and services.

The story is this: Installing Picasa 3.9 for Windows under Wine is pretty easy (next, next, next). The problem is, once you are running it, being able to log in. The first problem is that the webview used for login doesn't even show. Most of the interwebs suggest installing IE8 using winetricks to solve that, and it indeed solves the problem of the webview not showing, but I still couldn't log in (interestingly, the webview will tell you if you typed the password wrong).

At this point I was stuck for a few hours. I even found some dude who claimed he had installed Google Chrome Frame for Internet Explorer and that it had fixed things for him. But not for me.

After a few hours, I stopped trusting the internet and started to think. I have a Windows installation lying around, and I can log in from there; once logged in, Picasa does not ask for the password again, so it must be storing something, no?

So I made a copy of the Program Files folder and compared it after logging in; the folders were exactly the same. So it was not stored there, which makes sense, since login is per user, not per machine. Next I tried that weird Personal Folder (the Windows $HOME) but could not find any change either. The last chance was the registry: I used http://www.nirsoft.net/utils/reg_file_from_application.html and saw that when logging in, Picasa writes a few entries in HKEY_CURRENT_USER\Software\Google\Picasa\Picasa2\Preferences, namely GoogleOAuth, GoogleOAuthEmail, GoogleOAuthServices and GoogleOAuthVersion. I copied these over to the Wine installation (with "wine regedit") and now my father can run Picasa just fine again.

Lessons learned:
* Non-free software will eventually come back and hit you; if possible, don't use it for stuff that is critical to you.
* Think about your problem; sometimes it's easier than just googling random instructions from the internet.


ownCloud 7 Release Party August 8, Berlin

Tue, 2014-07-29 13:09
In a little over a week, on the 8th of August, you're all invited to join Danimo, Blizz and myself at a release party to celebrate the awesomeness that is ownCloud 7 in Berlin!



When and where

We will gather at 7pm at the Wikimedia office in Berlin:
Tempelhofer Ufer 23/24
10963 Berlin
Germany
It is awesome that we can use their office, a big thank you to our fellow data lovers!!

So we start to gather at 7, and around 7:30 we'll have a demo of/talk about ownCloud 7. We will order some pizza to eat. After that: party time!







Rohan on ubuntuonair.com

Tue, 2014-07-29 11:07

Kubuntu Ninja Rohan was on today’s ubuntuonair talking about Plasma 5 and what is happening in Kubuntu.  Watch it now to hear the news.

 


YouID Identity Claim

Tue, 2014-07-29 08:54

di:sha1;eCt+TB1Pj/vgY05nqB48sd1seqo=?http=trueg.selfhost.eu%3A8899



[GSoC'14]: Chronicle of a hitchhiker’s journey so far

Tue, 2014-07-29 06:43

nuqneH [Klingon | in English: "Hello"], I am Avik [:avikpal], and this summer I got the opportunity to work with Andreas Cord-Landwehr [:CoLa] to contribute to the KDE-Edu project Artikulate. My task is to implement a way to tell a learner how well his/her pronunciation compares to that of a native speaker.

Let me warn you about a couple of things beforehand: firstly, the post ahead is going to be a bit lengthy to read, but I have tried to keep things interesting; secondly, I have a habit of addressing people by their IRC nicks, though I have tried to put their real names as well ;)

So let me dive right into what I have been doing for the last couple of months. The first thing I had to do was to port Artikulate to QtGStreamer 1.0. The API changes in QtGStreamer mainly follow the changes made in GStreamer 1.0. The biggest change is that Gst::PropertyProbe is gone, or in our case QGst::PropertyProbePtr is gone, which results in a compilation error, so the related code had to be adapted, i.e. worked around to do the same thing. I got some great insights and tips from George Kiagiadakis [:gkiagia] and Diane Trout [:detrout] at #qtgstreamer and finally resolved this.

But I was still getting a runtime error because Artikulate linked to both libgstreamer-1.0.so.0 and libgstreamer-0.10.so.0. It is a very common problem, as GStreamer does not use symbol versioning and in some cases programs end up linking both of them through indirect shared library dependencies. I used pax-utils and lddtree (thanks to CoLa for telling me about these two great tools) to find out the cause of the linking error: libqtwebkit.so.4 links the GStreamer 0.10 shared library as its dependency. CoLa got libqtwebkit built against GStreamer 1.0 and did some code changes and refactoring.

Also, we decided against keeping the Phonon multimedia backend, and Artikulate now supports only the GStreamer backend. To be precise, Artikulate is now at QtGStreamer 1.2, and for the last few days the CI system has had it as well. This is just a heads-up; I will let CoLa share the details of this work himself, so stay tuned.

For pronunciation comparison I had initially decided to generate a fingerprint of the audio file and then compare the two fingerprints (i.e. learner pronunciation and native pronunciation). Most of the phrases available with the trainer have one or two syllables and are around 4-5 seconds in duration, and the present Chromaprint APIs don't generate distinguishable fingerprints for audio of such short duration. I talked to Lukas Lalinsky from AcoustID about how the Chromaprint library could be tweaked so as to get distinguishable fingerprints for short audio files. Chromaprint does an STFT analysis (FFT over a sliding window), and the window size and overlap determine how much data the algorithm generates. I went on trying to better the results by tweaking the library, but it was giving me only erratic data.

This was the time when I decided that it would be prudent to start working on a very basic audio fingerprint generator to suit my purpose. The concept is well discussed and illustrated in numerous papers and blogs, so it wasn't hard to break it up into modules.

The first job was to generate a spectrogram of the audio clip. I used the sox API to generate a spectrogram; the following image illustrates such a spectrogram.

Spectrogram of ‘European Union’ pronounced by me in Bengali

Next I wrote code to find the peaks in amplitude, where a peak is a (time, frequency) pair corresponding to an amplitude value which is the greatest in a local neighborhood around it. Other pairs around it are lower in amplitude and thus less likely to survive noise.

My next job is to group these neighborhood peaks into collections/bins and then use a hash function to get the final fingerprint. I am currently working on implementing this part.

Now, to get the peaks out of the spectrogram, I first computed the histogram of the image, and there came an idea: to see how different the histograms of the spectrograms of two pronunciations are. There are several statistical ways to compare histograms, and so far the results that I have found are quite promising. I shall try to demonstrate using an example.

I asked CoLa for audio recordings of the word "weltmeisterschaft" [World Championship in English], and he sent me several recordings; let me take a couple of those.

And its spectrogram looks like this-

CoLa’s pronunciation (sample 1)

And this is another sample from CoLa

And its spectrogram looks like-

CoLa’s pronunciation (sample 2)

It may be noted that between the above two spectrograms there is only a small linear shift, which is expected and desired.

Before giving examples of my pronunciations, let me clarify how I have compared the two histograms. To compare two histograms (H1 and H2), we first have to choose a metric (d(H1,H2)) to express how well both histograms match. I have computed four different metrics for the matching: correlation, chi-square, intersection and Bhattacharyya distance.
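Those four metrics map directly onto what OpenCV's compareHist() offers, so a comparison along these lines could look like the sketch below. This is an assumption about the tooling, shown only to make the method concrete; the file handling and the normalisation step are illustrative:

// Hypothetical sketch: comparing the histograms of two spectrogram images
// with the four metrics mentioned above (OpenCV 2.x API).
#include <opencv2/opencv.hpp>
#include <iostream>

static cv::Mat grayHistogram(const cv::Mat &image)
{
    const int channels[] = { 0 };
    const int histSize[] = { 256 };
    const float range[] = { 0, 256 };
    const float *ranges[] = { range };
    cv::Mat hist;
    cv::calcHist(&image, 1, channels, cv::Mat(), hist, 1, histSize, ranges);
    cv::normalize(hist, hist, 1.0, 0.0, cv::NORM_L1); // make the two histograms comparable
    return hist;
}

int main(int argc, char **argv)
{
    if (argc < 3)
        return 1;
    // Spectrogram images of the two pronunciations (e.g. PNGs produced by sox).
    const cv::Mat a = cv::imread(argv[1], CV_LOAD_IMAGE_GRAYSCALE);
    const cv::Mat b = cv::imread(argv[2], CV_LOAD_IMAGE_GRAYSCALE);
    if (a.empty() || b.empty())
        return 1;

    const cv::Mat h1 = grayHistogram(a);
    const cv::Mat h2 = grayHistogram(b);
    std::cout << "Correlation:   " << cv::compareHist(h1, h2, CV_COMP_CORREL) << '\n'
              << "Chi-Square:    " << cv::compareHist(h1, h2, CV_COMP_CHISQR) << '\n'
              << "Intersection:  " << cv::compareHist(h1, h2, CV_COMP_INTERSECT) << '\n'
              << "Bhattacharyya: " << cv::compareHist(h1, h2, CV_COMP_BHATTACHARYYA) << '\n';
    return 0;
}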

Next I present to you my first attempt at pronouncing “weltmeisterschaft”

Yeah, I admit that, though it was my first attempt, I obviously could have done better; it sounds kind of like "wetmasterschaft". And here is the report card (err… spectrogram) of my poor performance.

My first attempt

But I was not ready to give up yet… I made some disgusting though necessary gurgling sounds, tried to get my vocals into tune, and this is what I came up with.

and its spectrogram looks like

weltmeisterschaft- by me after a few attempts

Now I shall show you how the comparison metrics stack up: for the correlation and intersection methods, the higher the metric the more accurate the match, and for the other two, the lower the metric the better the match.

*It is actually a comparison between the same two pronunciations by CoLa with which the rest are compared; this is just to give a sense of the achievable accuracy.

The next job is to converge on a single metric which will take into account all four metrics that I currently have. Meanwhile I will also work on the fingerprinting part, as it would also make it possible to point out the specific parts of a pronunciation in which further improvement is needed. I am working on removing noise from the spectrograms, as it is needed for finding the intensity peaks (part of the fingerprinting work); I have finished writing code to find an intensity threshold for the noise from the histogram.

Below is a histogram of the spectrogram of my somewhat better pronunciation of "weltmeisterschaft":

Histogram- different colours depict different channels

I hope to combine all these modules into a standalone application and share it with community members for testing; meanwhile, you may use the code at https://github.com/avikpal/noise-removal-and-sound-visualization and test it yourself.

Now it's time to fire up the warp machine, but even in a parallel universe I will be eagerly listening on #kde-artikulate under the nick "avikpal" for any kind of suggestions and/or queries. You may also mail me at avikpal[dot]me[at]gmail[dot]com.

Qapla’![Klingon | in English- "Good-bye"] until next time.

