Planet KDE

Planet KDE - http://planetKDE.org/
Updated: 1 day 18 hours ago

Season of KDE – January

Sun, 2015-02-01 02:29

I am happy to say that I have successfully completed the StarHopper UI for KStars. In my previous blog post, I gave an overview of the feature and documented its usage. My latest commit ensures that when the StarHopper algorithm returns an empty list, the UI checks for it and displays an error message.

Apart from this, I also worked on a Unit Spin-Box widget, which combines a Double Spin-Box and a Combo-Box. Units are hard-coded in KStars; with this widget, a unit can be registered along with its conversion factor, and the widget returns the value converted to the selected unit. This patch was the first commit I pushed on my own instead of sending it to my mentor. Git still haunts me though :/
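To give an idea of the concept, here is a minimal sketch of such a composite widget. The class and method names below are my own illustration of the idea, not the actual KStars code:

#include <QComboBox>
#include <QDoubleSpinBox>
#include <QHBoxLayout>
#include <QWidget>

// Sketch of a unit-aware spin box: a QDoubleSpinBox paired with a QComboBox
// holding unit names and their conversion factors to a base unit.
// Illustrative only; names do not match the real KStars widget.
class UnitSpinBoxSketch : public QWidget
{
public:
    explicit UnitSpinBoxSketch(QWidget *parent = nullptr)
        : QWidget(parent)
        , m_spinBox(new QDoubleSpinBox(this))
        , m_unitBox(new QComboBox(this))
    {
        auto *layout = new QHBoxLayout(this);
        layout->addWidget(m_spinBox);
        layout->addWidget(m_unitBox);
    }

    // Register a unit together with the factor converting it to the base unit.
    void addUnit(const QString &name, double conversionFactor)
    {
        m_unitBox->addItem(name, conversionFactor);
    }

    // The entered value, converted according to the currently selected unit.
    double value() const
    {
        return m_spinBox->value() * m_unitBox->currentData().toDouble();
    }

private:
    QDoubleSpinBox *m_spinBox;
    QComboBox *m_unitBox;
};

A caller would then do something like addUnit(QStringLiteral("arcminutes"), 1.0 / 60.0) and read value() back to get the input already converted to the base unit.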

Currently I’m working on the Observing Planner optimizer. It requires me to learn QML (sometimes the fact that import names can’t start with a lowercase letter makes me want to pull my hair out) and integrate it with C++. Due to academic constraints, I haven’t been able to dedicate a lot of time to this, but I hope to spend more time once my exams are done.

I think the takeaway for me from this Season of KDE experience is that my knowledge base has expanded so much! Also, the community is a great one and I am extremely happy to be part of it. The people are very helpful and nice; it’s just amazing! :D

Cheers! :)

 


Categories: FLOSS Project Planets

GNU i18n for high priority projects list

Sat, 2015-01-31 17:45

Today, for a special occasion, I’m hosting this guest post by Federico Leva, which deals with some frequent topics of my blog.

A special GNU committee has invited everyone to comment on the selection of high priority free software projects (thanks M.L. for spreading the word).

In my limited understanding from looking at it every now and then over the past few years, the list so far has focused on “flagship” projects which are perceived to be the biggest opportunities, or roadblocks to remove, for the goal of having people use only free/libre/open source software.

A “positive” item is one which makes people want to embrace GNU/Linux and free software in order to use it: «I want to use Octave because it’s more efficient». A “negative” item is an obstacle to free software adoption, which we want removed: «I can’t use GNU/Linux because I need AutoCAD for work».

We want to propose something different: a cross-functional project, which will benefit no specific piece of software, but rather all of them. We believe that the key to success for each and all of the free software projects is going to be internationalization and localization. No i18n can help if the product is bad: here we assume that the idea of the product is sound and that we are able to scale its development, but we “just” need more users, more work.

What we believe

If free software is about giving control to the user, we believe it must also be about giving control of the language to its speakers. Proper localisation of a piece of software can only be done by people with a particular interest and competence in it, ideally native speakers who use the software.

It’s clear that there is little overlap between this group and developers; if nothing else, because most free software projects have at most a handful of developers: all together, they can only know a fraction of the world’s languages. Translation is not, and can’t be, a subset of programming. A GNOME dataset showed a strong specialisation of documenters, coders and i18n contributors.

We believe that the only way to put them in control is to translate the wiki way: easily, the only requirement being language competency; with no or very low barriers on access; using translations immediately in the software; correcting after the fact thanks to their usage, not with pre-publishing gatekeeping.

Translation should not be a labyrinth

In most projects, the i18n process is hard to join and incomprehensible, if explained at all. GNOME has a nice description of their workflow, which however is a perfect example of what the wiki way is not.

A logical consequence of the wiki way is that not all translators will know the software like the back of their hand. Hence, to translate correctly, translators need message documentation straight in their translation interface (context, possible values of parameters, grammatical role of words, …): we consider this a non-negotiable feature of any system chosen. Various research agrees.

Ok, but why care?

I18n is a recipe for success

First. Developers and experienced users are often affected by the software localisation paradox, which means they only use software in English and will never care about l10n even if they are in the best position to help it. At this point, they are doomed; but the computer users of the future, e.g. students, are not. New users may start using free software simply because of not knowing English and/or because it’s gratis and used by their school; then they will keep using it.

With words we don’t like much, we could say: if we conquer some currently marginal markets, e.g. people under a certain age or several countries, we can then have a sufficient critical mass to expand to the main market of a product.

Research is very lacking on this aspect: there was quite some research on predicting viability of FLOSS projects, but almost nothing on their i18n/l10n and even less on predicting their success compared to proprietary competitors, let alone on the two combined. However, an analysis of SourceForge data from 2009 showed that there is a strong correlation between high SourceForge rank and having translators (table 5): for successful software, translation is the “most important” work after coding and project management, together with documentation and testing.

Second. Even though translation must not be like programming, translation is a way to introduce more people in the contributor base of each piece of software. Eventually, if they become more involved, translators will get in touch with the developers and/or the code, and potentially contribute there as well. In addition to this practical advantage, there’s also a political one: having one or two orders of magnitude more contributors of free software, worldwide, gives our ideas and actions a much stronger base.

Practically speaking, every package should be i18n-ready from the beginning (the investment pays back immediately) and its “Tools”/”Help” menu, or similarly visible interface element, should include a link to a website where everyone can join its translation. If the user’s locale is not available, the software should actively encourage joining translation.

Arjona Reina et al. 2013, based on the observation of 41 free software projects and 22 translation tools, actually claim that recruiting, informing and rewarding the translators is the most important factor for success of l10n, or even the only really important one.

Exton, Wasala et al. also suggest receiving in situ translations in a “crowdsourcing” or “micro-crowdsourcing” limbo, which we find superseded by a wiki. In fact, they end up requiring a “reviewing mechanism such as observed in the Wikipedia community” anyway, in addition to a voting system. Better to keep it simple and use a wiki in the first place.

Third. Extensive language support can be a clear demonstration of the power of free software. Unicode CLDR is an effort we share with companies like Microsoft or Apple, yet no proprietary software in the world can support 350 languages like MediaWiki. We should be able to say this of free software in general, and have the motives to use free software include i18n/l10n.

Research agrees that free software is more favourable for multilingualism because compared to proprietary software translation is more efficient, autonomous and web-based (Flórez & Alcina, 2011; citing Mas 2003, Bowker et al. 2008).

The obstacle here is linguistic colonialism, namely the self-disrespect billions of humans have for their own language. Language rights are often neglected and «some languages dominate» the web (UN report A/HRC/22/49, §84); but many don’t even try to use their own language even where they could. The solution can’t be exclusively technical.

Fourth. Quality. Proprietary software we see in the wild has terrible translations (for example Google, Facebook, Twitter). They usually use very complex i18n systems or they give up on quality and use vote-based statistical approximation of quality; but the results are generally bad. A striking example is Android, which is “open source” but whose translation is closed as in all Google software, with terrible results.

How do we reach quality? There can’t be an authoritative source for the best translation of every single software string: the wiki way is the only way to reach the best quality, by gradual approximation, collaboratively. Free software can be more efficient and have a great advantage here.

Indeed, quality of available free software tools for translation is not a weakness compared to proprietary tools, according to the same Flórez & Alcina, 2011: «Although many agencies and clients require translators to use specific proprietary tools, free programmes make it possible to achieve similar results».

We are not there yet

Many have the tendency to think they have “solved” i18n. The internet is full of companies selling i18n/l10n services as if they had found the panacea. The reality is, most software is not localised at all, or is localised in very few languages, or has terrible translations. Explaining the reasons is not the purpose of this post; we have discussed or will discuss the details elsewhere. Some perspectives:

A 2000 survey confirms that education about i18n is most needed: «There is a curious “localisation paradox”: while customising content for multiple linguistic and cultural market conditions is a valuable business strategy, localisation is not as widely practised as one would expect. One reason for this is lack of understanding of both the value and the procedures for localisation.»

Can we win this battle?

We believe it’s possible. The above can look too abstract, but it’s intentionally so. Figuring out the solution is not something we can do in this document, because making i18n our general strength is a difficult project: that’s why we argue it needs to be on the high priority projects list.

The initial phase will probably be one of research and understanding. As shown above, we have opinions everywhere, but too little scientific evidence on what really works: this must change. Where evidence is available, it should be known more than it currently is: a lot of education on i18n is needed. Sharing and producing knowledge also implies discussion, which helps the next step.

The second phase could come with a medium-term concrete goal: for instance, it could be decided that within a couple of years at least a certain percentage of GNU software projects should (also) offer a modern, web-based, computer-assisted translation tool with low barriers on access etc., compatible with the principles above. Requirements will be shaped by the first phase (including the need to accommodate existing workflows, of course).

This would probably require setting up a new translation platform (or giving new life to an existing one), because current “bigs” are either insufficiently maintained (Pootle and Launchpad) or proprietary. Hopefully, this platform would embed multiple perspectives and needs of projects way beyond GNU, and much more un-i18n’d free software would gravitate here as well.

A third (or fourth) phase would be about exploring the uncharted territory with which we share so little, like the formats, methods and CAT tools existing out there for translation of proprietary software and of things other than software. The whole translation world (millions of translators?) deserves free software. For this, a way broader alliance will be needed, probably with university courses and others, like the authors of Free/Open-Source Software for the Translation Classroom: A Catalogue of Available Tools and tuxtrans.

“What are you doing?”

Fair question. This proposal is not all talk. We are doing our best, with the tools we know. One of the challenges, as Wasala et al. say, is having a shared translation memory to make free software translation more efficient: so, we are building one. InTense is our new showcase of free software l10n and uses existing translations to offer an open translation memory to everyone; we believe we can eventually include practically all free software in the world.

For now, we have added a few dozen GNU projects and others, with 55 thousand strings and about 400 thousand translations. See also the translation interface for some examples.

If translatewiki.net is asked to do its part, we are certainly available. MediaWiki has the potential to scale incredibly, after all: see Wikipedia. In the future, a wiki like InTense could be switched from read-only to read/write and become an über-translatewiki.net, translating thousands of projects.

But that’s not necessarily what we’re advocating for: what matters is the result, how much more well-localised software we get. In fact, MediaWiki gave birth to thousands of wikis; and its success also lies in its principles being adopted by others, see e.g. the huge StackExchange family (whose Q&A are wikis and use a free license, though more individual-centred).

Maybe the solution will come with hundreds or thousands of separate installs of one or a handful of software platforms. Maybe the solution will not be to “translate the wiki way”, but a similar yet different concept, which still puts localisation in the hands of users, giving them real freedom.

What do you think? Tell us in the comments.

Categories: FLOSS Project Planets

Three Laws for successful “Google Summer of Code”-like projects

Sat, 2015-01-31 17:29

(This post is aimed at mentors and organization coordinators of internship programs like Google Summer of Code. If you’re a student, you’re probably more interested in how to write a successful proposal, in which case, you better head over to this great post by Teo Mrnjavac.)

It’s a very busy time for me, I’m in thesis crunch mode and applying for jobs at the same time. This is why this year I have decided to step down from my role as organizer of internship programs at ownCloud.

In the past two years I have coordinated and mentored projects for Google Summer of Code (GSoC), the Outreach Program for Women and Google Code-in, and in the two years before that I was a Season of KDE and a GSoC student myself. In all these fantastic experiences, I had the chance to observe some patterns that recur in successful projects. As the administrators of these programs repeatedly point out, success starts from the project ideas page. I agree with this, and in fact I believe that if the description of your project idea obeys some simple criteria, your mentorship will be an enjoyable experience (with very high probability).

This is the time of the year when mentoring organizations are supposed to get their ideas pages ready, so I thought it would be a good time to share some advice on how to write a good project description. I have synthesized the patterns I observed into a set of three laws — the term “law” here is in the sense of “adage”, as in Asimov’s laws or Atwood’s law. By the way, notice I refrained from titling this post »Cosentino’s laws«. You are welcome! :)

Without further ado:

First law. A project is always twice as big as you think it is.

Second law. You must provide your mentee with (at least) a partial specification and a prototype of what you want to be implemented.

Third law. If you think an issue on the bug tracker makes up a nice project idea, then something is wrong with that issue.

 

Even though they’re all pretty much self-explanatory, let me comment more in detail on each law.

The first one can probably be applied more generically to many aspects of life, and yet it’s a rule we always forget to keep in mind when we assign a task to someone else. The factor by which a project eventually scales up varies; in software I have observed that 2x is a good approximation. If you have already written a description for the project you want to mentor this summer, a no-brainer way to make it successful is to go back to the project description and slice it in half (do not recurse too many times :P).

When you read the second law, you may raise the point that an intern will gain a lot of experience by writing a specification and a prototype of the project by himself. In fact, you may argue that somebody who wants to become a good software engineer should learn these important skills and not just how to monkey-code. There are many problems with that. First, writing a specification is hard and writing a prototype is even harder. The time span of a project is usually 3-4 months, and that’s not enough for everything. The intern will only enjoy his project if he sees code running on his machine. It’s true that ideally you want the intern to do not only coding, but also specification, prototyping, documentation, … However, that is impossible, and if you have to cut something, you can’t cut the coding part. Furthermore, only by writing a formal specification himself will the mentor figure out many details of the project, such as size, feasibility and usefulness, which may slip away if he only writes a succinct description of the idea.

The third law is perhaps even more arguable, as it depends on how your organization handles the bug tracker. Here I am claiming that if a bug or a feature request has the same size as a student summer project, then you should split it into smaller issues, or remove it from the bug tracker, because the chances that somebody will fix it are very small. Plus, bug-tracker discussions can be very technical for someone new to the organization. Tag issues that you think are good for first-time contributors as »Junior jobs«, but don’t use them as project ideas; use them as warm-ups and to select your interns.

I ask everyone to help with ownCloud’s Project ideas page. Let’s make sure each project obeys the three laws ;-)

If you have questions concerning GSoC and other student/internship programs at ownCloud, contact ownCloud community manager Jos (@jospoortvliet) and organizers Jan (@jancborchardt) and Thomas (@DeepDiver1975).

 

Categories: FLOSS Project Planets

Sometimes you need to reinvent the wheel

Sat, 2015-01-31 13:37

On behalf of the Calamares team and Blue Systems, I am proud to announce the immediate availability of Calamares 1.0.

Calamares is a distribution independent installer framework. I had the initial idea for Calamares in May 2014, less than a year ago, and out of frustration: many successful independent Linux distributions came with lackluster installers, and all of these installers were a result of competition rather than cooperation. Improving one of the existing installers wouldn’t have fixed this, as every installer was more or less distribution specific. I wanted to create a product that would satisfy the requirements of most Linux distributions, developed as an upstream project for all of them.

With support from Blue Systems and some help from Aurélien Gâteau I started from scratch around June 2014, with a highly modular design and some valuable contributions from KaOS, Manjaro, Maui and Netrunner developers. Contributors from Fedora, BBQLinux, OpenMandriva and the KDE Visual Design Group joined in afterwards. Now, a little over half a year of design and development frenzy later, we chose to call it 1.0. While there is still room for improvement, we have decided that the first development iteration is done, and we are presenting a modest yet feature-complete product.

Calamares is built with Qt 5, C++11, Boost.Python, (bits of) KDE Frameworks 5 and KDE Partition Manager.

Feature highlights include:

  • a completely modular design, with three plugin interfaces: C++, Python and generic process;
  • a threaded job executor, with C++ and Python API;
  • a collection of over 25 modules, ranging from boot loader support to partitioning, to user management and much more, with the opportunity of deploying your own;
  • a self-contained branding component mechanism, which allows distributions to ship a consistent user experience without patching;
  • an advanced partitioning tool, with automation and resize functionality and both DOS and GPT partition table support.

The Calamares team hangs out in #calamares on Freenode, feel free to drop by.

Bugs should be reported to our issue tracker, and pull requests go to GitHub.

Categories: FLOSS Project Planets

Introducing Dirty Presets, Locked Brush Settings and Cumulative Undo in Krita 2.9.

Sat, 2015-01-31 10:30

One of the 2014 Google Summer of Code projects for Krita is going to be in the next release, Krita 2.9. It’s a bit complicated, so here’s a short tutorial in using Mohit’s Dirty Presets, Locked Brush Settings and Cumulative Undo projects!

1. Dirty Presets

This is a feature a lot of people asked for: it allows Krita to remember small changes made to a preset during a session, without saving over the original.
You activate it in the brush settings window by ticking ‘Temporarily Save Tweaks To Presets’.

Then, select a preset.

Now, if you tweak a setting, like, say, opacity, Krita will make the preset ‘dirty’. You can identify dirty presets by the little plus-icon on the preset icon.

To get the original settings back, press the reload button.

To retain these settings, just save the preset.

2. Locked Brush Settings.

Another often requested feature, this allows you to lock the opacity, brush-tip, or even texture.

You activate it by right-clicking the lock beside a setting. Then, select ‘lock’.

Now, the setting will not be reloaded every time you select a new preset.

This can be used, for example, to keep the same texture over all presets.

You can unlock them by right-clicking the lock-icon again.


There are two options here.

Unlock (Drop Locked)
This will get rid of the settings of the locked parameter and take that of the active brush preset. So if your brush had no texture on, using this option will revert it to having no texture.
Unlock (Keep Locked)
This will keep the settings of the parameter even though it’s unlocked.

Finally, the last one.

3. Cumulative undo.

Cumulative undo allows you to have undos merge together. This can be useful if you’re the type to make a lot of tiny strokes, or to save memory.

Cumulative undo is activated via the Undo History Docker. Right-click an undo-state to enable it.

Afterwards, you can tweak its settings by right-clicking the undo-state again.

Start merging time
The number of seconds that must pass before a group of strokes is considered mergeable. So if this is set to five, at least five seconds must have passed before Krita will consider merging these strokes.
Group time
The maximum time allowed between strokes for the next stroke to still count as part of the same group. So if this is set to 1, Krita will put strokes made more than 1 second after the first one into a new merged group.
Split strokes
The minimum number of most recent strokes that stay individually undoable without being merged. So if you have this set to 3 and make five strokes, only the two oldest ones will be merged.

Disable it again by right-clicking an undo-state and unchecking the option.
Once enabled, you can start undoing large sets of strokes! The merged items will be shown with ‘merged’ after their name in the undo history.

Categories: FLOSS Project Planets

KDE Developer Meetings are Not Easy

Fri, 2015-01-30 02:09

You thought you had seen every post about Randa Meetings in 2014, right? Or perhaps you posted about Randa and thought you were super late? Well, abandon all hope: this is the definitive latest post about Randa Meetings 2014 :D

The title may not be surprising to you if you have organized a developer meeting before, or if you have attended one while being close to the organizers (e.g. as a volunteer or as a curious person). But coming from a country in Latin America where no serious developer meetings happen, and reading about KDE sprints here and there all the time, you might have the impression that those things happen out of the blue. In my case, I thought there was some sort of machine that sprint organizers turned on, put some money into, and it would spit out a sprint. Notice that I don’t consider Akademy a developer sprint, so I was very much aware that Akademy is an organizational nightmare challenge.

But attending the Randa Meetings 2014 changed my perspective on organizing sprints. Starting with the fundraiser, then the coordination of arrivals, and other things we could follow through e-mail and blogs, I was able to see that this organization was big. But that was just part of it. You had to arrive in Randa and see how the organizers were taking care of details like network access, food, name tags, food, coordination of visit days, food, our field trip and most importantly, and I think I have not mentioned this before… food. This was, of course, the result of the collaboration of many, not only Mario Fux, who is the name usually associated with the Randa Meetings. In particular, I have to thank Mario’s family for all the food, which I think was an important part of the meeting and which I haven’t mentioned yet. I was able to discuss many of the organizational details with Mario while we walked from Zermatt to Randa, and this helped me understand the dimensions of organizing a sprint like that one.

KDE has many sprints throughout the year, and my experience in Randa has helped me appreciate both the importance and the challenge of organizing them. The Blue Systems office in Barcelona has been really helpful in making it easier for people to organize these sprints, but there is still a lot of manual work organizers need to do. You can help and be part of this by donating to KDE.

 

Categories: FLOSS Project Planets

Getting ready to leave for FOSDEM 2015

Thu, 2015-01-29 15:08

It’s almost time to leave and I can’t wait to see my friends from the OpenSource world again and for my first experience at FOSDEM.

I’ve gotten my luggage ready and I’m (almost) ready to leave.

The KDE T-shirts packed and ready for the road.

Route to FOSDEM.

And I’ve prepared my best and most representative T-shirts.

 

FOSDEM, Here I come!

 

Categories: FLOSS Project Planets

The Initiation

Thu, 2015-01-29 05:30




Hello guys,

It has been almost two months since I last wrote about my experiences. The past two months have been quite hectic as I was trying my best to learn and put into practice the various technologies suggested by my KDE mentors. Today, I would like to write about the new concepts and languages I have learnt.

Qt and QML

Qt and QML are technologies for designing user-interface-centric applications, such as games. QML is mostly used to design the front-end of an application, whereas Qt (C++) is used for the back-end.

My mentors at KDE introduced me to these exciting concepts. Qt and QML were the main things required to complete my SoK project in Kanagram.

When I got to reading about these technologies, the two best sources were relevant videos on YouTube (the VoidRealms Qt channel) and the Qt Project site. It was pretty obvious from the beginning that I was going to face some serious problems, like not being able to implement the things that I was learning, but my mentors at KDE came to my rescue and guided me through the process.

At this point it is imperative that I write a few things about my mentors, Jeremy and Debjit, and the KDE community.

The mentors

Jeremy has been contributing to the community for the past few years. He is one of the major contributors to and maintainers of Kanagram and many other projects. Although I was very interested in what he does in real life, I couldn’t ask that question because that would be a severe violation of the "code of conduct" (whatever that is!). He provided me with useful resources like books and links to helpful websites. He guided me through the entire process and it would have been truly impossible to do what I did without his help. I am sure that he must be a very busy man, but the way he selflessly taught me from scratch was quite amazing. Thanks Jeremy! :)

I was also helped by the KDE community, which harbors some of the most talented people I have ever seen. One of these members is Debjit Mondol, the guy who was appointed as my mentor for SoK 2014. He is a major contributor to Kanagram. Incidentally, Debjit is a student of Jeremy’s, and since Jeremy was already mentoring a few other students, he transferred some of his responsibilities to Debjit, and I ended up being mentored by him.

My contributions

  • I converted the anagram letters into clickable objects for a better user interface.
  • The answer field was also made clickable, so that the user need not type anything.

The future of Kanagram: Kanagram 15.04
After my SoK work is complete, Kanagram will be transformed into a more interactive application that gives users more flexibility. They will be able to click on the letters rather than typing and erasing again and again.

Final thoughts

I have thoroughly enjoyed the SoK program and it has opened up new avenues for me. It introduced me to things which were previously unknown to me and helped me enhance my knowledge. This is just the beginning of something which I hope will prove fruitful in the near future. I have a lot more to learn and I am looking forward to working with these amazing people and contributing more and more to this community.
Categories: FLOSS Project Planets

Plasma 5.2 arrives to Fedora

Wed, 2015-01-28 16:01

It’s here! Plasma 5.2 has been released just yesterday and you don’t have to wait a single minute longer to update your beloved Fedora boxes :-)

I won’t go into detail here about all the new awesome things that are waiting for you in Plasma 5.2, but I totally recommend that you go and read Plasma 5.2: The Quintessential Breakdown by Ken Vermette while you are waiting for your package manager to wade through the update. You can also read the official Plasma 5.2 release announcement, it has fancy animated screenshots ;).

And there’s other news related to Plasma 5.2 and Fedora: Fedora Rawhide has been updated to Plasma 5.2 too. This means that the KDE SIG will ship Plasma 5 in Fedora 22! Of course we will still maintain the Copr repository for our Fedora 20 and Fedora 21 users.

So, how to get Plasma 5.2 on Fedora?

On rawhide, just do dnf update. On Fedora 20 and Fedora 21, if you are already running Plasma 5.1.2 from dvratil/plasma-5 Copr, then all you need to do is to run dnf update. If you are running Plasma 5.1.95 (aka Plasma 5.2 beta) from dvratil/plasma-5-beta Copr, then it’s time to switch back to stable:

dnf copr disable dvratil/plasma-5-beta
dnf copr enable dvratil/plasma-5
dnf update

If you are still running KDE 4 and you want to update to Plasma 5.2, just follow the instructions on dvratil/plasma-5 Copr page.

And if you don’t feel like installing Plasma 5 on your production box right away and would like to just try it out, there’s a live ISO for you. This time I did not forget to add Anaconda, so once you decide that Plasma 5 is good enough for you, you can just install it right from the ISO ;-)

EDIT: I might have included Anaconda, but did not add grub2 to the ISO, so the installer would fail anyway. This has been fixed and updated images are available now on the same link. If you are planning to install from the live ISO, please download the updated images (29-Jan-2015 00:42)

 

Oh, and if anyone is around in Brno next week for DevConf, let us know and we can informally meet for ceremonious consumption of beer to celebrate the Plasma release ;)

Categories: FLOSS Project Planets

Plasmoid Tutorial 1

Wed, 2015-01-28 15:36

With Plasma 5.2 out I wanted to update the tutorials on how to write a Plasmoid, going through all of the steps from hello world, to using Plasma Components, to configuration, through to differing form factors.

It made sense to publish them as blog posts before I copy them to the wiki.

Behold, the first rather wordy blog post in a series of 7.

Intro

With Plasma5 we have embraced QML as our technology of choice. It is the only method of writing the UI for plasmoids.

Whilst Plasma4 offered a range of languages, QML is the only way of interacting with QtQuick, the technology that powers the Plasma Shell. By using this we get to provide a better developer experience, as there is a wealth of existing QML resources. Some of the best links are:

Before you get started with writing plasmoids, it is recommended to read through the basics of these and have a bit of playing to get used to the language.

Writing plasmoids is effectively the same as writing any other QtQuick QML, with a few extensions:

  • We have a specific folder structure with metadata for our plasmashell to be able to load the files.
  • We provide a massive collection of libraries that extend the QtQuick library both with functionality and widgets that blend in with the Plasma theme.
  • Special API exists to interact with the shell, allowing you to save configurations or set default sizes.

In this series of tutorials we'll go through the steps of writing a real plasmoid from scratch, using some of the plasma libraries.

By the end we should have a completely working, deployable RSS reader.

Hello world

Initial Folder Structure

Plasmoids follow the simple KPackage structure. In the top-most folder there should be a file titled metadata.desktop and a single folder called "contents".

Inside the contents folder we place all our QML, assets and other additional files. We split the contents into subdirectories: config, data and ui to make things easier.

In our tutorial we will be making an RSS reader so everything is named appropriately to that.

The basic directory structure should be as follows:

myrssplasmoid/metadata.desktop
myrssplasmoid/contents/ui/main.qml

metadata.desktop

[Desktop Entry]
Name=RSS Plasmoid
Comment=Shows RSS Feeds
Encoding=UTF-8
Icon=applications-rss
ServiceTypes=Plasma/Applet
Type=Service
X-KDE-PluginInfo-Author=David Edmundson
X-KDE-PluginInfo-Email=davidedmundson@kde.org
X-KDE-PluginInfo-Name=org.example.rssplasmoid
X-KDE-PluginInfo-License=LGPL
X-KDE-PluginInfo-Version=1.0
X-KDE-PluginInfo-Website=http://techbase.kde.org
X-Plasma-API=declarativeappletscript
X-Plasma-MainScript=ui/main.qml

Most of the fields here should be fairly self explanatory.
If in doubt, copy this and change the relevant fields.

main.qml

import QtQuick 2.0

Item {
    Text {
        anchors.centerIn: parent
        text: "Hello World!";
    }
}

Provided you have followed the recommended reading, this should be fairly self-explanatory: we have a text item in the middle of the plasmoid which says "Hello World!".

Over the next few tutorials this will get somewhat larger. We will also see some of the problems with this example: translations aren't implemented and the text won't match the Plasma theme colour.

Installation

From the directory above run

plasmapkg2 --install myrssplasmoid

It should now be visible in the plasmashell in the "Add widgets" option along with every other plasmoid.

We can then add it to our desktop or panel like any other installed plasmoid.

You will see that the border and handles are automatically added, and that the plasmoid is automatically sized to fit the implicit size of the text.

Next

In the next tutorial, we will cover getting data from external sources.

Categories: FLOSS Project Planets

Plasma 5.2 Released

Wed, 2015-01-28 15:07

Packages for the release of Plasma 5.2 are available for Kubuntu Plasma5 14.10 and our development release. You can get them from the Kubuntu Next Backports PPA for 14.10, users of the development release will get it as a regular update.

The 14.10 packages include updates to Qt 5.4 which will remove any Unity packages.

Categories: FLOSS Project Planets

Disabling downloadable fonts

Wed, 2015-01-28 14:13

We have a nice new style for planet.kde.org. I think it is generally an improvement over what we had, but sadly it forces the Oxygen font over my browser-selected font.

If you're like me and can't stand the Oxygen font being forced over the font you chose in your configuration, have a look at this article to see how to disable downloadable fonts.

Update: Unfortunately, if you do that you'll lose the K-logo on the left, because instead of an icon we're using a font to render it. So now I have to decide between the (for me) unreadable Oxygen font and having a broken K-logo at the top.

Categories: FLOSS Project Planets

Planet KDE Theme from Season of KDE

Wed, 2015-01-28 13:57

Season of KDE is KDE’s annual project to give helpers a more structured way to take part in KDE. It’s inspired by Summer of Code, of course.

Today I had the pleasure of launching the new Planet KDE website theme done by Ranveer Aggarwal. It looks very lovely and, importantly, makes the site a pleasure to browse on your phone. Everyone hug him, and do report any bugs to Bugzilla.


Categories: FLOSS Project Planets

Marble experience in GCI-2014

Wed, 2015-01-28 12:49

Hello, guys!

Last post was about my Google Code-in experience in general. This time I want to tell you about my work with Marble.

First of all, what is Marble? Marble is a virtual globe application which allows the user to choose among the Earth, the Moon, Venus, Mars and other planets to display as a 3-D model. It is free software under the terms of the GNU LGPL, developed by KDE for use on personal computers and smartphones.

(here is a screenshot of Marble)

So, what did I do for Marble during GCI-2014?

There were different kinds of tasks, such as porting different widgets and plugins; I did two such tasks. The tasks I liked most were about creating my own game based on MarbleWidget.

(here is a screenshot of my Marble game)

Another task I liked was about creating my own historical map for Marble (in the Mercator projection). It is based on a Baur map from 1870.

(here is a screenshot of my map run in Marble)

And the last three tasks were about creating my tour, which can take you to places all over the world. You can watch one here:

The last thing I want to say is: special thanks to Torsten Rahn, Abhinav Gangwar and Jonathan Riddell (not from the Marble team). These people are the best mentors ever!

Thanks for your attention; use Qt and love KDE!

Categories: FLOSS Project Planets

KDE: SoK Project Final Update

Tue, 2015-01-27 19:17

KDE Jenkins DSL Job


This will be the last post on this subject with the SoK tag, as the program comes to a close this week. I will, however, be continuing my efforts past the program dates. With that said…

I have been busy! Over the last couple of weeks I have worked out the job DSL from scratch in Java/Groovy for the job-dsl-plugin. This of course entailed dusting off the bits of programming I took in university and learning much more. My DSL takes a JSON config file, reads the variables in and proceeds to generate a full-fledged job in Jenkins. This makes job creation much simpler!

Due to the complexity of the task, and the extra effort put into getting Windows and OS X builds to actually build (beyond the scope of the task), Ben has been nice enough to extend my project beyond the initial SoK dates. I plan on continuing my work on this for as long as necessary, and will maintain it if they allow. This opportunity has been invaluable and I encourage anyone in the future who wants that extra experience to refine their skills to participate in KDE’s Season of KDE! You do not have to be a full-time student to apply; I graduated years ago.

My new skillset includes:

  • Java/Groovy
  • Python
  • Jenkins
  • Docker
  • Building Qt5 + apps on 3 platforms (Linux, Windows, OS X)
  • KDE infrastructure

To say the least it has been a wild ride!

Categories: FLOSS Project Planets

Star-Hopper for KStars

Tue, 2015-01-27 13:57

Part of my project for Season of KDE was to make a UI for the existing Star-Hopper feature in KStars. I recently got to finishing it and decided to document it.

The Star-Hopper is an amazing feature present in KStars which allows you to find a path between two points in the sky. It is very commonly used in astronomy. If you have a bright star as a reference and you want to find an object in its vicinity, you start from your reference star and trace a route to the destination traversing a sequence of stars/pattern of stars.

The existing Star-Hopper backend in KStars printed its results to the terminal, so I worked on developing a frontend to display them.

Here are steps on how to use the Star-Hopper feature in KStars :

1. Right click on your reference object on the Skymap to get a drop down menu. This object is your start point.

2. In the drop down menu, select “Starhop from here to” option.

3. A dotted line will appear. Move the mouse to your destination object and right click on it.

4. A dialog box requesting the FOV appears. If you have already selected FOV symbols, you will get a drop-down menu to choose among those FOV symbols. Otherwise you will be asked to enter the FOV in arcminutes.

5. Upon entering the FOV and clicking OK, you will either be presented with a dialog box containing the list of objects in the star-hop route, or with an error dialog box if the star-hop route couldn’t be computed (mostly because of a too-small FOV).

The dialog box has options to show details of the selected object, center it in the map, and go to the next object and center that in the map. It also gives directions to the currently selected object below the list of objects.

I hope this blog post helps in using the Star-Hopper feature. If you have any questions, feel free to drop into #kde-kstars.

Cheers! :)


Categories: FLOSS Project Planets

AppStream 0.8 released!

Tue, 2015-01-27 11:48

Yesterday I released version 0.8 of AppStream, the cross-distribution standard for software metadata that is currently used by GNOME-Software, Muon and Apper to display rich metadata about applications and other software components.

 What’s new?

The new release contains some tweaks to AppStream’s documentation, and extends the specification with a few more tags and refinements. For example, we now recommend sizes for screenshots. The recommended sizes are the ones GNOME-Software already uses today, and it is a good idea to ship those to make software centers look great, as other software centers are planning to use them as well. Normal sizes as well as sizes for HiDPI displays are defined. This change affects only the distribution-generated data; the upstream metadata is unaffected by this (the distro-specific metadata generator will resize the screenshots anyway).

Another addition to the spec is the introduction of an optional <source_pkgname/> tag, which holds the source package name the packages defined in <pkgname/> tags are built from. This is mainly for internal use by the distributor, e.g. it can decide to use this information to link to internal resources (like bugtrackers, package-watch etc.). It may also be used by software-center applications as additional information to group software components.

Furthermore, we introduced a <bundle/> tag for future use with third-party application installation solutions. The tag notifies a software installer about the presence of a third-party application bundle, and provides the necessary information on how to install it. In order to do that, the software center needs to support the respective installation solution. Currently, the Limba project and Xdg-App bundles are supported. For software managers, it is a good idea to implement support for third-party app installers as soon as the solutions are ready; both projects are currently being worked on heavily. The new tag is already used by Limba, which is the reason why Limba depends on the latest AppStream release.
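As a rough illustration, a distribution-generated component entry using the new tags might look something like the fragment below; the component and bundle names are made up, and the exact shape should be checked against the 0.8 specification:

<component type="desktop">
  <id>foobar.desktop</id>
  <name>FooBar</name>
  <pkgname>foobar</pkgname>
  <source_pkgname>foobar-src</source_pkgname>
  <bundle type="limba">foobar-1.0.2</bundle>
</component>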

How do I get it?

All AppStream libraries (libappstream, libappstream-qt and libappstream-glib) support the 0.8 specification in their latest versions, so in case you are using one of these, you don’t need to do anything. For Debian, the DEP-11 spec is being updated at the moment, and the changes will land in the DEP-11 tools soon.

Improve your metadata!

This call goes especially to many KDE projects! Getting good data is partly a task for the distributor, since packaging issues can result in incorrect or broken data, screenshots need to be properly resized, etc. However, flawed upstream data can also prevent software from being shown, since software with broken or missing data will not be incorporated into the distro's AppStream XML data file.

Richard Hughes of Fedora has created a nice overview of software failing to be included. You can see the failed list here; the data can be filtered by desktop environment etc. For KDE projects, a Comment= field is often missing in their .desktop files (or a <summary/> tag needs to be added to their AppStream upstream XML file). Keep in mind that you are not only helping Fedora by fixing these issues, but also all other distributions consuming the metadata you ship upstream.

For Debian, we will have a similar overview soon, since it is also a very helpful tool to find packaging issues.

If you want to get more information on how to improve your upstream metadata, and how new metadata should look like, take a look at the quickstart guide in the AppStream documentation.

Categories: FLOSS Project Planets

Plasma 5.2 for openSUSE? You bet!

Tue, 2015-01-27 10:27

The ever-amazing Plasma team from KDE has just put out a new release of Plasma. I won’t spend much time describing how big of an improvement it is – the release announcement at KDE has all the details needed to whet your appetite.

And of course, now it’s the turn of distributions to get out packages for the users at large.

This is also the case for openSUSE. The KDE:Frameworks5 repository hosts the new 5.2 goodness for released distributions (13.1 and 13.2) and Tumbleweed. Packages have also been submitted to Tumbleweed proper (pending legal review, so it will take some time).

Don’t forget the rule of thumb, in case you find problems: bugs in the packages should be directed towards the openSUSE bugzilla, while issues in the actual software should be reported to KDE. You can also discuss your experience on the KDE Community Forums.

Categories: FLOSS Project Planets

Why screen lockers on X11 cannot be secure

Tue, 2015-01-27 06:49

Today we released Plasma 5.2 and this new release comes with two fixes for security vulnerabilities in our screen locker implementation. As I found, exploited, reported and fixed these vulnerabilities I decided to put them a little bit into context.

The first vulnerability concerns our QtQuick user interface for the lock screen. Through the Look and Feel package it was possible to send the login information to a remote location. That’s pretty bad but luckily also only a theoretical problem: we have not yet implemented a way to install new Look and Feel packages from the Internet. So we found the issue before any harm was done.

The second vulnerability is more interesting as it is heavily related to the usage of X11 by the screen locker. To put this vulnerability into context I want to discuss screen lockers on X11 in general. In a previous post I explained that a screen locker has two primary tasks:

  1. Blocking input devices, so that an attacker cannot interact with the running session
  2. Blanking the screen to prevent private information being leaked

From the first requirement we can also derive a requirement that no application should get the input events except the lock screen and that the screen gets locked after a defined idle time. And from the second requirement we can derive that no application should have access to any screen content while the screen is being locked.

With these extended requirements we are already getting into areas where we cannot have a secure screen locker on X11. X11 is too old and too insecure to make it possible to fulfill the requirements. Why is that the case?

X11 on a protocol level doesn’t know anything of screen lockers. This means there is no privileged process which acts as the one and only screen locker. No, a screen locker is just an X11 client like any other (remote or local) X11 client connected to the same X server. This means the screen locker can only use the core functionality available to “emulate” screen locking. Also the X server doesn’t know that the screen is locked as it doesn’t understand the concept. If the screen locker can only use core functionality to emulate screen locking then any other client can do the same and prevent the screen locker from locking the screen, can’t it? And yes that is the case: opening a context menu on any window prevents the screen locker from activating.

That’s quite a bummer: any process connected to the X server can block the screen locker. Even more it could fake your screen locker. How hard would that be? Well I asked that question myself and needed about half an hour to implement an application which looks and behaves like the screen locker provided by Plasma 5. This is so trivial that I don’t see a point in not sharing the code:

#include <QGuiApplication>
#include <QQuickView>
#include <QQmlContext>
#include <QScreen>
#include <QStandardPaths>
#include <QtQml>

class Sessions : public QObject
{
    Q_OBJECT
    Q_PROPERTY(bool startNewSessionSupported READ trueVal CONSTANT)
    Q_PROPERTY(bool switchUserSupported READ trueVal CONSTANT)
public:
    explicit Sessions(QObject *parent = 0) : QObject(parent) {}
    bool trueVal() const {
        return true;
    }
};

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);
    const QString file = QStandardPaths::locate(QStandardPaths::GenericDataLocation,
        QStringLiteral("plasma/look-and-feel/org.kde.breeze.desktop/contents/lockscreen/LockScreen.qml"));
    qmlRegisterType<Sessions>("org.kde.kscreenlocker", 1, 0, "Sessions");
    QQuickView view;
    QQmlContext *c = view.engine()->rootContext();
    c->setContextProperty(QStringLiteral("kscreenlocker_userName"), QStringLiteral("Martin Graesslin"));
    c->setContextProperty(QStringLiteral("kscreenlocker_userImage"), QImage());
    view.setFlags(Qt::BypassWindowManagerHint);
    view.setResizeMode(QQuickView::SizeRootObjectToView);
    view.setSource(QUrl::fromLocalFile(file));
    view.show();
    view.setGeometry(view.screen()->geometry());
    view.setKeyboardGrabEnabled(true);
    view.setMouseGrabEnabled(true);
    return app.exec();
}

#include "main.moc"

This looks like and behaves like the real screen locker, but it isn’t. A user has no chance to recognize that this is not the real locker. Now if it’s that simple to replace the screen locker why should anyone go a complicated way to attack the lock screen? At least I wouldn’t.

And is there nothing which could be done to protect the real locker? Well, obviously a good idea would be to mark the one and only screen locker as authentic. But how do we do that in a secure way on X11? We cannot, e.g., show a specific user-selected image. This conflicts with another problem of screen lockers on X11: it’s not possible to prevent other clients from grabbing the screen content. So whatever the screen locker displays is also available to all other X11 clients. The window manager cannot help either, e.g. by preventing windows from opening fullscreen; as can be seen in the code fragment above, it’s possible to bypass the window manager entirely. A built-in feature of X11.
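To illustrate how trivial screen grabbing is, here is a minimal sketch of my own (not code from the original post) that captures whatever is currently on screen using nothing but public Qt API; on X11 nothing stops an unprivileged client from doing this while the screen is locked:

#include <QGuiApplication>
#include <QPixmap>
#include <QScreen>

int main(int argc, char **argv)
{
    QGuiApplication app(argc, argv);
    // On X11, grabbing window id 0 captures the whole root window,
    // i.e. everything currently shown on the screen.
    const QPixmap grab = QGuiApplication::primaryScreen()->grabWindow(0);
    grab.save(QStringLiteral("screen.png"));
    return 0;
}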

Many of these issues could be considered non-problematic under the old pragma of “if it runs, it’s trusted”. While I personally disagree, it just doesn’t hold for X11. If only clients of one user were connected to the X server, one could say so. But X11 allows clients from other users and even remote clients. And this opens a completely new problem scope. Whenever you use ssh -X you open up your local system to remote attack vectors. If you don’t control the remote side, it could mean that the client you start is modified in a way that prevents your screen from locking or installs a fake locker. I know that network transparency is a feature many users love, but it’s just a security nightmare. Don’t use it!

Overall we see that attacking a screen locker or preventing it from activating is really trivial on X11. That’s an inherent problem of the architecture, and no implementation can solve it, no matter what the authors tell you about how secure it is. Compared to these basic attack vectors, the vulnerability I found is rather obscure, and exploiting it takes a considerable amount of understanding of how X11 works.

Nevertheless we fixed the issue. And interestingly, I chose to use the technology which will solve all of those problems: Wayland. While we don’t use Wayland as the windowing system, we use a custom domain-specific Wayland-based protocol for the communication between the parts of our screen locker architecture. This uses the new libraries developed for later use in kwin_wayland.

As we are speaking of Wayland: how will Wayland improve the situation? In the case of Plasma, the screen locker daemon will be moved from ksmserver to kwin, so that the compositor has more control over it. Screen locking becomes a dedicated mode supported by the compositor. Whether a context menu is open or not doesn’t matter any more; the screen can be locked. The compositor also controls input events: if it knows that the screen is locked, it can ensure that no input goes to the wrong client. Last but not least, the compositor controls taking screenshots and can thus prevent clients from grabbing the output of the lock screen.

Categories: FLOSS Project Planets

Richard 'RichiH' Hartmann: KDE battery monitor

Sun, 2015-01-25 15:44

Dear lazyweb,

using a ThinkPad X1 Carbon with Debian unstable and KDE 4.14.2, I have not had battery warnings for a few weeks, now.

The battery status can be read out via acpi -V as well as via the KDE widget. Hibernation via systemctl hibernate works as well.

What does not work is the warning when my battery is low, or automagic hibernation when shutting the lid or when the battery level is critical.

From what I gather, something in the communication between upower and KDE broke down, but I can't find what it is. I have also been told that Cinnamon is affected as well, so this seems to be a more general problem.

Sadly, neither I nor anyone else who's affected has been able to fix this.

So, dear lazyweb, please help.

In loosely related news, this old status is still valid. UMTS is stable-ish now but even though I saved the SIM's PIN, KDE always displays a "SIM PIN unlock request" prompt after booting or hibernating. Once I enter that PIN, systemd tells me that a system policy prevents the change and wants my user password. If anyone knows how to get rid of that, I would also appreciate any pointers.

Categories: FLOSS Project Planets