FLOSS Project Planets

Code Drop: Getting Started Testing Drush Commands

Planet Drupal - 2 hours 27 min ago

I've recently been rewriting the Drush Migrate Manifest command for Drupal 8, and it was my first introduction to writing tests for Drush commands. The process was less painful than I had initially imagined, but not without a few caveats along the way.

Background

Drush tests use PHPUnit as the test runner and testing framework, which offers a kind of familiarity to many developers, especially anyone involved in Drupal 8. Drush uses the name "Unish" for its testing namespace and base test classes. I've no idea where "Unish" comes from, but here's a definition.

Test Skeleton

Create your test within the tests folder, using lowerCamelCase for both the file name and the class name.

Categories: FLOSS Project Planets

Ian Wienand: rstdiary

Planet Debian - Thu, 2014-11-27 21:15

I find it very useful to spend 5 minutes a day keeping a small log of what was worked on, major bugs or reviews, and a general short status report. It makes rolling up into a bigger status report easier when required, and is handy as a reference before you go into meetings, etc.

I was happily using an etherpad page until I couldn't save any more revisions and the page got too long and started giving javascript timeouts. For a replacement I wanted a single file as input with no boilerplate to aid in back-referencing and adding entries quickly. It should be formatted to be future-proof, as well as being emacs, makefile and git friendly. Output should be web-based so I can refer to it easily and point people at it when required, but it just has to be rsynced to public_html with zero setup.

rstdiary will take a flat RST-based input file and chunk it into some reasonable-looking static HTML that looks something like this. It's split by month, with some minimal navigation. Copy the output directory somewhere and you are done.
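The month-chunking itself is simple to sketch. Here is a rough, stdlib-only Python illustration of the idea (this is not rstdiary's actual code; the entry format and function name are assumptions for the sake of the example):

```python
from collections import OrderedDict

def chunk_by_month(entries):
    """Group dated diary entries by month, preserving input order.

    entries: iterable of (date, text) pairs, date as ISO "YYYY-MM-DD".
    Returns an OrderedDict mapping "YYYY-MM" -> list of (date, text).
    """
    months = OrderedDict()
    for date, text in entries:
        month = date[:7]  # "YYYY-MM"
        months.setdefault(month, []).append((date, text))
    return months

entries = [
    ("2014-11-26", "reviewed gerrit changes"),
    ("2014-11-27", "fixed bug #1234"),
    ("2014-12-01", "wrote status report"),
]
chunks = chunk_by_month(entries)
print(list(chunks))  # ['2014-11', '2014-12']
```

Each month's list would then be rendered to its own HTML page, with the navigation linking the months together.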

It might also serve as a small example of parsing and converting RST nodes where it does the chunking; unfortunately the official documentation on that is "to be completed" and I couldn't find anything like a canonical example, so I gathered what I could from looking at the source of the transformation stuff. As the license says, the software is provided "as is" without warranty!

So if you've been thinking "I should keep a short daily journal in a flat-file and publish it to a web-server but I can't find any software to do just that" you now have one less excuse.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-11-27

Planet Apache - Thu, 2014-11-27 18:58
  • Consul case study from Hootsuite

    Hootsuite used Consul for distributed configuration, specifically dark-launch feature flags, with great results: ‘Trying out bleeding edge software can be a risky proposition, but in the case of Consul, we’ve found it to be a solid system that works basically as described and was easy to get up and running. We managed to go from initial investigations to production within a month. The value was immediately obvious after looking into the key-value store combined with the events system and it’s DNS features and each of these has worked how we expected. Overall it has been fun to work with and has worked well and based on the initial work we have done with the Dark Launching system we’re feeling confident in Consul’s operation and are looking forward to expanding the scope of it’s use.’

    (tags: consul dark-launches feature-flags configuration distributed hootsuite notification)

Categories: FLOSS Project Planets

How to contribute as a non-developer and the KDE-CI meeting date is set

Planet KDE - Thu, 2014-11-27 18:05

First, about the upcoming IRC meeting about KDE's Continuous Integration (CI) system: the Doodle resulted in the 2nd of December as our meeting day. We'll see you in #kde-devel at 20.00 (8pm) CET (UTC+1). See this notepad for the agenda and more.

And now about the way you can contribute to KDE even though you can’t program:

  • Do you like to write thrilling articles about KDE and its software?
  • Do you like to interview people?
  • Are you a native English speaker who spots writing errors at first sight?
  • Would you like to take care of regular and repetitive jobs like e.g. the beta release announcements?
  • Do you know something about promo work and marketing?

Then we want you! Come to our mailing list or ping me on IRC in #kde-promo and tell us what you'd like to work on, what you'd like to improve, and what your ideas are.

As a first task you can read the Promo and Dot pages. As these are wiki pages and might be outdated, please fix them, and ask on the kde-promo mailing list if you're not sure.

Categories: FLOSS Project Planets

Niels Thykier: Volume of debian-release mailing list

Planet Debian - Thu, 2014-11-27 16:38

Page 1 of 5

To be honest, I do not know how many mails it shows "per page" (though I assume it is a fixed number). For comparison, I found the month on debian-devel@l.d.o with the highest volume in the past two years: May 2013, with "Page 1 of 4".

I hope you will forgive us if we are a bit terse in our replies or slow to reply. We simply have a lot to deal with. :)

 


Categories: FLOSS Project Planets

.VDMi/Blog: I went to Drupal 7.33 and all I got was a slow site

Planet Drupal - Thu, 2014-11-27 16:29
So, you just upgraded your site(s) to Drupal >= 7.33, and everything seemed fine in your tests. You deployed the new release, and after a while you noticed that your site isn't as fast as before. It's actually slowing down everything on your server. You started Googling and ended up on this blog post. Sound like your story? Read on!

I spent the last two hours figuring this out, and decided it would be best to write it up right away while it's fresh in my memory. TL;DR at the bottom of this post.

We upgraded to Drupal 7.34 recently and thought everything was fine. The release went through three different environments before being deployed to live, and no actual issues were found.

After deployment to live, we got some reports that the site was responding slowly. We didn't directly link it to the 7.34 upgrade; after these reports I actually diffed 7.32 and 7.34 to see what had changed, and did not see anything suspicious that could cause this.

We had to investigate after a while, as there was no sign of improvement: the server's CPU was hitting 100%, 24/7. New Relic monitoring told us about many calls to Drupal's file_scan_directory() function. I logged the calls with the following snippet:

// Temporarily placed inside file_scan_directory() to count calls per request.
static $count;
if (!isset($count)) {
  $count = 0;
}
$count++;
drupal_debug($count . PHP_EOL);

The count actually went up to 700 for every request (it's quite a large project, plus file_scan_directory() is recursive).
When I printed the debug_backtrace(), I saw that the calls were coming from drupal_get_filename().
Looking at the function arguments, Drupal was looking for an imagecache_actions file. Why?! And why on every request? Doesn't it cache these records in the registry?!

Yes it does! It appeared the imagecache_actions module had a typo in it (patch here):

module_load_include('inc', 'imagcache_actions', 'image_overlay.inc');

This should be:

module_load_include('inc', 'imagecache_actions', 'image_overlay.inc');

This would not have been an issue previously, but 7.33 introduced a change.
Pre-7.33:

$file = db_query("SELECT filename FROM {system} WHERE name = :name AND type = :type", array(':name' => $name, ':type' => $type))->fetchField();
if (file_exists(DRUPAL_ROOT . '/' . $file)) {
  $files[$type][$name] = $file;
}

7.33 or higher:

$file = db_query("SELECT filename FROM {system} WHERE name = :name AND type = :type", array(':name' => $name, ':type' => $type))->fetchField();
if ($file !== FALSE && file_exists(DRUPAL_ROOT . '/' . $file)) {
  $files[$type][$name] = $file;
}

Before 7.33, Drupal would try to find the record in the system table and not find it, so $file would be FALSE (that is what fetchField() returns when there is no row). In the concatenation DRUPAL_ROOT . '/' . $file, FALSE casts to an empty string, so the resulting path is effectively DRUPAL_ROOT . '/'. Obviously DRUPAL_ROOT exists, so file_exists() returns TRUE. Drupal would put the record in the $files array and continue with what it was doing.
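The degenerate path is easy to reproduce outside Drupal. A quick Python illustration of the same filesystem behavior (this is not Drupal code; the variable names are just stand-ins for DRUPAL_ROOT and the failed lookup):

```python
import os

root = os.getcwd()  # stand-in for DRUPAL_ROOT
missing = ""        # the database lookup returned no filename

# The concatenated path collapses to the root directory itself,
# which always exists...
print(os.path.exists(root + "/" + missing))             # True
# ...whereas an actually missing file does not.
print(os.path.exists(root + "/" + "no_such_file.inc"))  # False
```

This is exactly why the old code happily cached a bogus "found" result instead of failing.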

Because 7.33 and higher checks $file against FALSE, it will not add any record to the $files array. This causes it to fall through to the file discovery routine:

if (!isset($files[$type][$name])) {
  // ... Some code
  $matches = drupal_system_listing("/^" . DRUPAL_PHP_FUNCTION_PATTERN . "\.$extension$/", $dir, 'name', 0);
  foreach ($matches as $matched_name => $file) {
    $files[$type][$matched_name] = $file->uri;
  }
}

This code will scan your Drupal installation for the given file. It will not find it and will eventually continue, but it will execute the file search on EVERY request that executes the module_load_include() call.

While our issue was in the imagecache_actions module, yours might be in any module (even a custom one) that does a wrong module_load_include().
It's very hard to find this out yourself. You can edit includes/bootstrap.inc to write some info away to /tmp/drupal_debug.txt: add the following code after line 866:

else {
  drupal_debug('Missing file ' . $type . ' ' . $name . ' ' . DRUPAL_ROOT . '/' . $file . PHP_EOL);
}

TL;DR: an issue in imagecache_actions combined with an upgrade to Drupal >= 7.33 killed the performance of our site. The patch for imagecache_actions is here. The core issue/patch that caused it is here.

Categories: FLOSS Project Planets

Stefano Zacchiroli: CTTE nomination

Planet Debian - Thu, 2014-11-27 15:43

Apparently, enough fellow developers have been foolish enough to nominate me as a prospective member of the Debian Technical Committee (CTTE) that I've been approached to formally accept or decline the nomination. (Accepted nominees would then go through a selection process and possibly be proposed to the DPL for appointment.)

I'm honored by the nominations and I thank the fellow developers that have thrown my name in the hat. But I've respectfully declined the nomination. Given my current involvement in an ongoing attempt to introduce a maximum term limit for CTTE membership, it would have been highly inappropriate for me to accept the nomination at this time.

I have no doubt that the current CTTE and the DPL will fill the empty seats with worthy project members.

Categories: FLOSS Project Planets

Blair Wadman: Eleven tips to keep Drupal up to date with security releases

Planet Drupal - Thu, 2014-11-27 15:36

Keeping your Drupal site up to date has always been of critical importance to ensure it remains secure. Last month's announcement of a SQL injection vulnerability, and the subsequent announcement of automated attacks within 7 hours, caused widespread panic across much of the Drupal community.

Tags: Drupal, Planet Drupal
Categories: FLOSS Project Planets

Enable Contrib and Epel7 repository in your system

LinuxPlanet - Thu, 2014-11-27 15:01

If you have already installed it, or are planning to install it in the near future, you need to enable at least two repositories to get hold of more packages: the Contrib and EPEL repositories.

They will give you more packages that you can install with conary.

Read more at: Repository

These will be enabled by default later, when the installation notes and installation guides get updated. So if you have already installed conary with CentOS, make sure to enable them.

 

Categories: FLOSS Project Planets

Andre Roberge: Practical Python and OpenCV: conclusion of the review

Planet Python - Thu, 2014-11-27 14:55
I own a lot of programming books and ebooks; with the exception of the Python Cookbook (1st and 2nd editions) and Code Complete, I don't think that I've come close to reading an entire book from cover to cover.  I write programs for fun, not for a living, and I almost never write reviews on my blog.  So why did I write one this time?

A while ago, I entered my email address to receive 10 emails meant as a crash course on OpenCV, using Python (of course), provided by Adrian Rosebrock. The content of the emails and the various posts they linked to intrigued me. So I decided to fork out some money and get the premium bundle, which included an introductory book (reviewed in part 1) and a Case Studies book (partly reviewed in part 3), both of which include code samples and (as part of that package) free updates to future versions. Also included in the bundle were an Ubuntu VirtualBox image (reviewed in part 2) and a commitment by the author to respond quickly to emails - a commitment that I have severely tested, with no complaints.

As I mentioned, I program for fun, and I had fun going through the material covered in Practical Python and OpenCV.  I've also read through most of both books and tried a majority of the examples - something that is really rare for me.  On that basis alone, I thought it deserved a review.

Am I 100% satisfied with the premium bundle I got, with no idea about how it could be improved upon? If you read the three previous parts, you know that the answer is no. I have some slightly idiosyncratic tastes and tend to be blunt (I usually prefer to say "honest") in my assessments.

If I were 30 years younger, I might seriously consider getting into computer programming as a career and learn as much as I could about Machine Learning, Computer Vision and other such topics.  As a starting point, I would recommend to my younger self to go through the material covered in Practical Python and OpenCV, read the many interesting posts on Adrian Rosebrock's blog, as well as the Python tutorials on the OpenCV site itself.  I would probably recommend to my younger self to get just the Case Studies bundle (not including the Ubuntu VirtualBox): my younger self would have been too stubborn/self-reliant to feel like asking questions to the author and would have liked to install things on his computer in his own way.

My old self still feels the same way sometimes ...
Categories: FLOSS Project Planets

Kexi High-DPI Report Printouts

Planet KDE - Thu, 2014-11-27 14:53

Kexi currently generates report printouts (and PDFs) using the screen resolution.
This is fine for general text, but if you have a database full of images it is less than ideal.

I think this looks better:

I'll keep comments open for a few days, but I get a lot of spam, so they're generally disabled on my blog until I can stop it; neither Captcha nor Mollom has worked out. After that, send comments to adam_at_piggz_dot_co_dot_uk!

Categories: FLOSS Project Planets

Andy Wingo: scheme workshop 2014

GNU Planet! - Thu, 2014-11-27 12:48

I just got back from the US, and after sleeping for 14 hours straight I'm in a position to type about stuff again. So welcome back to the solipsism, France and internet! It is good to see you on a properly-sized monitor again.

I had the enormously pleasurable and flattering experience of being invited to keynote this year's Scheme Workshop last week in DC. Thanks to John Clements, Jason Hemann, and the rest of the committee for making it a lovely experience.

My talk was on what Scheme can learn from JavaScript, informed by my work in JS implementations over the past few years; you can download the slides as a PDF. I managed to record audio, so here goes nothing:


55 minutes, vorbis or mp3

It helps to follow along with the slides. Some day I'll augment my slide-rendering stuff to synchronize a sequence of SVGs with audio, but not today :)

The invitation to speak meant a lot to me, for complicated reasons. See, Scheme was born out of academic research labs, and to a large extent that's been its spiritual core for the last 40 years. My way to the temple was as a money-changer, though. While working as a teacher in northern Namibia in the early 2000s, fleeing my degree in nuclear engineering, trying to figure out some future life for myself, for some reason I was recording all of my expenses in Gnucash. Like, all of them, petty cash and all. 50 cents for a fat-cake, that kind of thing.

I got to thinking "you know, I bet I don't spend any money on Tuesdays." See, there was nothing really to spend money on in the village besides fat cakes and boiled eggs, and I didn't go into town to buy things except on weekends or later in the week. So I thought that it would be neat to represent that as a chart. Gnucash didn't have such a chart but I knew that they were implemented in Guile, as part of this wave of Scheme consciousness that swept the GNU project in the nineties, and that I should in theory be able to write it myself.

Problem was, I also didn't have internet in the village, at least then, and I didn't know Scheme and I didn't really know Gnucash. I think what I ended up doing was just monkey-typing out something that looked like the rest of the code, getting terrible errors but hey, it eventually worked. I submitted the code, many years ago now, some of the worst code you'll read today, but they did end up incorporating it into Gnucash and to my knowledge that report is still there.

I got more into programming, but still through the back door, so to speak. I had done some free software work before going to Namibia, on GStreamer, and wanted to build a programmable modular synthesizer with it. I read about Supercollider, and decided I wanted to do something like that but with the "unit generators" defined in GStreamer and orchestrated with Scheme. If I knew then that Scheme could be fast, I probably would have started on an entirely different course of things, but that did at least result in gainful employment doing unrelated GStreamer things, if not a synthesizer.

Scheme became my dominant language for writing programs. It was fun, and the need to re-implement a bunch of things wasn't a barrier at all -- rather a fun challenge. After a while, though, speed was becoming a problem. It became apparent that the only way to speed up Guile would be to replace its AST interpreter with a compiler. Thing is, I didn't know how to write one! Fortunately there was previous work by Keisuke Nishida, jetsam from the nineties wave of Scheme consciousness. I read and read that code, mechanically monkey-typed it into compilation, and slowly reworked it into Guile itself. In the end, around 2009, Guile was faster and I found myself its co-maintainer to boot.

Scheme has been a back door for me for work, too. I randomly met Kwindla Hultman-Kramer in Namibia, and we found Scheme to be a common interest. Some four or five years later I ended up working for him with the great folks at Oblong. As my interest in compilers grew, and it grew as I learned more about Scheme, I wanted something closer there, and that's what I've been doing in Igalia for the last few years. My first contact there was a former Common Lisp person, and since then many contacts I've had in the JS implementation world have been former Schemers.

So it was a delight when the invitation came to speak (keynote, no less!) at the Scheme Workshop, behind the altar instead of in the foyer.

I think it's clear by now that Scheme as a language and a community isn't moving as fast now as it was in 2000 or even 2005. That's good because it reflects a certain maturity, and makes the lore of the tribe easier to digest, but bad in that people tend to ossify and focus on past achievements rather than future possibility. Ehud Lamm quoted Nietzsche earlier today on Twitter:

By searching out origins, one becomes a crab. The historian looks backward; eventually he also believes backward.

So it is with Scheme and Schemers, to an extent. I hope my talk at the conference inspires some young Schemer to make an adaptively optimized Scheme, or to solve the self-hosted adaptive optimization problem. Anyway, as users I think we should end the era of contorting our code to please compilers. Of course some discretion in this area is always necessary but there's little excuse for actively bad code.

Happy hacking with Scheme, and au revoir!

Categories: FLOSS Project Planets

Appnovation Technologies: Searching and attaching images to content

Planet Drupal - Thu, 2014-11-27 12:00

Because of its ability to extend the core platform, Drupal has become a popular CMS/Framework for many large media and publishing companies.

Categories: FLOSS Project Planets

Nick Kew: ApacheCon 2014

Planet Apache - Thu, 2014-11-27 11:48

I’ve already posted from ApacheCon about my favourable first impression.  I’m happy to say my comments about the fantastic city and hotel have survived the week intact: I was as impressed at the end of the week as at the start.  Even the weather improved through the week, so in the second half – when the conference schedule was less intense – I could go out without getting wet.

The main conference sessions ran Monday to Wednesday, with all-day schedules and social events in the evenings. Thursday was an all-day BarCamp, though I skipped the morning in favour of a bit of touristing in the best weather of the week. Thursday and Friday also saw the related CloudStack event. I'm not going to give a detailed account of my week. I attended a mix of talks: a couple on familiar subjects to support and heckle the speakers, new and unfamiliar material to educate myself on topics of interest, and – not least – inspirational talks from Apache's gurus such as Bertrand.

Socially it had a very good feel: as ever, I renewed acquaintance with old friends, met new ones, and put faces to names hitherto seen only online. The social scene was no doubt helped not just by the three social evenings laid on, but also by the fact that all meals were provided, encouraging us to stay around the hotel, and that the weather discouraged going elsewhere for the first half of the week. The one thing missing was a keysigning party. Note to self: organise it myself for future conferences if no one else gets there first!

I’ve returned home much refreshed and with some ideas relevant to my work, and an intention to revitalise my Apache work – where I need to cut my involvement down to my three core projects and then give those the time&effort they deserve but which have been sadly lacking of late.  Also grossly overfed and bloated.  Now I just have to sustain that high, against the adversity of the darkest time of year and temperatures that encourage staying in bed. :o

Huge thanks to DrBacchus and the team for making it all happen!


Categories: FLOSS Project Planets

Oliver Davies: Include environment-specific settings files on Pantheon

Planet Drupal - Thu, 2014-11-27 11:24

I was recently doing some work on a site hosted on Pantheon and came across an issue, for which part of the suggested fix was to ensure that the $base_url variable was explicitly defined within settings.php (this is also best practice on all Drupal sites).

The recommended way was to use a switch() statement based on Pantheon's environment variable.

Tags:
Categories: FLOSS Project Planets

Wes Mason: Verifying post-deploy connections with conn-check

Planet Python - Thu, 2014-11-27 10:41
The Problem

Deployments of a service have a number of different network dependencies that require verification:
  • Connections between services (e.g. app -> postgresql, are ports unblocked at the firewall(s)? If talking to a data centre instance do we have a route?)
  • External services (e.g. webservices such as S3)
  • Verification that the services on the other end are real (you're actually talking to MongoDB or rabbitmq via AMQP, not just another TCP service running on those ports)
  • Verification of authentication
Although many of these can be solved with smoke tests, it's not always immediately obvious that there is a problem, or what the problem is.

Our solution

conn-check is a tool that started life inside Ubuntu One to verify connections between services, and to S3 etc., post-deploy. During a mini-sprint in Uruguay a few months ago we separated conn-check into its own package and open sourced it. Since then we've been improving it and using it to verify deploys of our services (such as login.ubuntu.com).

conn-check takes a simple YAML config defining a list of checks to perform (udp, tcp, tls, http, amqp, postgres, mongodb, redis, memcache), performs those checks asynchronously using Twisted's thread pool, and outputs the results in the Nagios check output format, so conn-check can be run regularly as a Nagios check to continually verify network status between services (and alert on change, e.g. out-of-band firewall changes).

Automatically generating configs

We have also released a separate package called conn-check-configs, which provides tools for automatically generating conn-check YAML configs from a source such as a Django settings module or a Juju environment status (we're currently using the Django settings export, with the Juju env export being tested in a branch).

Getting conn-check

You can get conn-check by:
  • Installing from PyPI: pip install conn-check (You can sudo this to get it system installed, but personally I'd put it in a virtualenv or use pipsi.)
  • Installing with apt-get from my PPA:
    sudo add-apt-repository ppa:wesmason/conn-check
    sudo apt-get update
    sudo apt-get install python-conn-check
  • If you use Juju to manage your infrastructure/deployments, then you can use our charm to deploy conn-check for your service (and even add Nagios checks automatically via the nrpe relation).
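The simplest of the check types listed above - a plain TCP connect - can be sketched with nothing but the Python stdlib. This is only an illustration of the concept, not conn-check's actual Twisted-based code, and check_tcp is a made-up helper name:

```python
import socket

def check_tcp(host, port, timeout=5.0):
    """Return True if a TCP connection to host:port can be established."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused connections and timeouts
        return False

# e.g. check_tcp("localhost", 5432) to probe a local postgres port
```

The real tool goes further: its postgres, amqp, redis, etc. checks speak enough of each protocol to confirm that the right service is answering, which a bare TCP connect cannot.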
Categories: FLOSS Project Planets

PyCharm: Announcing the PyCharm 4.0.1 release update

Planet Python - Thu, 2014-11-27 10:15

Just one week after the PyCharm 4 release, we are eager to announce that the PyCharm 4.0.1 bug-fix update has been uploaded and is now available from the download page. It will also be available shortly as a patch update from within the IDE (from PyCharm 4.0 and 4.0.1 RC only).

As a recap, some notable highlights of this release include: a fix for a rare but severe bug that caused infinite indexing, a fix for a settings-import bug, code completion and inspection fixes, a fix for matplotlib support, a fix for remote interpreter support, and some Django support fixes.

For further details on the bug fixes and changes, please consult the Release Notes.

As usual, please report any problems you find in the issue tracker.
If you would like to discuss your experiences with PyCharm, we look forward to your feedback on our public PyCharm forum and Twitter.

Develop with Pleasure!
-PyCharm team

Categories: FLOSS Project Planets

Piergiorgio Lucidi: Chance to Win Free Copies of Learning Alfresco Web Scripts

Planet Apache - Thu, 2014-11-27 10:14

I have teamed up with Packt Publishing to give away a few copies of the most recent book I have contributed to. The book is now out and available for purchase!

Categories: FLOSS Project Planets

Workshop at CERN

Planet KDE - Thu, 2014-11-27 10:00

Last week, Thomas, Christian, and I attended a workshop at CERN, the European Organization for Nuclear Research in Geneva, Switzerland.

CERN is a very inspiring place, attracting intelligent people from all over the world to uncover the secrets of our existence. I felt honored to be at the place where, for example, the World Wide Web was invented.

The event, called the Workshop on Cloud Services for File Synchronisation and Sharing, was hosted by the CERN IT department. There were around 100 attendees.

I was giving a talk called The File Sync Algorithm of the ownCloud Desktop Clients, which was very well received. If you happen to be interested in the sync algorithm we’re using, the slides are a nice starting point.

What amazed me most was the great atmosphere and the very positive attitude towards ownCloud. Many representatives of educational organizations using ownCloud whom I talked to were very happy with the product from a technical point of view (even though there are problems here and there). A lot of interesting setups and environments were explained, which also showcased ownCloud's flexibility in integrating into existing structures.

The attendees of the workshop also pointed out the importance of the fact that ownCloud is open source. Non-free software does not have a chance at all in that market - that was the very clear statement in the final discussion session of the workshop.

The keynote was given by Prof. Benjamin Pierce from the University of Pennsylvania, with the title Principles of Synchronization. He is the lead author of Unison, another open source sync project. Its sync engine is of very high quality, but it is no longer "up-to-date software", as he said.

I had the pleasure to spend quite some time with him to discuss syncing in general and our sync algorithms in particular, amongst other interesting things.

Atlas Detectors

As part of his work, he uses a tool called QuickCheck to do very advanced testing. One night we were sitting in the cantina there, hacking to adapt the testing to the ownCloud client and server. The first results were very promising: for example, we revealed a "problem" in our sync core that I knew of, which formally is a sync error, yet is very, very unlikely to happen and was thus accepted for the sake of an easier algorithm. It was impressive how quickly the testing method identified that problem.
I'd like to follow up on the testing method.

Furthermore, we met a whole variety of other interesting people: backend developers, operators of huge datasets (100 petabytes), the director of CERN IT, a maintainer of Scientific Linux, and others.

We also had the chance to visit the ATLAS experiment; it is 100 meters beneath the surface, and huge. That is where the particles are accelerated, and it was great to have the chance to visit it.

The trip was a great experience and very motivating for me, and I think it should be for all of us doing ownCloud. Frank really hit a nerve when he seeded the idea, and we have all made a nice product of it so far.

Let's do more of this cool stuff!


Categories: FLOSS Project Planets

Encrypt Everything: Encrypt data using GPG and save passwords

LinuxPlanet - Thu, 2014-11-27 10:00
Data security is an important concern these days, and encryption is a very powerful tool to secure data. In my previous post I talked about how to encrypt a disk. Now we are going to talk about how to encrypt files using GNU Privacy Guard (GPG).

GPG uses public-key cryptography. This means that instead of having one key to encrypt and decrypt, there are two keys. One of these keys can be publicly shared and hence is known as the public key. The other key is to be kept secret and is known as the private key. Anything encrypted with the public key can only be decrypted with the private key.
How to encrypt files?

Assuming a scenario where user "test" wants to send an encrypted file to me, the user just has to find my public key, encrypt the data, and send it to me; I will then be able to decrypt the file using my private key and obtain the data. Note that user "test" doesn't need to have GPG keys generated in order to encrypt and send data to me.

Step 1: Let us create a text file which we'll encrypt:

test$ echo "This is a secret message." > secret.txt

Step 2: User "test" needs to find my keys. There are many public servers where one can share their public key in case someone else wants to encrypt data for them. One such server is run by MIT at http://pgp.mit.edu.

test$ gpg --keyserver pgp.mit.edu --search-keys aditya@adityapatawari.com

Step 3: Once the user obtains my public key, encrypting the data is really easy:

test$ gpg --output secret.txt.gpg --encrypt --recipient aditya@adityapatawari.com secret.txt

The command above will create an encrypted file named secret.txt.gpg which can be shared via email or any other means. Once I get the encrypted file, I can decrypt it using my private key:

aditya$ gpg --output secret.txt --decrypt secret.txt.gpg

How to create GPG keys to receive data?

Now assume a scenario where the "test" user wants to create a set of GPG keys in order to share a public key and receive encrypted data.

Step 1: Generate a key pair. The command will present you with some options (stick to the defaults if you are not sure) and ask for some data like your name and email address:

test$ gpg --gen-key

Step 2: Check the keys:

test$ gpg --list-secret-keys
/home/test/.gnupg/secring.gpg
-----------------------------
sec   2048R/E46749BB 2014-11-23
uid                  Aditya TestKeys (This is not a valid key) <adimania+test@gmail.com>
ssb   2048R/C5E57FF2 2014-11-23

Step 3: Upload the key to a public server using the id from the output above:

test$ gpg --keyserver pgp.mit.edu --send-key E46749BB

Now others can search for the key, use it to encrypt data, and send it to the "test" user.

To use GPG for saving passwords, have a look at the pass utility. It uses GPG to encrypt passwords and other data and stores them in a hierarchical format.
Categories: FLOSS Project Planets