LinuxPlanet

By Linux Geeks, For Linux Geeks.

The difference between an ‘akmod’ and ‘kmod’

Wed, 2014-05-21 14:03

 

A ‘kmod’ (kernel driver module) is the pre-compiled, low-level software interface between the kernel and a driver. It gets loaded (into RAM) and merged into the running kernel. Linux kmods are specific to one and only one kernel, and will not work (nor even load) for any other kernel.
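For instance, you can see which kmods are currently merged into your running kernel, and load one by hand, with the standard module tools (the module name below is purely illustrative):

# lsmod | head
# modprobe uvcvideo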

Advantages: Pre-Compiled – no need to fool around with compiling, compilers, *-devel packages and other associated overhead.

Disadvantages: updating and rebooting into a new kernel without updating the kmod(s) will result in loss of functionality, and there are inherent delays before updated kmods become available after a kernel update.

An ‘akmod’ (similar to dkms) is a solution to the problem of kernel modules depending on specific versions of a kernel. As you start your computer, the akmod system will check whether any kmods are missing and, if so, rebuild a new kmod for you. Akmods have more overhead than regular kmod packages, as they require a few development tools such as gcc and automake in order to build new kmods locally. If you think you’d like to try akmods, simply replace kmod with akmod in the package name.

With akmod you don’t have to worry about kernel updates as it recreates the driver for the new kernel on boot. With kmod you have to wait until a matching kmod is available before installing the kernel update.
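As a concrete sketch, assuming a Fedora system with the RPM Fusion repository enabled (the NVIDIA driver is used purely for illustration), the two approaches differ only in which package you install:

# yum install kmod-nvidia     # pre-built, matched to the currently packaged kernel
# yum install akmod-nvidia    # rebuilds itself at boot after a kernel update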

Advantages: obvious.

Disadvantages: HDD space required for compilers and *-devel packages; unforeseen/uncorrectable driver problems that cannot be resolved by the automatic tools.

Categories: FLOSS Project Planets

THE SADDEST SIGHT IN THE WORLD

Tue, 2014-05-20 04:52
Lostnbronx breaks a bottle of beer, and contemplates one's duty to oneself.
Categories: FLOSS Project Planets

Anyone interested to get updates for FL:2-devel ?

Mon, 2014-05-19 10:44

Hello all Foresight users.

I wonder if anyone is still interested in getting some updates to the current fl:2-devel repo? If so, leave a tiny comment and I will put in some time to update some regular packages in the near future.

As we are all waiting for F20, we are not sure how many users are left on the latest Foresight today…

Categories: FLOSS Project Planets

Using Saltstack to update all hosts, but not at the same time

Mon, 2014-05-19 00:20

Configuration management and automation tools like SaltStack are great: they allow us to deploy a configuration change to thousands of servers without much effort. However, while these tools are powerful and give us greater control of our environment, they can also be dangerous. Since you can roll out a configuration change to all of your servers at once, it is easy for that change to break all of your servers at once.

In today's article I am going to show a few ways you can run a SaltStack "highstate" across your environment, and how you can make those highstate changes a little safer by staggering when servers get updated.

Why stagger highstate runs

Let's imagine for a second that we are running a cluster of 100 webservers. For this webserver cluster we are using the following nginx state file to maintain our configuration and to ensure that the nginx package is installed and the service is running.

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/conf.d
      - file: /etc/nginx/globals

/etc/nginx/globals:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/globals
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/config/etc/nginx/nginx.conf
    - user: root
    - group: root
    - mode: 640

/etc/nginx/conf.d:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/conf.d
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

Now let's say you need to deploy a change to the nginx.conf configuration file. Making the change is pretty straightforward: we can simply change the source file on the master server and use salt to deploy it. Since we listed the nginx.conf file as a watched state, SaltStack will also restart the nginx service for us after changing the config file.

To deploy this change to all of our servers we can run a highstate from the master that targets every server.

# salt '*' state.highstate

One of SaltStack's strengths is the fact that it performs tasks in parallel across many minions. While that is a useful feature for performance, it can be a bit of a problem when running a highstate that restarts services across all of your minions.

The above command will deploy the configuration file to each server and restart nginx on all servers, effectively bringing down nginx on all hosts at the same time. Even if it is only down for a second, that restart is probably going to be noticed by your end users.

To avoid situations that bring down a service across all of our hosts at the same time, we can stagger when hosts are updated.

Staggering highstates

Ad-hoc highstates from the master

Initiating highstates is usually performed either ad-hoc or via a scheduled task. There are two ways to initiate an ad-hoc highstate: the salt-call command on the minion, or the salt command on the master. Running the salt-call command on each minion manually naturally avoids the possibility of restarting services on all minions at the same time, as it only affects the minion it is run from. The salt command on the master, however, can be told to update all hosts or only a subset of hosts at a given time, depending on the targets it is given.
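For reference, the minion-side equivalent is a single command run as root on the minion itself, and it only ever affects that one host:

# salt-call state.highstate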

The most common method of calling a highstate is the following command.

# salt '*' state.highstate

Since the above command runs the highstate on all hosts in parallel this will not work for staggering the update. The below examples will cover how to use the salt command in conjunction with SaltStack features and minion organization practices that allow us to stagger highstate changes.

Batch Mode

When initiating a highstate from the master you can utilize a feature known as batch mode. The --batch-size flag allows you to specify how many minions to run against in parallel. For example, if we have 10 hosts and we want to run a highstate on all 10, but only 5 at a time, we can use the command below.

# salt --batch-size 5 '*' state.highstate

The batch size can also be specified with the -b flag. We could perform the same task with the next command.

# salt -b 5 '*' state.highstate

The above commands will tell salt to pick 5 hosts, run a highstate across those hosts and wait for them to finish before performing the same task on the next 5 hosts until it has run a highstate across all hosts connected to the master.

Specifying a percentage in batch mode

Batch size can take either a number or a percentage. Given the same scenario, with 10 hosts and a highstate run 5 at a time, rather than giving a batch size of 5 we can give a batch size of 50%.

# salt -b 50% '*' state.highstate

Using unique identifiers like grains, nodegroups, pillars and hostnames

Batch mode picks which hosts to update at random, but you may find yourself wanting to update a specific set of minions first. Within SaltStack there are several options for identifying a specific minion; with some pre-planning on the organization of our minions we can use these identifiers to target specific hosts and control when and how they get updated.

Hostname Conventions

The most basic way to target a server in SaltStack is via the hostname. Choosing a good hostname naming convention is important in general but when you tie in configuration management tools like SaltStack it helps out even more (see this blog post for an example).

Let's give another example where we have 100 hosts, and we want to split our hosts into 4 groups; group1, group2, group3 and group4. Our hostname will follow the convention of webhost<hostnum>.<group>.example.com so the first host in group 1 would be webhost01.group1.example.com.

Now that we have a good naming convention, if we want to roll out our nginx configuration change and restart to these groups one by one, we can do so with the following salt command.

# salt 'webhost*group1*' state.highstate

This command will only run a highstate against hosts whose hostname matches the 'webhost*group1*' pattern, which means that only group1's hosts are going to be updated with this run of salt.

Nodegroups

Sometimes you may find yourself in a situation where you cannot use the hostname to identify classes of minions and the hostnames can't easily be changed, for whatever reason. If descriptive hostnames are not an option, then one alternative solution is to use nodegroups. Nodegroups are an internal grouping system within SaltStack that lets you target groups of minions by a specified name.

In the example below we are going to create 2 nodegroups for a cluster of 6 webservers.

Defining a nodegroup

On the master server we will define 2 nodegroups, group1 and group2. To add these definitions we will need to change the /etc/salt/master configuration file on the master server.

# vi /etc/salt/master

Find:

##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
# group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
# group2: 'G@os:Debian and foo.domain.com'

Modify To:

##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
nodegroups:
  group1: 'L@webhost01.example.com,webhost02.example.com,webhost03.example.com'
  group2: 'L@webhost04.example.com,webhost05.example.com,webhost06.example.com'

After modifying the /etc/salt/master file we will need to restart the salt-master service.

# /etc/init.d/salt-master restart

Targeting hosts with nodegroups

With our nodegroups defined we can now target our groups of minions by passing the -N <groupname> arguments to the salt command.

# salt -N group1 state.highstate

The above command will only run the highstate on minions within the group1 nodegroup.

Grains

Defining unique grains is another way of grouping minions. Grains are kind of like static variables for minions in SaltStack; by default grains will contain information such as network configuration, hostnames, device information and OS version. They are set on the minions at start time and do not change, which makes them a great candidate for identifying groups of minions.

To use grains to segregate hosts we must first create a grain that will have different values for each group of hosts. To do this we will create a grain called group; the value of this grain will be either group1 or group2. If we have 10 hosts, 5 of those hosts will be given a value of group1 and the other 5 will be given a value of group2.

There are a couple of ways to set grains: we can edit either the /etc/salt/minion configuration file or the /etc/salt/grains file on the minion servers. I personally like putting grains into the /etc/salt/grains file, and that's what I will be showing in this example.
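For completeness, the same grain set via the /etc/salt/minion configuration file would look roughly like this (a sketch of the alternative, not what we use below):

grains:
  group: group1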

Setting grains

To set our group grain we will edit the /etc/salt/grains file.

# vi /etc/salt/grains

Append:

group: group1

Since grains are only read when the minion service starts, we will need to restart the salt-minion service.

# /etc/init.d/salt-minion restart

Targeting hosts with grains

Now that our grain is set we can target our groups using the -G flag of the salt command.

# salt -G group:group2 state.highstate

The above command will only run the highstate function on minions where the grain group is set to group2

Using batch-size and unique identifiers together

At some point, after creating nodegroups and grouping grains you may find that you still want to deploy changes to only a percentage of those minions.

Luckily we can use --batch-size and nodegroup or grain targeting together. Let's say you have 100 webservers, and you split your webservers across 4 nodegroups. If you spread the hosts out evenly, each nodegroup would have 25 hosts within it, but this time restarting all 25 hosts at once is not what you want. Rather, you would prefer to restart only 5 hosts at a time; you can do this with batch size and nodegroups.

The command for our example above would look like the following.

# salt -b 5 -N group1 state.highstate

This command will update the group1 nodegroup, 5 minions at a time.

Scheduling updates

The above examples are great for ad-hoc highstates across your minion population; however, that only covers highstates being pushed manually. By scheduling highstate runs, we can make sure that hosts get the proper configuration automatically without any human interaction, but again we have to be careful with how we schedule these updates. If we simply told each minion to update every 5 minutes, those updates would surely overlap at some point.

Using Scheduler to schedule updates

The SaltStack scheduler system is a great tool for scheduling salt tasks, especially the highstate function. You can configure the scheduler in SaltStack in two ways: by appending the configuration to the /etc/salt/minion configuration file on each minion, or by setting the schedule configuration as a pillar for each minion.

Setting the configuration as a pillar is by far the easiest; however, the version of SaltStack I am using (0.16) has a bug where setting the scheduler configuration in the pillar does not work. So the example I am going to show is the first method: appending the configuration to the /etc/salt/minion configuration file. We are also going to use SaltStack to deploy this file, as we might as well tell SaltStack how to manage itself.

Creating the state file

Before adding the schedule we will need to create the state file to manage the minion config file.

Create a saltminion directory

We will first create a directory called saltminion in /srv/salt which is the default directory for salt states.

# mkdir -p /srv/salt/saltminion

Create the SLS

After creating the saltminion directory we can create the state file for managing the /etc/salt/minion configuration file. By naming the file init.sls we can reference this state as saltminion in the top.sls file.
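For reference, a minimal top.sls that applies this state to every minion might look like the following (assuming the default base environment in /srv/salt/top.sls):

base:
  '*':
    - saltminion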

# vi /srv/salt/saltminion/init.sls

Insert:

salt-minion:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/salt/minion

/etc/salt/minion:
  file.managed:
    - source: salt://saltminion/minion
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        saltmaster: master.example.com
        {% if "group1" in grains['group'] %}
        timer: 20
        {% else %}
        timer: 15
        {% endif %}

The above state file might look a bit daunting but it is pretty simple. The first section ensures that the salt-minion service is running and enabled; it also watches the /etc/salt/minion config file, and if that file changes salt will restart the service. The second section is where things get a bit more complicated: it manages the /etc/salt/minion configuration file itself. Most of this is standard SaltStack configuration management; however, you may have noticed a part that looks a bit different.

{% if "group1" in grains['group'] %} timer: 20 {% else %} timer: 15 {% endif %}

The above is an example of using jinja inside a state file. You can use jinja templating in SaltStack to create complicated statements. This one checks whether the grain "group" contains group1; if it does, the timer context variable is set to 20, otherwise it defaults to 15.

Create a template minion file

In the above salt state we told SaltStack that the salt://saltminion/minion file is a template, and that template file is a jinja template. This tells SaltStack to read the minion file and use the jinja templating language to parse it. The items under context are variables being passed to jinja while processing the file.

At this point it would probably be a good idea to actually create the template file; to do this we will start with a copy from the master server.

# cp /etc/salt/minion /srv/salt/saltminion/

Once we copy the file into the saltminion directory we will need to add the appropriate jinja markup.

# vi /srv/salt/saltminion/minion

First we will add the saltmaster variable, which will be used to tell the minions which master to connect to. In our case this will be replaced with master.example.com.

Find:

#master: salt

Replace with:

master: {{ saltmaster }}

After adding the master configuration, we can add the scheduler configuration to the same file. We will add the following to the bottom of the minion configuration file.

Append:

schedule:
  highstate:
    function: state.highstate
    minutes: {{ timer }}

In the scheduler configuration the timer variable will be replaced with either 15 or 20, depending on the group grain that is set on the minion. This tells the minion to run a highstate every 15 or 20 minutes, which should give approximately 5 minutes between the groups. The timing may need adjustment depending on the environment; when dealing with large numbers of servers you may need to build in a larger gap between the groups' highstates.
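To make that concrete, on a minion carrying the group1 grain the rendered block at the bottom of /etc/salt/minion would simply read:

schedule:
  highstate:
    function: state.highstate
    minutes: 20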

Deploying the minion config

Now that we have created the minion template file, we will need to deploy it to all of the minions. Since they don't already automatically update we can run an ad-hoc highstate from the master. Because we are restarting the minion service we may want to use --batch-size to stagger the updates.

# salt -b 10% '*' state.highstate

The above command will update all minions but only 10% of them at a time.

Using cron on the minions to schedule updates

An alternative to using SaltStack's scheduler is cron; the cron service was the default answer for scheduling highstates before the scheduler system was added to SaltStack. Since we are deploying a configuration to the minions to manage highstates, we can use salt to automate and manage this as well.

Creating the state file

Like with the scheduler option we will create a saltminion directory within the /srv/salt directory.

# mkdir -p /srv/salt/saltminion

Create the SLS file

There are a few ways you can create crontabs in salt, but I personally like just putting a file in /etc/cron.d as it makes the management of the crontab as simple as managing any other file in salt. The below SLS file will deploy a templated file /etc/cron.d/salt-highstate to all of the minions.

# vi /srv/salt/saltminion/init.sls

Insert:

/etc/cron.d/salt-highstate:
  file.managed:
    - source: salt://saltminion/salt-highstate
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        updategroup: {{ grains['group'] }}

Create the cron template

Again we are using template files and jinja to determine which crontab entry should be used. We are however performing this a little differently. Rather than putting the logic into the state file, we are putting the logic in the source file salt://saltminion/salt-highstate and simply passing the grains['group'] value to the template file in the state configuration.

# vi /srv/salt/saltminion/salt-highstate

Insert:

{{ if "group1" in grains['group'] }} */20 * * * * root /usr/bin/salt-call state.highstate {{ else }} */15 * * * * root /usr/bin/salt-call state.highstate {{ endif }}

One advantage of cron over salt's scheduler is that you have a bit more control over when the highstate runs. The scheduler system runs over an interval with the ability to define seconds, minutes, hours or days, whereas cron gives you that same ability but also allows you to define complex schedules like "only run every Sunday if it is the 15th day of the month". While that may be a bit of overkill for most, some may find that the flexibility of cron makes it easier to avoid both groups updating at the same time.
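As a rough sketch of that kind of rule (note that cron treats the day-of-month and day-of-week fields as an OR when both are restricted, so the Sunday check is moved into the command itself, and the % must be escaped inside a cron file):

0 3 15 * * root [ "$(date +\%u)" -eq 7 ] && /usr/bin/salt-call state.highstate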

Using cron on the master to schedule updates with batches

If you want to run your highstates more frequently and avoid conditions where everything gets updated at the same time, then rather than scheduling updates from the minions you could schedule the update from the salt master. By using cron on the master, we can use the same ad-hoc salt commands as above but call them on a scheduled basis. This solution is somewhat of a best-of-both-worlds scenario: it gives you an easy way of automatically updating your hosts in different batches, and it allows you to roll the update out to those groups a little at a time.

To do this we can create a simple job in cron; for consistency I am going to use /etc/cron.d, but this could be done via the crontab command as well.

# vi /etc/cron.d/salt-highstate

Insert:

0 * * * * root /usr/bin/salt -b 10% -G group:group1 state.highstate
30 * * * * root /usr/bin/salt -b 10% -G group:group2 state.highstate

The above will run the salt command for group1 at the top of the hour every hour and the salt command for group2 at the 30th minute of every hour. Both of these commands are using a batch size of 10% which will tell salt to only update 10% of the hosts in that group at a time. While this method might have some hosts in group1 being updated while group2 is getting started, overall it is fairly safe as it ensures that the highstate is only running on at most 20% of the infrastructure at a time.

One thing I advise is to make sure that you also segregate these highstates by server role. If you have a cluster of 10 webservers and only 2 database servers, and all of those servers are split amongst group1 and group2, then with the right timing both databases could be selected for a highstate at the same time. To avoid this you could either have your "group" grains be specific to the server roles or set up nodegroups that are specific to server roles.

An example of this would look like the following.

0 * * * * root /usr/bin/salt -b 10% -N webservers1 state.highstate
15 * * * * root /usr/bin/salt -b 10% -N webservers2 state.highstate
30 * * * * root /usr/bin/salt -b 10% -N alldbservers state.highstate

This article should give you a pretty good jump start on staggering highstates, or really any other salt function you want to perform. If you have implemented this same thing in another way I would love to hear about it; feel free to drop your examples in the comments.


Originally Posted on BenCane.com: Go To Article
Categories: FLOSS Project Planets

Creating jigsaw of image using gimp

Sat, 2014-05-17 08:19
Using GIMP any photo can be made to look like a jigsaw puzzle easily. Let us take the following image for an example.

Launch gimp and click on

File->open



Browse to location where the image is stored and open it in gimp.



Now click on

Filters->Render->Patterns->Jigsaw.



The following window will appear.



The number of tiles option lets us choose the number of horizontal and vertical divisions that are needed for the image; the higher these numbers, the more jigsaw pieces will appear in the image.

Bevel width allows us to choose the degree of slope of each piece's edge.

Highlight lets us choose how strongly the pieces should stand out in the image.

Style of the jigsaw pieces can be set to either square or curved.

Click on OK after setting the required options.

The image should appear as below depending on what values were set in the options.




Categories: FLOSS Project Planets

Thank You India for making Narendra Modi as Prime Minister of India

Fri, 2014-05-16 22:05
Hello All, firstly I would like to congratulate Shri Narendra Modi on becoming the next Prime Minister of India. This election was special, tough and very surprising not only for political parties but also for the people of India. Especially, this election was the toughest for the BJP, as the BJP had been struggling to get back into power for the last 10 […]
Categories: FLOSS Project Planets

CiviCRM – Open Constituent Management for Organisations

Thu, 2014-05-15 08:11

For our civic clients, the CRM of choice is CiviCRM – a free constituent management system specifically designed with organisations in mind.

CiviCRM in a nutshell

CiviCRM is essentially a lightweight relations management system, designed to easily integrate with organisations’ existing platforms, such as Drupal, WordPress or Joomla.

Vanilla CiviCRM offers the following list of features out-of-the-box:

  • Contact management
  • Contributions
  • Communications
  • Peer-To-Peer Fundraisers
  • Advocacy Campaigns
  • Events
  • Members
  • Reports
  • Case Management

In addition to this robust set of offerings, the CiviCRM community have developed and published upwards of 100 extensions for their own needs. See the full list at civicrm.org/extensions.

How can my organisation benefit from CiviCRM?

CRM in the traditional sense is focused on serving the needs of paying customers, where the main focus tends to shift towards money-related aspects in the relationship. This may not be an optimal solution for a non-profit organisation. As a piece of software born out of necessity, CiviCRM is custom built for the special needs of non-profits and organisations.

CiviCRM can be utilised as a monolithic solution for most resource management and communication, or it can be simply put to use as a single-purpose component i.e. as a contact database or a mailing list. The beauty of CiviCRM lies in its easy implementation and expandability.

Being free and open source software, CiviCRM users also enjoy all the benefits of free software such as rapid feature development rate, frequent updates, good documentation and no licensing fees. Click here to learn more about the benefits of open software.

As far as we know, Seravo is currently the only company in Finland with professional CiviCRM experience. We have clients who have benefited from using CiviCRM for the last 5 years. Currently the majority of the Finnish localisation project is authored by our staff.

CiviCRM dashboard in Finnish

Learn more

Visit civicrm.org for more information.

Presentation

We’ve recently published a presentation on CiviCRM in Finnish:

Categories: FLOSS Project Planets

To Serve Users

Wed, 2014-05-14 16:00

(Spoiler alert: spoilers regarding a 1950s science fiction short story that you may not have read appear in this blog post.)

Mitchell Baker announced today that Mozilla Corporation (or maybe Mozilla Foundation? She doesn't really say…) will begin implementing proprietary software by default in Firefox at the behest of wealthy and powerful media companies. Baker argues this serves users: that Orwellian phrasing caught my attention most.

In the old science fiction story, To Serve Man (which was later adapted for The Twilight Zone), aliens come to earth and freely share various technological advances, and offer free visits to the alien world. Eventually, the narrator, who remains skeptical, begins translating one of their books. The title is innocuous, and even well-meaning: To Serve Man. Only too late does the narrator realize that the book isn't about service to mankind, but rather — a cookbook.

It's in the same spirit that Baker seeks to serve Firefox's users up on a platter to the MPAA, the RIAA, and like-minded wealthy for-profit corporations. Baker's only defense appears to be that other browser vendors have done the same, and cites specifically for-profit companies such as Apple, Google, and Microsoft.

Theoretically speaking, though, the Mozilla Foundation is supposed to be a 501(c)(3) non-profit charity which told the IRS its charitable purpose was: to keep the Internet a universal platform that is accessible by anyone from anywhere, using any computer, and … develop open-source Internet applications. Baker fails to explain how switching Firefox to include proprietary software fits that mission. In fact, with a bit of revisionist history, she says that open source was merely an “approach” that Mozilla Foundation was using, not their mission.

Of course, Mozilla Foundation is actually a thin non-profit shell wrapped around a much larger entity called the Mozilla Corporation, which is a for-profit company. I have always been dubious about this structure, and actions like this make it obvious that “Mozilla” is focused on being a for-profit company, competing with other for-profit companies, rather than a charity serving the public (at least, in the way that I mean “serving”).

Meanwhile, I greatly appreciate that various Free Software communities maintain forks and/or alternative wrappers around many web browser technologies, which, like Firefox, succumb easily to for-profit corporate control. This process (such as Debian's iceweasel fork and GNOME's epiphany interface to Webkit) provides a nice “canary in the coalmine” to confirm there is enough software-freedom-respecting code still released to make these browsers usable by those who care about software freedom and reject the digital restrictions management that Mozilla now embraces. OTOH, the one item that Baker is right about: given that so few people oppose proprietary software, there soon may not be much of a web left for those of us who stand firmly for software freedom. Sadly, Mozilla announced today their plans to depart from curtailing that dystopia and will instead help accelerate its onset.

Related Links:

Categories: FLOSS Project Planets

Federal Appeals Court Decision in Oracle v. Google

Sat, 2014-05-10 09:33

[ Update on 2014-05-13: If you're more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it's always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool, which defends the freedom of the Free Software, functions properly, and actually works — in the real world — the way it should.

As I've written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It's commonly called a hack on copyright: turning the copyright system which is canonically used to restrict users' rights, into a system of justice for the equality of users.

However, it's this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, but that means the greater the negative force, the more powerful the positive force. So, as I read yesterday the Federal Circuit Appeals Court's decision in Oracle v. Google, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.

I stated clearly after Alsup's NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn't assigned the general job of considering whether APIs are copyrightable. Their job is to figure out if the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that's what the Federal Circuit Court attempted to do here, and while IMO they too erred regarding a factual issue, I don't think their decision is wholly useless nor categorically incorrect.

Their decision is worth reading in full. I'd also urge anyone who wants to opine on this decision to actually read the whole thing (which so rarely happens in these situations). I bet most pundits out there opining already didn't read the whole thing. I read the decision as soon as it was announced, and I didn't get this post up until early Saturday morning, because it took that long to read the opinion in detail, go back to other related texts and verify some details and then write down my analysis. So, please, go ahead, read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don't fall for that self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers, as the technical facts are highly pertinent.)

Ok, you've read the decision now? Good. Now, I'll tell you what I think in detail: (As always, my opinions on this are my own, IANAL and TINLA and these are my personal thoughts on the question.)

The most interesting thing, IMO, about this decision is that the Court focused on a fact from trial that clearly has more nuance than they realize. Specifically, the Court claims many times in this decision that Google conceded that it copied the declaring code used in the 37 packages verbatim (pg 12 of the Appeals decision).

I suspect the Court imagined the situation too simply: that there was a huge body of source code text, and that Google engineers sat there, simply cutting-and-pasting from Oracle's code right into their own code for each of the 7,000 lines or so of function declarations. However, I've chatted with some people (including Mark J. Wielaard) who are much more deeply embedded in the Free Software Java world than I am, and they pointed out it's highly unlikely anyone did a blatant cut-and-paste job to implement Java's core library API, for various reasons. I thus suspect that Google didn't do it that way either.

So, how did the Appeals Court come to this erroneous conclusion? On page 27 of their decision, they write: Google conceded that it copied it verbatim. Indeed, the district court specifically instructed the jury that ‘Google agrees that it uses the same names and declarations’ in Android. Charge to the Jury at 10. So, I reread page 10 of the final charge to the jury. It actually says something much more verbose and nuanced. I've pasted together below all the parts where the Alsup's jury charge mentions this issue (emphasis mine): Google denies infringing any such copyrighted material … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. … The copyrighted Java platform has more than 37 API packages and so does the accused Android platform. As for the 37 API packages that overlap, Google agrees that it uses the same names and declarations but contends that its line-by-line implementations are different … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. Google states, however, that the elements it has used are not infringing … With respect to the API documentation, Oracle contends Google copied the English-language comments in the registered copyrighted work and moved them over to the documentation for the 37 API packages in Android. Google agrees that there are similarities in the wording but, pointing to differences as well, denies that its documentation is a copy. Google further asserts that the similarities are largely the result of the fact that each API carries out the same functions in both systems.

Thus, in the original trial, Google did not admit to copying of any of Oracle's text, documentation or code (other than the rangeCheck thing, which is moot on the API copyrightability issue). Rather, Google said two separate things: (a) they did not copy any material (other than rangeCheck), and (b) admitted that the names and declarations are the same, not because Google copied those names and declarations from Oracle's own work, but because they perform the same functions. In other words, Google makes various arguments of why those names and declarations look the same, but for reasons other than “mundane cut-and-paste copying from Oracle's copyrighted works”.

For we programmers, this is of course a distinction without any difference. Frankly, programmers, when we look at this situation, we'd make many obvious logical leaps at once. Specifically, we all think APIs in the abstract can't possibly be copyrightable (since that's absurd), and we work backwards from there with some quick thinking that goes something like this: it doesn't make sense for APIs to be copyrightable because if you explain to me with enough detail what the API has to do, such that I have sufficient information to implement, my declarations of the functions of that API are going to necessarily be quite similar to yours — so much so that it'll be nearly indistinguishable from what those function declarations might look like if I cut-and-pasted them. So, the fact is, if we both sit down separately to implement the same API, well, then we're likely going to have two works that look similar. However, it doesn't mean I copied your work. And, besides, it makes no sense for APIs, as a general concept, to be copyrightable, so why are we discussing this again?0

But this is reasoning a programmer can love but the Courts hate. The Courts want to take a set of laws the legislature passed, some precedents that their system gave them, along with a specific set of facts, and then see what happens when the law is applied to those facts. Juries, in turn, have the job of finding which facts are accurate, which aren't, and then coming to a verdict, upon receiving instructions about the law from the Court.

And that's right where the confusion began in this case, IMO. The original jury, to start with, likely had trouble distinguishing three distinct things: the general concept of an API, the specification of the API, and the implementation of an API. Plus, they were told by the judge to assume API's were copyrightable anyway. Then, it got more confusing when they looked at two implementations of an API, parts of which looked similar for purely mundane technical reasons, and assumed (incorrectly) that textual copying from one file to another was the only way to get to that same result. Meanwhile, the jury was likely further confused that Google argued various affirmative defenses against copyright infringement in the alternative.

So, what happens with the Appeals Court? The Appeals court, of course, has no reason to believe the finding of fact of the jury is wrong, and it's simply not the appeals court's job to replace the original jury's job, but to analyze the matters of law decided by the lower court. That's why I'm admittedly troubled and downright confused that the ruling from the Appeals court seems to conflate the issue of literal copying of text and similarities in independently developed text. That is a factual issue in any given case, but that question of fact is the central nuance of API copyrightability, and it seems the Appeals Court glossed over it. The Appeals Court simply fails to distinguish between literal cut-and-paste copying from a given API's implementation and serendipitous similarities that are likely to happen when two API implementations support the same API.

But that error isn't the interesting part. Of course, this error is a fundamentally incorrect assumption by the Appeals Court, and as such the primary rulings are effectively conclusions based on a hypothetical fact pattern and not the actual fact pattern in this case. However, after poring over the decision for hours, it's the only error that I found in the appeals ruling. Thus, setting the fundamental error aside, their ruling has some good parts. For example, I'm rather impressed and swayed by their argument that the lower court misapplied the merger doctrine because it analyzed the situation based on the decisions Google had with regard to functionality, rather than the decisions of Sun/Oracle. To quote: We further find that the district court erred in focusing its merger analysis on the options available to Google at the time of copying. It is well-established that copyrightability and the scope of protectable activity are to be evaluated at the time of creation, not at the time of infringement. … The focus is, therefore, on the options that were available to Sun/Oracle at the time it created the API packages.

Of course, cropping up again in that analysis is that same darned confusion the Court had with regard to copying this declaration code. The ruling goes on to say: But, as the court acknowledged, nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result.

To go back to my earlier point, Google likely did write their own declaring code, and the code ended up looking the same as the other code, because there was no other way to implement the same API.

In the end, Mark J. Wielaard put it best when he read the decision, pointing out to me that the Appeals Court seemed almost angry that the jury hung on the fair use question. It reads to me, too, like the Appeals Court is slyly saying that the right affirmative defense for Google here is fair use, and that a new jury really needs to sit and look at it.

My conclusion is that this just isn't a decision about the copyrightability of APIs in the general sense. The question the Court would need to consider to actually settle that question would be: “If we believe an API itself isn't copyrightable, but its implementation is, how do we figure out when copyright infringement has occurred when there are multiple implementations of the same API floating around, which of course have declarations that look similar?” But the court did not consider that fundamental question, because the Court assumed (incorrectly) there was textual cut-and-paste copying. The decision here, in my view, is about a more narrow, hypothetical question that the Court decided to ask itself instead: “If someone textually copies parts of your API implementation, are merger doctrine, scènes à faire, and de minimis affirmative defenses likely to succeed?” In this hypothetical scenario, the Appeals Court claims “such defenses rarely help you, but a fair use defense might help you”.

However, on this point, in my copyleft-defender role, I don't mind this decision very much. The one thing this decision clearly seems to declare is: “if there is even a modicum of evidence that direct textual copying occurred, then the alleged infringer must pass an extremely high bar of affirmative defense to show infringement didn't occur”. In most GPL violation cases, the facts aren't nuanced: there is always clearly an intention to incorporate and distribute large textual parts of the GPL'd code (i.e., not just a few function declarations). As such, this decision is probably good for copyleft, since on its narrowest reading, this decision upholds the idea that if you go mixing in other copyrighted stuff, via copying and distribution, then it will be difficult to show no copyright infringement occurred.

OTOH, I suspect that most pundits are going to look at this in an overly contrasted way: NDCA said APIs aren't copyrightable, and the Appeals Court said they are. That's not what happened here, and if you look at the situation that way, you're making the same kinds of oversimplifications that the Appeals Court seems to have erroneously made.

The most positive outcome here is that a new jury can now narrowly consider the question of fair use as it relates to serendipitous similarity of multiple API function declaration code. I suspect a fresh jury focused on that narrow question will do a much better job. The previous jury had so many complex issues before them, I suspect that they were easily conflated. (Recall that the previous jury considered patent questions as well.) I've found that people who haven't spent their lives training (as programmers and lawyers have) to delineate complex matters and separate truly unrelated issues do a poor job at such. Thus, I suspect the jury won't hang the second time if they're just considering the fair use question.

Finally, with regard to this ruling, I suspect this won't become immediate, frequently cited precedent. The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle's next appeal will be quite weak, and the Appeals Court likely won't reexamine the question in any detail. In that outcome, very little has changed overall: we'll have certainty that API's aren't copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google's fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It's thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.

0: This is of course true for any sufficiently simple programming task. I used to be a high-school computer science teacher. Frankly, while I was successful twice in detecting student plagiarism, it was pretty easy to get false positives sometimes. And certainly I had plenty of student programmers who wrote their function declarations the same for the same job! And no, those weren't the students who plagiarized.

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 15: Why Dear Watson

Thu, 2014-05-08 09:44

Myself, Bryan Lunduke, Jono Bacon, and Stuart Langridge present Bad Voltage, in which:

  • Bryan gave his yearly “Linux Sucks” talk at LinuxFest Northwest, and the rest of us take issue with his approach, his arguments, his data sources, and his general sense of being
  • The XBox 360: reviewed as a TV set-top box, not as a gaming console
  • The up and coming elementary OS: is it any good, and what do we think of it?
  • Our discussion of elementary OS raised a number of questions: Daniel Foré, leader of the project, talks about the goals of the OS and answers our queries
  • Community recap: your emails and forum posts and happenings in the Bad Voltage community

Listen to: 1×15: Why Dear Watson

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets

REACH FOR THE SKY, STRANGER!

Tue, 2014-05-06 21:27
Lostnbronx buys a quadcopter, and thinks about community.
Categories: FLOSS Project Planets

ICONOCLAST

Tue, 2014-05-06 21:27
Lostnbronx is sick of everybody, and they're probably sick of him.
Categories: FLOSS Project Planets

Disable / Password Protect Single User Mode / RHEL / CentOS / 5.x / 6.x

Sun, 2014-05-04 23:15
Hello All, if you have not protected Single User Mode with a password then it is a big risk for your Linux server, so protecting Single User Mode with a password is very important when it comes to security. Today in this article I will show you how you can protect Single User Mode with a password on RHEL […]
Categories: FLOSS Project Planets

Separating a pdf into single pages using pdfseparate in linux

Fri, 2014-05-02 05:24
pdfseparate: A tool that can be used to split a pdf document into individual pages, or to extract a set of pages as individual files from a pdf document.

Let us say we have a document named hello.pdf that has 100 pages, and we need to extract pages 20 to 22 as individual pages. We can use pdfseparate to achieve this.

$ pdfseparate -f 20 -l 22 hello.pdf foo%d.pdf

After the execution of the command, we should have three files, foo20.pdf, foo21.pdf and foo22.pdf, which will be the 20th, 21st and 22nd pages of the hello.pdf document (the %d in the output name is replaced by the page number).
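If the -f and -l flags are omitted entirely, pdfseparate will simply write every page of the document to its own file:

$ pdfseparate hello.pdf page-%d.pdf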

If pdfseparate is not available, we need to install the package poppler-utils; on Debian-based systems:

$ sudo apt-get install poppler-utils

Categories: FLOSS Project Planets

The 500 Year Farm Manifesto Part 5

Wed, 2014-04-30 19:34

Human-created climate change is a reality and we are already facing the consequences of it. I'm not going to lay out the arguments for its veracity; I accept this reality even if the dear reader doesn't. I feel we have postponed action to reduce carbon emissions in the hope that some magic technological bullet would be developed to keep us from having to face the reality of using dead dinosaurs to make our modern lives possible.

The modern food system uses carbon from oil to operate the machines that plow the ground, plant the seeds, and harvest, process and ship the food to your supermarket. More carbon is used in the store and by the consumer to purchase and transport it home. Oil-derived chemical fertilizers are sprayed on the crops, and machinery uses more carbon to spray chemical pesticides and herbicides on them. The list seems almost never ending, all to deliver food of dubious quality.

The local food movement is a good start toward cutting out many of the steps involved. By eating local and in-season, people are drastically reducing the amount of carbon required to deliver food to their plate. The goal of the 500 Year Farm is to directly market its food to local consumers and restaurants and to provide nursery stock of locally adapted species of plants and animals to urban homesteaders. This not only reduces carbon usage but ensures a more stable and resilient food economy.

Reduction isn't the solution to all the carbon problems; we've gone too far down the road for that. We now need to put carbon back into the ground in order to reverse the effects of modern society on the earth. Using the groundbreaking work of Allan Savory and his development of holistic pasture management techniques, the 500 Year Farm will put more carbon into the ground, doing its part in preventing and reversing man-made desertification and helping restore the lungs of the planet.

Also, technologies that use resources efficiently, like rocket mass heaters, and low-carbon building techniques will be employed to ensure the farm's long-term resiliency.

Categories: FLOSS Project Planets

Beermiester

Wed, 2014-04-30 12:21
I've had a number of friends who have made home-brewed beer, everything from one small batch to try it out, to people who brew all the beer they drink and have for years. It's something that I've wanted to try personally for a really long time. For one, it involves science and cooking, two things I've already incorporated into my life in many ways. Another reason it interests me is that it uses agricultural products that are possible to grow in my area to make a value-added product, and that is always something that can help a small farm stay profitable. Even if I don't brew for sale on the farm, partnering with local breweries to make special brews would be a good idea, and as with all things the best producers are those that understand the process and appreciate and strive for a quality end product.

Recently a fellow Ingress player helped push me over the edge into trying it out and this post is about my first attempt, and success, at brewing beer.

I started out, at the urging of the aforementioned Ingress player, talking to the guys at Beer@home, a local brewing supply store that is a very short walk away from my work. They were very friendly and helpful. I was very impressed by their knowledge and selection. They had a few different options for brewing gear kits and went over each pro and con without showing signs of getting weary of my endless questions. So I picked the package I wanted and, having some overtime dollars, bought it. The only thing not included in the package was the brew kettle as there are different sizes, I wanted one big enough to do an entire 5 gallon batch so I also bought that.

The kit came with a copy of the book "How to Brew" by John J Palmer. The book is concise and informative and you don't have to read the entire thing to make your first batch. It has a primer section that has just enough info to get your head around the process. If you want to get super in depth on all the techniques and science of brewing it's in there too as well as trouble shooting and other reference materials.

The kit also included a choice of beer kits that include just about everything that is needed to make your first batch. The store had a large selection, and after talking with the staff about the types of beer I like to drink I ended up selecting their "Foreman" kit. It's an English-style brown ale.

I picked a Sunday, the day I keep free for my "projects" or whatever else I want to do, to brew my beer. After getting all the equipment out and reading the recipe I realized there were two other items I needed that were not included in the kit. The first was a thermometer; they did mention to me that I would need one, but I forgot to grab it the day I bought everything. The second was a muslin bag for the steeping grains. I think it is an oversight not to include one with their kits. They also never mentioned it would be needed. After all, this was a partial grain kit, so I don't know how you would make it without one. So I drove back down to the store, thankfully open on Sundays, and grabbed the missing items.

The directions for the kit were clear without being overly wordy or intimidating. To start, I boiled 1 gallon of water, per the instructions, put the grains in the steeping bag and added them to the boiling water, which was then turned down to a low simmer. The only issue I had was the bag wanting to stick to the bottom of the pan, an issue likely brought about by the sugars wanting to caramelize. In hindsight I should have added enough water to submerge the grains while still holding the bag off the bottom by suspending it from baker's twine.

After steeping the grains, I added the full volume of water recommended and brought it up to a nice rolling boil. Because of the volume of water it took close to an hour on the stove top for this to happen. I've seen people use turkey fryers to heat the pot, and I suspect that would greatly reduce the time but also increase the already expensive cost of equipment. I see the stove top as livable but recommend you allow for this time in your day.

Once boiling, you start with your hop additions at the given intervals. This beer had four different types of hops added at four different times. I used the Ovo timer app on my android phone to time out the intervals. Just after the last hop addition, I added the malt extract that provides most of the fermentable sugars for the beer. At this point it is called wort and needs to be cooled as soon as possible to yeast pitching temperature.

Some people use an immersion chiller, a coil of copper tube, to run tap water through and cool the wort. This was an added expense to an already expensive hobby and I opted for the alternative of an ice bath. I filled glass food storage containers with water and froze them to make large blocks of ice. I filled the bath tub with cold water, enough to go within 5 or so inches of the top of the kettle, and added the ice and set the brew kettle in the tub. Then I left it alone till the temp was under 80 degrees. This likely took longer than is ideal, almost an hour, and makes a firm argument as to why an immersion chiller is a good idea.

From this point on anything that comes into contact with the beer needs to be sanitized. The kit includes a product called Starsan that you mix with water to make an acid-based sanitizing solution. In the picture you can see the bucket filled with the sanitizer and all the equipment for siphoning the wort into the fermenter. I drained the sanitizer from the bucket into the glass carboy, the fermenter, and sanitized it. Then I drained the sanitizer and siphoned the wort out of the brew kettle into the fermenter. After that I pitched the yeast and set the carboy in the basement to start the fermentation process.
After 24 hours I wasn't seeing much action in the fermenter and was worried I'd messed something up. The worry was for naught, however, because 24 hours after that the yeast erupted with activity, the beer bubbled into action, and it was left to sit in quiet, dark silence for two weeks. At this point I'd invested not only quite a bit of money but quite a bit of time in the process, and it was hard not to roll over in my head all the things I wasn't 100% sure I did right. We had a long cold snap that kept the fermentation temperature low, around 59 degrees, most of the time. A little online research came back with the same answer time after time, a line from the book that came with the kit: "Relax, don't worry and have a homebrew". So I left it alone.

So, at this point the sugars have been converted into alcohol and it's time to put it in the bottle, a process that is pretty straightforward: sterilize everything again, put the beer in the bottling bucket, stir in some corn sugar and use the racking cane, a plastic tube with a push valve on the end of it, to fill sanitized bottles with beer. Then cap them. The added sugar gives the yeast a boost to let it make CO2 and carbonate the beer. It takes about two weeks for this to happen, after which the beer is ready to drink.

So...........what are the results?!

AMAZING, it's really good beer. It has a very nice head, a smooth drink with a firmly bitter finish. The bitterness isn't long lasting and is very refreshing. It has a solid mouth feel and honestly I couldn't have asked for a better first beer.


 I can't wait to try my next one! If you've been thinking about trying it and can afford the equipment required, I say go for it. I had a blast doing it and couldn't be more pleased with the results.

Categories: FLOSS Project Planets

Alfresco Email Document Action Module Released

Mon, 2014-04-28 08:04

Our Alfresco Email Documents Action project adds the ability to email documents directly from Alfresco Share via a Document Library action. The project consists of two modules: a repository module that implements the action in Java, and a Share client-side module that implements the client-side logic for calling the action remotely from Alfresco Share via JavaScript.

Alfresco Email Document Action Share and Rep Module Features

The features of the modules are:

  • Ability to selectively enable a folder or a document item to be sent by email straight from Alfresco Share,
  • Ability to convert document items to PDF before sending,
  • Ability to attach an email template to a folder or a document item,
  • Ability to record each email sent to a custom data list
Alfresco Customisation & Development

Alfresco's out-of-the-box web client for the repository, Share, provides a good foundation for building out your custom extensions and add-ons. The lack of comprehensive documentation and the rapidly changing Surf framework on which Share is built can make Alfresco development slow, and although Share provides several extension points it is not as flexible as other platforms for content presentation. Nevertheless, Share provides many benefits and productivity gains over custom development. Contact us for your Alfresco customisation and development needs.

Source Code Repository 

The source code has been released under the GPL, and both the source and binaries can be downloaded from the Alfresco Email Documents Project page.

Categories: FLOSS Project Planets

Uncoloring ls

Fri, 2014-04-25 05:59
By default on every recent shell the output of ls is colorized. This is a great feature - but it makes using terminals that use a non-standard [not(background==black)] color scheme awkward. Things just disappear; try reading a directory name displayed in yellow on a yellow background. It is difficult.
How this colorization gets set up in openSUSE is that the ls command is aliased to "ls --color=auto". You can see this aliasing using the alias command.
[fred@example ~]# alias
alias cp='cp -i'
alias l.='ls -d .* --color=auto'
alias ll='ls -l --color=auto'
alias ls='ls --color=auto'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'
So a simple way to turn this off is: unalias ls
There you go - no more annoying colors!  But they will be back the next time you login. Is this alias created in ~/.bashrc  - nope. ~/.bash_profile - nope. /etc/bashrc - nope. /etc/profile - nope, errr.... well, sort of.  The script /etc/profile runs every script in /etc/profile.d which ends in .sh.  Within /etc/profile.d is ...drumroll... colorls.sh which indeed does alias ls='ls --color=auto'.
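Incidentally, if you only want to skip the alias for a single invocation, prefixing the command with a backslash bypasses alias expansion:

\ls /etc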
To disable colorization - the aliasing of ls upon login:
mv /etc/profile.d/colorls.sh /etc/profile.d/colorls.sh.x
Now that the script does not end in .sh, /etc/profile will not run it.
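If you would rather leave the system-wide scripts alone, the same effect can be had per user by overriding the alias at the end of your own ~/.bashrc, for example:

alias ls='ls --color=never'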
When you start a new interactive login[*1] shell, bash runs /etc/profile, then the first file it finds of ~/.bash_profile, ~/.bash_login, and ~/.profile. Generally ~/.bash_profile will run ~/.bashrc, and generally ~/.bashrc will run /etc/bashrc. And, of course, when /etc/profile runs it runs each of the scripts which match /etc/profile.d/*.sh. It is almost comical; who says UNIX can't dance?

[*1] The shell distinguishes between login and secondary shells - a secondary shell is invoked by an existing shell, whereas a login shell is the first shell.  It also distinguishes between interactive and non-interactive shells - like those running a script.

Categories: FLOSS Project Planets

Hide Users / Login as “Other” user from Login Screen | Ubuntu 14.04 LTS Trusty Tahr

Thu, 2014-04-24 22:30
Hello All, this article will show you how to hide users and log in as the “Other” user from the login screen of Ubuntu 14.04 LTS Trusty Tahr. I have installed Ubuntu 14.04 LTS Trusty Tahr since the first day of its release; as of now, not a single crash has taken place, which is a great indication of stability. I […]
Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 14: Cloudy Donkey Mascots

Thu, 2014-04-24 10:03

Myself, Bryan Lunduke, Jono Bacon, and Stuart Langridge present Bad Voltage, in which the cloud shows up a lot this week, in the following ways:

  • Personal cloud storage: what do we use, what do we like, and what do we think? Companies, personal clouds, servers, tarsnap and Dropbox
  • Christian Schaller, who manages the team at Red Hat producing their “Fedora Workstation” concept, wrote up what “Workstation” is and now comes to answer questions about it
  • Breaking Down the Bullshit: we look at “the cloud” as a whole. What does it mean? And why?
  • I review the Pebble watch: as one of its original Kickstarter backers, I’ve now had the Pebble for long enough to form an opinion
  • Community recap: your emails and forum posts and happenings in the Bad Voltage community

Listen to: 1×14: Cloudy Donkey Mascots

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets