By Linux Geeks, For Linux Geeks.

Different ways to get help with Foresight

Tue, 2014-06-03 06:24

There are a few ways to get help with Foresight for the various issues you might hit.


  • JIRA – Official bug tracker

The drawback is that other users may never know an issue has been reported unless they visit the tracker or subscribe to new entries.


  • IRC – Real-time chat

A great way to get help right away, but it depends on users being there and awake.


  • Ask Foresight – Q&A site

Still a fairly new way to get help, but it works really well. Questions are visible in more places, other users can see that something has been asked, and it is easy to follow via RSS feed.


  • G+ – Google+ community

Probably the easiest way to get help, as it likely sends a notification to the users in the group who have "notify on" enabled.


  • Forum – Unofficial forum

For those who still prefer a forum for writing about all kinds of things.


  • Mailing list – E-mail to our mailing lists

Some users might prefer to send an e-mail instead. It is harder to follow for users who don't subscribe, though.


We always recommend JIRA in the first place, but you can also post the JIRA link on Ask Foresight or G+ to notify other users about it.

For common questions, we recommend Ask Foresight, G+, or the forum.

We usually test packages and talk about development on IRC, mostly in the devel channel. We use the mailing lists to keep track of ongoing development.

For the fastest help with non-critical issues, use G+ or Ask Foresight.

I might be wrong about some of this, but it reflects my personal feeling about the ways to get help and how quickly we notice a problem or issue.

Categories: FLOSS Project Planets


Mon, 2014-06-02 22:19
Lostnbronx has an idea for a piece of software, but doubts it will ever come into existence.
Categories: FLOSS Project Planets

Tablet vs. Laptop vs. Combo

Fri, 2014-05-30 17:32
So, my latest personal technology debate: which should be the next system that I invest in for my family members' consumption? Let me state that if I had more money than sense, I would invest in Apple products across the board for everyone (4 of us), but that option is way too expensive. As well, the two children would be taking theirs to public school, and a loss of that kind of value would be crushing.

My primary goal is for these new devices to replace older machines and multiple under-powered devices, in an attempt to consolidate the kids onto a specific device/platform on which they can accomplish school work. That choice should be more than adequate for their mother, who mostly does email and navigation, and for myself something grander than a smartphone, though it would clearly be under-powered for virtualization or the other purposes that require I have a modern desktop performance system.

Gaming is not coming into the equation at all, which also drops some of the performance requirements. I can add that I have an affinity for Linux over all other operating systems for the management of security. As I have mentioned before, Apple would be my mainstay if it were not so expensive, so I do not shy away from the most productive choice just because it is not my ideal. In this comparison there are devices running Win8, Android, and basic systems I would install Linux on.

So I started a quest with a budget. Since that is probably the most limiting factor, let's settle it first: my target is €400-500 per person. This puts me in the high-end tablet range or the low-end laptop range. My first inclination was to go towards a tablet, so that it would be as portable as possible with the longest battery life, but I personally need to be able to use a hardware keyboard. While I know a Bluetooth keyboard can be added to most any device, I didn't want yet another separate thing to try and keep charged. This has all gotten me down to looking closely at the following devices.

Asus Transformer Pad TF701T (Android 4.3)
Microsoft Surface 2 (Win8 RT)
Samsung Galaxy Note 10.1 (2014 edition) (Android 4.4)
Lenovo Miix 2 10  (Win8.1)
Asus Transformer Book T100 (Win8.1)

The review points I used were apps, operating system, size, ports, and expandability. Some software limitations (like the lack of a proper office suite for Android, and Windows 8 RT being restricted to the Windows Store) leaned me towards the new entries in the "convertible" space: the Asus Transformer Book T100 and the Lenovo Miix 2 10. With very similar specs, I've made a short list of the actual differences.

Lenovo Miix 2 10 (Win8.1)
+Full HD screen
+rear camera
+larger kb keys
-non-standard micro-USB charger
-USB 2.0 only
-kb numbers out of alignment
-magnet attach, with 3 fixed positions (no flexible hinge)

Asus Transformer Book T100 (Win8.1)
-HD screen (not Full HD)
-kb ~95% size, like a netbook
-no rear camera
+physical locking of screen to dock
+kb numbers align

I had hoped to find a device that also had built-in 3G, but it seems such a module is not at all common on devices in my price range.

The Asus came in a bit cheaper, but not really by enough to polarize my decision. So, based on the few differences between the two systems, I made the choice to go with the Transformer Book T100 on these points:

  1. USB 3.0.  No matter how much I try, I will never be able to make USB 2.0 ports become a USB 3.0 port.
  2. I found a cover for the T100 that can let it function in all the best tablet modes (stand horizontal and landscape) that does NOT block the normal keyboard dock section of the tablet. Coodio Smart Asus Transformer Book T100TA.
  3. Linux adaptation has already started.
  4. Physical latching mechanism for the keyboard to attach.
I'll be getting the devices very soon, as I ordered them just recently. I've been watching Win8 "howto" videos on YouTube to bone up on the UI, as I have not yet used a Win8 machine as a daily driver. I'll post back here again after they arrive to follow up with my impressions of the machine after some use.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
Categories: FLOSS Project Planets

Latest Firefox versions available

Thu, 2014-05-29 09:59

I’ve been building Firefox 29.0.1 and 30.0b8 in the foresighters repo.

To install either of them, open a terminal and run:

32-bit:

sudo conary install

or for the beta:

sudo conary install

64-bit:

sudo conary install

or for the beta:

sudo conary install
Categories: FLOSS Project Planets

Configure and connect tata photon plus in debian 7

Tue, 2014-05-27 04:12
The 3G dongle of Tata Photon+ (Huawei EC1260) might not work by default in Debian 7 and other Linux distros. Here is how to get it working.

The first step is to install the packages usb-modeswitch and usb-modeswitch-data, if they are not already installed:

sudo apt-get install usb-modeswitch usb-modeswitch-data

Once the two packages have been installed successfully, we next need to install wvdial:

sudo apt-get install wvdial

After installing the package, insert the dongle.

Now open a terminal and type the command

$ sudo wvdialconf /etc/wvdial.conf

We will see an output as below

Editing `/etc/wvdial.conf'.

Scanning your serial ports for a modem.

Modem Port Scan<*1>: S0 S1 S2 S3
ttyUSB0<*1>: ATQ0 V1 E1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 -- ATQ0 V1 E1 S0=0
ttyUSB0<*1>: ATQ0 V1 E1 &C1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- OK
ttyUSB0<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD
ttyUSB0<*1>: Speed 9600: AT -- OK
ttyUSB0<*1>: Max speed is 9600; that should be safe.
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- ATQ0 V1 E1 &C1 &D2 +FCLASS=0
ttyUSB0<*1>: failed with 9600 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- OK
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
ttyUSB2<*1>: ATQ0 V1 E1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD
ttyUSB2<*1>: Speed 9600: AT -- OK
ttyUSB2<*1>: Max speed is 9600; that should be safe.
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
Found a modem on /dev/ttyUSB0.
Modem configuration written to /etc/wvdial.conf.
ttyUSB0: Speed 9600; init "ATQ0 V1 E1 &C1 &D2 +FCLASS=0"
ttyUSB2: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

The line "Found a modem on /dev/ttyUSB0." near the end shows that our dongle was recognized as the device /dev/ttyUSB0.

Next we need to configure the wvdial application to allow it to use the dongle to connect to the internet.

To do this, open the wvdial.conf file located at /etc/wvdial.conf:

$ sudo gedit /etc/wvdial.conf

By default the file will look something like this:

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
ISDN = 0
New PPPD = yes
; Phone = <Target phone number>
Modem = /dev/ttyUSB0
;Username =
;Password =
Baud = 9600

Delete the ";" at the beginning of the "Phone", "Username", and "Password" lines, and replace the three lines with:

Phone = #777
Username = internet
Password = internet

Then add the following lines to the file at the end.

Init3 = AT+CRM=1
Stupid Mode = 1

The edited wvdial.conf looks like this:

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
ISDN = 0
New PPPD = yes
Phone = #777
Modem = /dev/ttyUSB0
Username = internet
Password = internet
Baud = 9600
Init3 = AT+CRM=1
Stupid Mode = 1

Save the file and close it. Now go back to the terminal and type the command

$ wvdial
--> WvDial: Internet dialer version 1.61
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 &C1 &D2 +FCLASS=0
ATQ0 V1 E1 &C1 &D2 +FCLASS=0
OK
--> Sending: AT+CRM=1
AT+CRM=1
OK
--> Modem initialized.
--> Sending: ATDT#777
--> Waiting for carrier.
ATDT#777
CONNECT
^HRSSILVL:80
~[7f]}#@!}!})} }9}"}&} } } } }#}%B#}%}%}&@}*vO}'}"}(}"m.~~[7f]}#@!}!}*} }9}"}&} } } } }#}%B#}%}%}&@}*vO}'}"}(}"!C~
--> Carrier detected. Starting PPP immediately.
--> Starting pppd at Tue May 27 14:06:08 2014
--> Pid of pppd: 13443
--> Using interface ppp0
--> pppd: (��[08]@��[08][08]��[08]
--> local IP address
--> pppd: (��[08]@��[08][08]��[08]
--> remote IP address
--> pppd: (��[08]@��[08][08]��[08]
--> primary DNS address
--> pppd: (��[08]@��[08][08]��[08]
--> secondary DNS address
--> pppd: (��[08]@��[08][08]��[08]

Now open a browser and start browsing.

Note: before running the wvdial command in the last step, turn off mobile broadband in the GUI.
Categories: FLOSS Project Planets

Transferring a .bin from openSUSE to U-Boot, or, Rightly Configuring TFTP on openSUSE

Mon, 2014-05-26 12:37

After spending a *lot* of time figuring out why I could not transfer a standalone binary to u-boot running on my BeagleBone Black, I finally discovered it was a firewall issue. This post is to save anyone in the future from suffering the same nightmare as I just went through on openSUSE 13.1.

The Problem:

I needed to put a .bin on my BBB which has U-Boot. The available options are:

Transferring the S-Record

SREC is the hex encoding of the binary data generated on compilation. To load it, U-Boot provides the `loads` command at its console; you just need to pass the ASCII-hex data from the .srec file to the serial console (see this). The problem is that the speed at which this data is sent must be acceptable to the U-Boot console. It gave me a `data abort` message and my board reset.

Using TFTP

The better option: tftp. Set up static IPs for the host and the board (set the env vars ipaddr and serverip in U-Boot) and call tftp. It gave me this:

U-Boot# tftp
link up on port 0, speed 100, full duplex
Using cpsw device
TFTP from server; our IP address is
Filename 'hello_world.bin'.
Load address: 0x80200000
Loading: T T T T T T T T T T T T T T T
Abort

(T = Timeout.)


TFTP uses UDP port 69 for transfers. I needed to explicitly check "Open port in firewall" in the TFTP server configuration in YaST, and add port 69 under Firewall -> Allowed Services -> UDP Ports.
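For reference, the YaST checkbox ends up writing the same thing into the firewall configuration. This is a sketch assuming the default SuSEfirewall2 setup on openSUSE 13.1; if the variable already lists other ports, append 69 rather than replacing them:

```shell
# /etc/sysconfig/SuSEfirewall2
# Allow incoming TFTP transfers (UDP port 69) on the external zone:
FW_SERVICES_EXT_UDP="69"
```

Then restart the firewall (e.g. `rcSuSEfirewall2 restart`) for the change to take effect.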


Categories: FLOSS Project Planets

X-Loader / MLO With a Custom Payload

Sun, 2014-05-25 06:34

X-Loader (MLO) is the SPL (u-boot/common/spl/spl.c), responsible for eMMC initialization and executing the u-boot binary. It first parses the image header info (u-boot/common/spl/spl.c:spl_parse_image_header), which effectively does this:

if (magic number of header == IH_MAGIC) {
    set spl_image to the detected image
} else {
    fill in spl_image assuming u-boot.bin
}
...
call spl_image->entry_point()

IH_MAGIC is set to 0x27051956, the default magic number used when creating an image with mkimage. Such an image can be called from within u-boot at the u-boot command line. By default, the SPL assumes a `uImage` payload and, if one is not found, tries to launch u-boot.

Categories: FLOSS Project Planets

error: function declaration isn’t a prototype [-Werror=strict-prototypes]

Sun, 2014-05-25 00:55
While compiling kernel modules, or any C program, we might at times come across the error:

error: function declaration isn’t a prototype [-Werror=strict-prototypes]

This error is being shown for the function declaration

void hello();

Though the syntax looks fine, in C a declaration with empty parentheses specifies an unknown argument list, not an empty one, so the compiler expects us to write "void" explicitly when the function takes no arguments. That is, we need to modify the above declaration to:

void hello(void);

After the above change, we should not get the above error.
Categories: FLOSS Project Planets

Setup Redis Failover with Redis Sentinel

Fri, 2014-05-23 16:23

Recently I’ve been playing with Redis, studying it as an alternative to memcached for one project. One of my main requirements was a good failover solution for a key-value memory database. There are several ways to do this: from code with memcached (doing get/set while checking the availability of the servers), or better, using the repcached patch for memcached. The first is not a clean solution at all, and I was not very convinced by repcached. After getting more involved in the features Redis can offer, one of the interesting ones is persistence on disk: Redis stores all the in-memory key/value pairs on disk, so in case of failure, after recovery you get the data from the last in-memory snapshot. Note that the store to disk is not an operation performed on the fly, so you can lose some data; Redis offers different kinds of setup for storing to disk, and it’s important that you understand how they work. In any case, remember you’re working with an in-memory key-value cache solution, so it is not the place to persist critical data. A good read that I recommend to understand how persistence works:

Another interesting feature that I really appreciate in this solution is the ability to work with different data structures: lists, hashes, sets, and sorted sets. This gives you more flexibility to store different values on the same key, with support for native data types provided by the client library of your programming language. You can check the different data structures used in Redis here:

After this small introduction to why I chose Redis, I’ll talk about the failover solution. Redis supports asynchronous master-slave replication, and Sentinel provides the failover. Sentinel ships with Redis since version 2.6, but the project documentation recommends the version shipped with Redis 2.8, which includes very important enhancements. Sentinel is a distributed system: the different processes communicate with each other via messages, using protocols to elect a new master and to inform clients of the address of the cluster’s current master.

We’ll run Sentinel on our systems as a separate daemon listening on a different port. It communicates with the other Sentinels set up in the cluster to alert on a node failure and to choose a new master. Sentinel rewrites the configuration files of our servers, both to attach a recovered node to the cluster (set up as a slave) and to promote a slave to master. The basic process for choosing a new master is this:

1.- One Sentinel node detects a server failure in the cluster within the number of milliseconds set in the "down-after-milliseconds" directive. At this moment that Sentinel marks the instance as subjectively down (SDOWN).

2.- When enough Sentinels agree about the master failure, the node is marked as objectively down (ODOWN) and the failover trigger is processed. The number of Sentinels required is configured per master.

3.- Even after the failover trigger, that is still not enough to perform a failover: it is subject to a quorum process, and at least a majority of Sentinels must authorize the Sentinel to fail over.
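As a rough sketch of the logic in steps 1-3 (this is not Sentinel's actual code; the function name and counts are purely illustrative):

```python
def failover_authorized(sdown_votes: int, quorum: int, total_sentinels: int) -> bool:
    """Sketch of Sentinel's two-stage decision.

    ODOWN is declared once `quorum` sentinels each independently report
    SDOWN for the master; the failover itself additionally needs a
    majority of all sentinels to authorize it.
    """
    odown = sdown_votes >= quorum
    majority = sdown_votes > total_sentinels // 2
    return odown and majority

# With a quorum of 2 and 3 sentinels in total:
print(failover_authorized(sdown_votes=2, quorum=2, total_sentinels=3))  # True
print(failover_authorized(sdown_votes=1, quorum=2, total_sentinels=3))  # False
```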

So basically we’ll need a minimum of three nodes in our cluster to set up our Redis failover solution. In my case I chose two Redis servers (master & slave), both also running Sentinel, and a third node running just Sentinel for the quorum process. For more information about the failover process and Sentinel, you can check the official documentation:

With these basics of how Redis & Sentinel work covered, we can begin with the setup. For this environment I used a total of three servers running Ubuntu 14.04; all I needed to do was install redis-server from the repositories. Note that on another GNU/Linux distribution or an older Ubuntu version you may need to compile and install by hand.

- Setup for redis sentinels (nodes 1,2,3) /etc/redis/sentinel.conf:

# port <sentinel-port>
# The port that this sentinel instance will run on
port 26379
daemonize yes
pidfile /var/run/redis/
loglevel notice
logfile /var/log/redis/redis-sentinel.log

# Master setup
# sentinel monitor <master-name> <ip> <port> <quorum>
# Minimum of two sentinels to declare an ODOWN
sentinel monitor mymaster 6379 2
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 5000
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 900000
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1

# Slave setup
sentinel monitor resque 6379 2
sentinel down-after-milliseconds resque 5000
sentinel failover-timeout resque 900000
sentinel parallel-syncs resque 4

- Create init scripts for sentinels (nodes 1,2,3) /etc/init.d/redis-sentinel:

#! /bin/sh
### BEGIN INIT INFO
# Provides:          redis-sentinel
# Required-Start:    $syslog $remote_fs
# Required-Stop:     $syslog $remote_fs
# Should-Start:      $local_fs
# Should-Stop:       $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: redis-sentinel - Persistent key-value db
# Description:       redis-sentinel - Persistent key-value db
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/redis-sentinel
DAEMON_ARGS=/etc/redis/sentinel.conf
NAME=redis-sentinel
DESC=redis-sentinel
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/

test -x $DAEMON || exit 0

if [ -r /etc/default/$NAME ]
then
    . /etc/default/$NAME
fi

. /lib/lsb/init-functions

set -e

case "$1" in
  start)
    echo -n "Starting $DESC: "
    mkdir -p $RUNDIR
    touch $PIDFILE
    chown redis:redis $RUNDIR $PIDFILE
    chmod 755 $RUNDIR
    if [ -n "$ULIMIT" ]
    then
        ulimit -n $ULIMIT
    fi
    if start-stop-daemon --start --quiet --umask 007 --pidfile $PIDFILE --chuid redis:redis --exec $DAEMON -- $DAEMON_ARGS
    then
        echo "$NAME."
    else
        echo "failed"
    fi
    ;;
  stop)
    echo -n "Stopping $DESC: "
    if start-stop-daemon --stop --retry forever/TERM/1 --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON
    then
        echo "$NAME."
    else
        echo "failed"
    fi
    rm -f $PIDFILE
    sleep 1
    ;;
  restart|force-reload)
    ${0} stop
    ${0} start
    ;;
  status)
    echo -n "$DESC is "
    if start-stop-daemon --stop --quiet --signal 0 --name ${NAME} --pidfile ${PIDFILE}
    then
        echo "running"
    else
        echo "not running"
        exit 1
    fi
    ;;
  *)
    echo "Usage: /etc/init.d/$NAME {start|stop|restart|force-reload|status}" >&2
    exit 1
    ;;
esac

exit 0

- Give execution permission on the script:

# chmod +x /etc/init.d/redis-sentinel

- Start the script automatically at boot time:

# update-rc.d redis-sentinel defaults

- Change the owner & group of /etc/redis/ to allow Sentinel to change the configuration files:

# chown -R redis.redis /etc/redis/

- On node 3 I won’t use redis-server, so I can remove its init script:

# update-rc.d redis-server remove

- Edit the configuration of the Redis server on nodes 1 and 2 (/etc/redis/redis.conf) with the proper setup for your project. The only requirement for working with Sentinel is to set the proper IP address in the bind directive. All the directives are commented in the file and are very clear, so take your time adapting Redis to your project.

- Connecting to our redis cluster:

Now we’ve got our Redis cluster ready to store our data. In my case I work with Perl, and currently I’m using this library: which you can install using the cpan tool. Note that the version coming from the Ubuntu repositories (libredis-perl) is quite old and doesn’t implement the Sentinel interface, so it’s better to install the module from CPAN.

So, to connect to our cluster as documented in the client library, I used the following:

my $cache = Redis->new(
    sentinels => [ "redis1:26379", "redis2:26379", "node3:26379" ],
    service   => 'mymaster',
);

So basically the library will try to connect to the different Sentinel servers and get the address of the current Redis master, which is where it will fetch and store the data.

Another solution, instead of connecting to the different Sentinel servers from our scripts, is to use HAProxy as a backend and a single server connection point for our clients. HAProxy can check the different Redis servers for the string "role:master" and redirect all requests to that server.
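A sketch of such an HAProxy setup (server names and ports are illustrative; the tcp-check sequence asks each Redis node for its replication info and only keeps the one reporting role:master in the pool):

```
listen redis-master
    bind *:6379
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 redis1:6379 check inter 1s
    server redis2 redis2:6379 check inter 1s
```

Clients then simply connect to the HAProxy address, and after a failover the health checks move traffic to the newly promoted master.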

Take a look at the client library documentation for the programming language used in your project. The different clients currently supported by Redis are listed here:

- Sources:

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 16: Forgotten to be Right

Thu, 2014-05-22 11:25

From the Bad Voltage site:

Myself, Bryan Lunduke, Jono Bacon, and Stuart Langridge present Bad Voltage, the greatest podcast in the history of this or any other universe. In this episode:

Listen to: 1×16: Forgotten to be Right

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.


Categories: FLOSS Project Planets

How to launch .jar files using nautilus

Wed, 2014-05-21 14:52

This is not a Nautilus-specific solution and will work with other tools too (other file managers, xdg-open on the CLI, etc.).

Create a run-jar.desktop in your ~/.local/share/applications/ directory with the following content:

[Desktop Entry]
Encoding=UTF-8
Type=Application
Exec=java -jar %f
Icon=java
Name=run-jar
Name[zh_CN]=run-jar
Comment=Run the jar file
Comment[zh_CN]=运行 JAR 文件

Now when you open the file’s Properties dialog and go to the ‘Open With’ tab, you can see run-jar listed under ‘show more apps’.

To make run-jar the default action, use Nautilus’s ‘set default’ button or type the following command in a terminal (note that `xdg-mime query default application/x-java-archive` only reports the current default, while `xdg-mime default` sets it):

xdg-mime default run-jar.desktop application/x-java-archive

The mime type can be found with the command:

xdg-mime query filetype my_shiny_app.jar


There are other ways to do this too, such as creating a Nautilus script, but this feels like a better way.

Categories: FLOSS Project Planets

The difference between an ‘akmod’ and ‘kmod’

Wed, 2014-05-21 14:03


A ‘kmod’ (kernel driver module) is the pre-compiled, low-level software interface between the kernel and a driver. It gets loaded (into RAM) and merged into the running kernel. Linux kmods are specific to one and only one kernel, and will not work (nor even load) for any other kernel.

Advantages: pre-compiled, so there is no need to fool around with compilers, *-devel packages, and other associated overhead.

Disadvantages: updating and rebooting into a new kernel without updating the kmod(s) will result in a loss of functionality, and there are inherent delays in updating kmods after kernel updates.

akmods (similar to dkms) are a solution to the problem of some kernel modules depending on specific versions of a kernel. As you start your computer, the akmod system will check whether any kmods are missing and, if so, rebuild a new kmod for you. Akmods have more overhead than regular kmod packages, as they require a few development tools, such as gcc and automake, in order to build new kmods locally. If you think you’d like to try akmods, simply replace kmod with akmod in the package name.

With akmod you don’t have to worry about kernel updates as it recreates the driver for the new kernel on boot. With kmod you have to wait until a matching kmod is available before installing the kernel update.

Advantages: obvious.

Disadvantages: HDD space required for compilers and *-devel packages; unforeseen/uncorrectable driver problems that cannot be resolved by the automatic tools.

Categories: FLOSS Project Planets


Tue, 2014-05-20 04:52
Lostnbronx breaks a bottle of beer, and contemplates one's duty to oneself.
Categories: FLOSS Project Planets

Anyone interested in getting updates for FL:2-devel?

Mon, 2014-05-19 10:44

Hello all Foresight users.

I wonder if anyone is still interested in getting some updates to the current fl:2-devel repo? If so, leave a tiny comment and I will put in some time to update some regular packages in the near future.

As we are all waiting for F20, we are not sure how many users are left on the latest Foresight today…

Categories: FLOSS Project Planets

Using Saltstack to update all hosts, but not at the same time

Mon, 2014-05-19 00:20

Configuration management and automation tools like SaltStack are great: they allow us to deploy a configuration change to thousands of servers without much effort. However, while these tools are powerful and give us greater control of our environment, they can also be dangerous. Since you can roll out a configuration change to all of your servers at once, it is easy for that change to break all of your servers at once.

In today's article I am going to show a few ways you can run a SaltStack "highstate" across your environment, and how you can make those highstate changes a little safer by staggering when servers get updated.

Why stagger highstate runs

Let's imagine for a second that we are running a cluster of 100 webservers. For this webserver cluster we are using the following nginx state file to maintain our configuration, ensure that the nginx package is installed and the service is running.

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/conf.d
      - file: /etc/nginx/globals

/etc/nginx/globals:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/globals
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/config/etc/nginx/nginx.conf
    - user: root
    - group: root
    - mode: 640

/etc/nginx/conf.d:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/conf.d
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

Now let's say you need to deploy a change to the nginx.conf configuration file. Making the change is straightforward: we can simply change the source file on the master server and use salt to deploy it. Since we listed the nginx.conf file as a watched state, SaltStack will also restart the nginx service for us after changing the config file.

To deploy this change to all of our servers we can run a highstate from the master that targets every server.

# salt '*' state.highstate

One of SaltStack's strengths is the fact that it performs tasks in parallel across many minions. While that is a useful feature for performance, it can be a bit of a problem when running a highstate that restarts services across all of your minions.

The above command will deploy the configuration file to each server and restart nginx on all of them, effectively bringing down nginx on every host at the same time. Even if it is only for a second, that restart is probably going to be noticed by your end users.

To avoid situations that bring down a service across all of our hosts at the same time, we can stagger when hosts are updated.

Staggering highstates

Ad-hoc highstates from the master

Initiating a highstate is usually performed either ad-hoc or via a scheduled task. There are two ways to initiate an ad-hoc highstate: the salt-call command on the minion, or the salt command on the master. Running the salt-call command on each minion manually naturally avoids restarting services on all minions at the same time, as it only affects the minion it is run on. The salt command on the master, however, can be told (given the proper targets) to update all hosts, or only a subset of hosts at a given time.

The most common method of calling a highstate is the following command.

# salt '*' state.highstate

Since the above command runs the highstate on all hosts in parallel this will not work for staggering the update. The below examples will cover how to use the salt command in conjunction with SaltStack features and minion organization practices that allow us to stagger highstate changes.

Batch Mode

When initiating a highstate from the master you can utilize a feature known as batch mode. The --batch-size flag allows you to specify how many minions to run against in parallel. For example, if we have 10 hosts and we want to run a highstate on all 10, but only 5 at a time, we can use the command below.

# salt --batch-size 5 '*' state.highstate

The batch size can also be specified with the -b flag. We could perform the same task with the next command.

# salt -b 5 '*' state.highstate

The above commands tell salt to pick 5 hosts, run a highstate across them, and wait for those to finish before performing the same task on the next 5 hosts, until a highstate has run across all hosts connected to the master.

Specifying a percentage in batch mode

Batch size can take either a number or a percentage. In the same scenario, with 10 hosts and wanting to run a highstate on 5 at a time, rather than giving a batch size of 5 we can give a batch size of 50%.

# salt -b 50% '*' state.highstate

Using unique identifiers like grains, nodegroups, pillars and hostnames

Batch mode picks which hosts to update at random; you may find yourself wanting to upgrade a specific set of minions first. Within SaltStack there are several options for identifying a specific minion, and with some pre-planning in how we organize our minions we can use these identifiers to target specific hosts and control when and how they get updated.

Hostname Conventions

The most basic way to target a server in SaltStack is via the hostname. Choosing a good hostname naming convention is important in general but when you tie in configuration management tools like SaltStack it helps out even more (see this blog post for an example).

Let's take another example where we have 100 hosts and want to split them into 4 groups: group1, group2, group3 and group4. Our hostnames will follow the convention webhost<hostnum>.<group>, so the first host in group 1 would be webhost1.group1.

Now that we have a good naming convention, if we want to roll out our nginx configuration change and restart nginx on these groups one by one, we can do so with the following salt command.

# salt 'webhost*group1*' state.highstate

This command will only run a highstate against hosts whose hostname matches the 'webhost*group1*' pattern, which means that only group1's hosts will be updated by this run of salt.


Nodegroups

Sometimes you may find yourself in a situation where you cannot use the hostname to identify classes of minions and the hostnames can't easily be changed, for whatever reason. If descriptive hostnames are not an option, then one alternative solution is to use nodegroups. Nodegroups are an internal grouping system within SaltStack that lets you target groups of minions by a specified name.

In the example below we are going to create 2 nodegroups for a cluster of 6 webservers.

Defining a nodegroup

On the master server we will define 2 nodegroups, group1 and group2. To add these definitions we will need to change the /etc/salt/master configuration file on the master server.

# vi /etc/salt/master


##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
# nodegroups:
#   group1: 'L@foo.domain.com,bar.domain.com,baz.domain.com and bl*.domain.com'
#   group2: 'G@os:Debian and foo.domain.com'

Modify To:

##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
nodegroups:
  group1: 'L@webhost01.group1,webhost02.group1,webhost03.group1'
  group2: 'L@webhost04.group2,webhost05.group2,webhost06.group2'

After modifying /etc/salt/master we will need to restart the salt-master service.

# /etc/init.d/salt-master restart

Targeting hosts with nodegroups

With our nodegroups defined we can now target each group of minions by passing -N <groupname> to the salt command.

# salt -N group1 state.highstate

The above command will only run the highstate on minions within the group1 nodegroup.


Grains

Defining unique grains is another way of grouping minions. Grains are somewhat like static variables for minions in SaltStack; by default grains contain information such as network configuration, hostnames, device information and OS version. They are set on the minions at start time and do not change, which makes them a great candidate for identifying groups of minions.

To use grains to segregate hosts we must first create a grain that will have different values for each group of hosts. To do this we will create a grain called group; the value of this grain will be either group1 or group2. If we have 10 hosts, 5 of them will be given a value of group1 and the other 5 a value of group2.

There are a couple of ways to set grains: we can edit either the /etc/salt/minion configuration file or the /etc/salt/grains file on the minion servers. I personally like putting grains into the /etc/salt/grains file, and that's what I will show in this example.

Setting grains

To set our group grain we will edit the /etc/salt/grains file.

# vi /etc/salt/grains


group: group1

Since grains are only read when the minion service starts, we will need to restart the salt-minion service.

# /etc/init.d/salt-minion restart

Targeting hosts with grains

Now that our grain is set we can target our groups using the -G flag of the salt command.

# salt -G group:group2 state.highstate

The above command will only run the highstate function on minions where the grain group is set to group2.

Using batch-size and unique identifiers together

At some point, after creating nodegroups and grouping grains you may find that you still want to deploy changes to only a percentage of those minions.

Luckily we can use --batch-size together with nodegroup and grain targeting. Let's say you have 100 webservers split across 4 nodegroups. If you spread the hosts out evenly, each nodegroup has 25 hosts, but this time restarting all 25 at once is not what you want. Rather, you would prefer to restart only 5 hosts at a time; you can do this with batch size and nodegroups.

The command for our example above would look like the following.

# salt -b 5 -N group1 state.highstate

This command will update the group1 nodegroup, 5 minions at a time.

Scheduling updates

The above examples are great for ad-hoc highstates across your minion population, but they only cover highstates pushed manually. By scheduling highstate runs we can make sure that hosts get the proper configuration automatically, without any human interaction, but again we have to be careful with how we schedule these updates. If we simply told each minion to update every 5 minutes, those updates would surely overlap at some point.

Using Scheduler to schedule updates

The SaltStack scheduler system is a great tool for scheduling salt tasks, especially the highstate function. You can configure the scheduler in SaltStack in one of two ways: by appending the configuration to the /etc/salt/minion configuration file on each minion, or by setting the schedule configuration as a pillar for each minion.

Setting the configuration as a pillar is by far the easiest, however the version of SaltStack I am using (0.16) has a bug where setting the scheduler configuration in the pillar does not work. So the example I am going to show uses the first method: we will append the configuration to the /etc/salt/minion configuration file, and we will use SaltStack itself to deploy that file, since we might as well tell SaltStack how to manage itself.
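For reference, when the pillar route does work (in later releases), the schedule is just the same data expressed as pillar YAML. A sketch based on the scheduler documentation, untested on 0.16:

```yaml
# Pillar data assigned to the minions (e.g. via the pillar top file).
schedule:
  highstate:
    function: state.highstate
    minutes: 15
```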

Creating the state file

Before adding the schedule we will need to create the state file to manage the minion config file.

Create a saltminion directory

We will first create a directory called saltminion in /srv/salt which is the default directory for salt states.

# mkdir -p /srv/salt/saltminion

Create the SLS

After creating the saltminion directory we can create the state file for managing the /etc/salt/minion configuration file. By naming the file init.sls we can reference this state as saltminion in the top.sls file.
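For completeness, the top.sls that applies this state to every minion might look like the following (assuming the default base environment):

```yaml
# /srv/salt/top.sls
base:
  '*':
    - saltminion
```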

# vi /srv/salt/saltminion/init.sls


salt-minion:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/salt/minion

/etc/salt/minion:
  file.managed:
    - source: salt://saltminion/minion
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        saltmaster: salt  # replace with your master's address
        {% if "group1" in grains['group'] %}
        timer: 20
        {% else %}
        timer: 15
        {% endif %}

The above state file might look a bit daunting, but it is pretty simple. The first section ensures that the salt-minion service is running and enabled; it also watches the /etc/salt/minion config file, and if that file changes salt will restart the service. The second section is where things get a bit more complicated: it manages the /etc/salt/minion configuration file, and most of it is standard SaltStack configuration management. However, you may have noticed a part that looks a bit different.

{% if "group1" in grains['group'] %}
timer: 20
{% else %}
timer: 15
{% endif %}

The above is an example of using jinja inside of a state file. You can use jinja templating in SaltStack to create complicated statements. This one checks whether the grain "group" is set to group1; if it is, it sets the timer context to 20, and if not, timer defaults to 15.

Create a template minion file

In the above salt state we told SaltStack that the salt://saltminion/minion file is a template, and that it is a jinja template. This tells SaltStack to read the minion file and parse it with the jinja templating language. The items under context are variables passed to jinja while processing the file.

At this point it would probably be a good idea to actually create the template file, to do this we will start with a copy from the master server.

# cp /etc/salt/minion /srv/salt/saltminion/

Once we copy the file into the saltminion directory we will need to add the appropriate jinja markup.

# vi /srv/salt/saltminion/minion

First we will add the saltmaster variable, which is used to tell the minions which master to connect to. In our case this will be replaced with the address of our salt master.


#master: salt

Replace with:

master: {{ saltmaster }}

After adding the master configuration, we can add the scheduler configuration to the same file. We will add the following to the bottom of the minion configuration file.


schedule:
  highstate:
    function: state.highstate
    minutes: {{ timer }}

In the scheduler configuration the timer variable will be replaced with either 15 or 20, depending on the group grain set on the minion. This tells the minion to run a highstate every 15 or 20 minutes, which should give approximately 5 minutes between groups. The timing may need adjustment depending on the environment; when dealing with large numbers of servers you may need to build in a larger gap between the groups' highstates.
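One caveat with the 15- and 20-minute intervals: if both groups happen to start at the same moment, their runs will coincide at every common multiple of the two intervals. A quick Python sketch (purely illustrative) to sanity-check the stagger:

```python
def run_times(interval, horizon):
    """Minutes (from a common start) at which a group kicks off a highstate."""
    return set(range(0, horizon + 1, interval))

group1 = run_times(20, 120)  # group1 highstates every 20 minutes
group2 = run_times(15, 120)  # group2 highstates every 15 minutes

# Minutes at which both groups would start a highstate simultaneously.
collisions = sorted(group1 & group2)
print(collisions)
```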

Deploying the minion config

Now that we have created the minion template file, we will need to deploy it to all of the minions. Since the minions don't yet update automatically, we can run an ad-hoc highstate from the master. Because we are restarting the minion service, we may want to use --batch-size to stagger the updates.

# salt -b 10% '*' state.highstate

The above command will update all minions but only 10% of them at a time.

Using cron on the minions to schedule updates

An alternative to SaltStack's scheduler is cron; the cron service was the standard answer for scheduling highstates before the scheduler system was added to SaltStack. Since we are deploying a configuration to the minions to manage highstates, we can use salt to automate and manage this as well.

Creating the state file

Like with the scheduler option we will create a saltminion directory within the /srv/salt directory.

# mkdir -p /srv/salt/saltminion

Create the SLS file

There are a few ways you can create crontabs in salt, but I personally like just putting a file in /etc/cron.d as it makes the management of the crontab as simple as managing any other file in salt. The below SLS file will deploy a templated file /etc/cron.d/salt-highstate to all of the minions.

# vi /srv/salt/saltminion/init.sls


/etc/cron.d/salt-highstate:
  file.managed:
    - source: salt://saltminion/salt-highstate
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        updategroup: {{ grains['group'] }}

Create the cron template

Again we are using template files and jinja to determine which crontab entry should be used, but this time we are doing it a little differently. Rather than putting the logic in the state file, we put it in the source file salt://saltminion/salt-highstate and simply pass the grains['group'] value to the template file in the state configuration.

# vi /srv/salt/saltminion/salt-highstate


{% if "group1" in grains['group'] %}
*/20 * * * * root /usr/bin/salt-call state.highstate
{% else %}
*/15 * * * * root /usr/bin/salt-call state.highstate
{% endif %}

One advantage of cron over salt's scheduler is that you have a bit more control of when the highstate runs. The scheduler system runs over an interval with the ability to define seconds, minutes, hours or days. Whereas cron gives you that same ability but also allows you to define complex schedules like, "only run every Sunday if it is the 15th day of the month". While that may be a bit overkill for most, some may find that the flexibility of cron makes it easier to avoid both groups updating at the same time.

Using cron on the master to schedule updates with batches

If you want to run your highstates more frequently and avoid situations where everything gets updated at the same time, rather than scheduling updates from the minions you can schedule them from the salt master. By using cron on the master, we can call the same ad-hoc salt commands as above on a scheduled basis. This solution is something of a best-of-both-worlds scenario: it gives you an easy way of automatically updating your hosts in different batches, and it allows you to roll the update out to each group a little at a time.

To do this we can create a simple job in cron, for consistency I am going to use /etc/cron.d but this could be done via the crontab command as well.

# vi /etc/cron.d/salt-highstate


0 * * * * root /usr/bin/salt -b 10% -G group:group1 state.highstate
30 * * * * root /usr/bin/salt -b 10% -G group:group2 state.highstate

The above will run the salt command for group1 at the top of every hour and the salt command for group2 at the 30th minute of every hour. Both commands use a batch size of 10%, which tells salt to update only 10% of the hosts in that group at a time. While this method might have some hosts in group1 still updating when group2 gets started, overall it is fairly safe, as it ensures that the highstate is running on at most 20% of the infrastructure at a time.

One thing I advise is to make sure that you also segregate these highstates by server role. If you have a cluster of 10 webservers and only 2 database servers all split amongst group1 and group2, then with the right timing both databases could be selected for a highstate at the same time. To avoid this you could either make your "group" grains specific to the server roles or set up nodegroups that are specific to server roles.

An example of this would look like the following.

0 * * * * root /usr/bin/salt -b 10% -N webservers1 state.highstate
15 * * * * root /usr/bin/salt -b 10% -N webservers2 state.highstate
30 * * * * root /usr/bin/salt -b 10% -N alldbservers state.highstate
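The webservers1, webservers2 and alldbservers nodegroups used above would need to be defined in /etc/salt/master first. Assuming the hostname convention from earlier (the database hostname pattern here is purely illustrative), the definitions might look something like:

```yaml
nodegroups:
  webservers1: 'webhost*group1*'
  webservers2: 'webhost*group2*'
  alldbservers: 'dbhost*'
```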

This article should give you a pretty good jump start on staggering highstates, or really any other salt function you want to perform. If you have implemented the same thing in another way, I would love to hear about it; feel free to drop your examples in the comments.

Categories: FLOSS Project Planets

Creating jigsaw of image using gimp

Sat, 2014-05-17 08:19
Using GIMP, any photo can easily be made to look like a jigsaw puzzle. Let us take the following image as an example.

Launch gimp and click on File -> Open.


Browse to the location where the image is stored and open it in gimp.

Now click on Filters -> Render -> Pattern -> Jigsaw.


The following window will appear.

The number of tiles option lets us choose the number of horizontal and vertical divisions for the image; the higher these numbers, the more jigsaw pieces will appear on the image.

Bevel width allows us to choose the degree of slope of each piece's edge.

Highlight lets us choose how strongly the pieces should stand out in the image.

The style of the jigsaw pieces can be set to either square or curved.

Click on OK after setting the required options.

The image should appear as below depending on what values were set in the options.
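The same filter can also be driven from GIMP's Script-Fu console (Filters -> Script-Fu -> Console) for batch use. Below is a sketch: the filename is hypothetical, and the plug-in-jigsaw parameter order is taken from the GIMP 2.8 Procedure Browser, so verify it against your version before relying on it.

```scheme
; Load an image, apply the jigsaw filter, and save the result.
; Check plug-in-jigsaw in the Procedure Browser -- the parameter
; order below is an assumption based on the GIMP 2.8 PDB.
(let* ((image (car (gimp-file-load RUN-NONINTERACTIVE "photo.png" "photo.png")))
       (drawable (car (gimp-image-get-active-drawable image))))
  (plug-in-jigsaw RUN-NONINTERACTIVE image drawable
                  5 5   ; number of tiles across and down
                  1     ; style: 0 = square, 1 = curved
                  3     ; blend lines (bevel shading)
                  0.5)  ; blend amount (slope of each piece's edge)
  (gimp-image-flatten image)
  (gimp-file-save RUN-NONINTERACTIVE image
                  (car (gimp-image-get-active-drawable image))
                  "photo-jigsaw.png" "photo-jigsaw"))
```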

Categories: FLOSS Project Planets

Thank You India for making Narendra Modi as Prime Minister of India

Fri, 2014-05-16 22:05
Hello All, Firstly I would like to congratulate Shri Narendra Modi on becoming the next Prime Minister of India. This election was special, tough and very surprising not only for the political parties but also for the people of India. This election was especially tough for the BJP, which was struggling to get into power for the last 10 […]
Categories: FLOSS Project Planets

CiviCRM – Open Constituent Management for Organisations

Thu, 2014-05-15 08:11

For our civic clients, the CRM of choice is CiviCRM – a free constituent management system specifically designed with organisations in mind.

CiviCRM in a nutshell

CiviCRM is essentially a lightweight relations management system, designed to easily integrate with organisations’ existing platforms, such as Drupal, WordPress or Joomla.

Vanilla CiviCRM offers the following list of features out-of-the-box:

  • Contact management
  • Contributions
  • Communications
  • Peer-To-Peer Fundraisers
  • Advocacy Campaigns
  • Events
  • Members
  • Reports
  • Case Management

In addition to this robust set of offerings, the CiviCRM community has developed and published upwards of 100 extensions for its own needs. The full list is available in the CiviCRM extensions directory.

How can my organisation benefit from CiviCRM?

CRM in the traditional sense is focused on serving the needs of paying customers, where the main focus tends to shift towards the money-related aspects of the relationship. This may not be an optimal solution for a non-profit organisation. As a piece of software born out of necessity, CiviCRM is custom built for the special needs of non-profits and organisations.

CiviCRM can be utilised as a monolithic solution for most resource management and communication, or it can be simply put to use as a single-purpose component i.e. as a contact database or a mailing list. The beauty of CiviCRM lies in its easy implementation and expandability.

Being free and open source software, CiviCRM users also enjoy all the benefits of free software such as rapid feature development rate, frequent updates, good documentation and no licensing fees. Click here to learn more about the benefits of open software.

As far as we know, Seravo is currently the only company in Finland with professional CiviCRM experience. We have clients who have benefited from using CiviCRM for the last 5 years. Currently the majority of the Finnish localisation project is authored by our staff.

CiviCRM dashboard in Finnish

Learn more

Visit the CiviCRM website for more information.


We’ve recently published a presentation on CiviCRM in Finnish:

Categories: FLOSS Project Planets

To Serve Users

Wed, 2014-05-14 16:00

(Spoiler alert: spoilers regarding a 1950s science fiction short story that you may not have read appear in this blog post.)

Mitchell Baker announced today that Mozilla Corporation (or maybe Mozilla Foundation? She doesn't really say…) will begin implementing proprietary software by default in Firefox at the behest of wealthy and powerful media companies. Baker argues this serves users: that Orwellian phrasing caught my attention most.

In the old science fiction story, To Serve Man (which later was adapted for the The Twilight Zone), aliens come to earth and freely share various technological advances, and offer free visits to the alien world. Eventually, the narrator, who remains skeptical, begins translating one of their books. The title is innocuous, and even well-meaning: To Serve Man. Only too late does the narrator realize that the book isn't about service to mankind, but rather — a cookbook.

It's in the same spirit that Baker seeks to serve Firefox's users up on a platter to the MPAA, the RIAA, and like-minded wealthy for-profit corporations. Baker's only defense appears to be that other browser vendors have done the same, and cites specifically for-profit companies such as Apple, Google, and Microsoft.

Theoretically speaking, though, the Mozilla Foundation is supposed to be a 501(c)(3) non-profit charity which told the IRS its charitable purpose was: to keep the Internet a universal platform that is accessible by anyone from anywhere, using any computer, and … develop open-source Internet applications. Baker fails to explain how switching Firefox to include proprietary software fits that mission. In fact, with a bit of revisionist history, she says that open source was merely an “approach” that Mozilla Foundation was using, not their mission.

Of course, Mozilla Foundation is actually a thin non-profit shell wrapped around a much larger entity called the Mozilla Corporation, which is a for-profit company. I have always been dubious about this structure, and actions like this that make it obvious that “Mozilla” is focused on being a for-profit company, competing with other for-profit companies, rather than a charity serving the public (at least, in the way that I mean “serving”).

Meanwhile, I greatly appreciate that various Free Software communities maintain forks and/or alternative wrappers around many web browser technologies, which, like Firefox, succumb easily to for-profit corporate control. These efforts (such as Debian's iceweasel fork and GNOME's Epiphany interface to WebKit) provide a nice “canary in the coalmine” to confirm there is enough software-freedom-respecting code still released to make these browsers usable by those who care about software freedom and reject the digital restrictions management that Mozilla now embraces. OTOH, the one item that Baker is right about: given that so few people oppose proprietary software, there soon may not be much of a web left for those of us who stand firmly for software freedom. Sadly, Mozilla announced today their plans to depart from curtailing that dystopia and will instead help accelerate its onset.

Related Links:

Categories: FLOSS Project Planets

Federal Appeals Court Decision in Oracle v. Google

Sat, 2014-05-10 09:33

[ Update on 2014-05-13: If you're more of a listening rather than reading type, you might enjoy the Free as in Freedom oggcast that Karen Sandler and I recorded about this topic. ]

I have a strange relationship with copyright law. Many copyright policies of various jurisdictions, the USA in particular, are draconian at best and downright vindictive at worst. For example, during the public comment period on ACTA, I commented that I think it's always wrong, as a policy matter, for copyright infringement to carry criminal penalties.

That said, much of what I do in my work in the software freedom movement is enforcement of copyleft: assuring that the primary legal tool, which defends the freedom of the Free Software, functions properly, and actually works — in the real world — the way it should.

As I've written about before at great length, copyleft functions primarily because it uses copyright law to stand up and defend the four freedoms. It's commonly called a hack on copyright: turning the copyright system which is canonically used to restrict users' rights, into a system of justice for the equality of users.

However, it's this very activity that leaves me with a weird relationship with copyright. Copyleft uses the restrictive force of copyright in the other direction, but that means the greater the negative force, the more powerful the positive force. So, as I read yesterday the Federal Circuit Appeals Court's decision in Oracle v. Google, I had that strange feeling of simultaneous annoyance and contentment. In this blog post, I attempt to state why I am both glad for and annoyed with the decision.

I stated clearly after Alsup's NDCA decision in this case that I never thought APIs were copyrightable, nor does any developer really think so in practice. But, when considering the appeal, note carefully that the court of appeals wasn't assigned the general job of considering whether APIs are copyrightable. Their job is to figure out if the lower court made an error in judgment in this particular case, and to discern any issues that were missed previously. I think that's what the Federal Circuit Court attempted to do here, and while IMO they too erred regarding a factual issue, I don't think their decision is wholly useless nor categorically incorrect.

Their decision is worth reading in full. I'd also urge anyone who wants to opine on this decision to actually read the whole thing (which so often rarely happens in these situations). I bet most pundits out there opining already didn't read the whole thing. I read the decision as soon as it was announced, and I didn't get this post up until early Saturday morning, because it took that long to read the opinion in detail, go back to other related texts and verify some details and then write down my analysis. So, please, go ahead, read it now before reading this blog post further. My post will still be here when you get back. (And, BTW, don't fall for that self-aggrandizing ballyhoo some lawyers will feed you that only they can understand things like court decisions. In fact, I think programmers are going to have an easier time reading decisions about this topic than lawyers, as the technical facts are highly pertinent.)

Ok, you've read the decision now? Good. Now, I'll tell you what I think in detail: (As always, my opinions on this are my own, IANAL and TINLA and these are my personal thoughts on the question.)

The most interesting thing, IMO, about this decision is that the Court focused on a fact from trial that clearly has more nuance than they realize. Specifically, the Court claims many times in this decision that Google conceded that it copied the declaring code used in the 37 packages verbatim (pg 12 of the Appeals decision).

I suspect the Court imagined the situation too simply: that there was a huge body of source code text, and that Google engineers sat there, simply cutting-and-pasting from Oracle's code right into their own code for each of the 7,000 lines or so of function declarations. However, I've chatted with some people (including Mark J. Wielaard) who are much more deeply embedded in the Free Software Java world than I am, and they pointed out it's highly unlikely anyone did a blatant cut-and-paste job to implement Java's core library API, for various reasons. I thus suspect that Google didn't do it that way either.

So, how did the Appeals Court come to this erroneous conclusion? On page 27 of their decision, they write: Google conceded that it copied it verbatim. Indeed, the district court specifically instructed the jury that ‘Google agrees that it uses the same names and declarations’ in Android. Charge to the Jury at 10. So, I reread page 10 of the final charge to the jury. It actually says something much more verbose and nuanced. I've pasted together below all the parts where the Alsup's jury charge mentions this issue (emphasis mine): Google denies infringing any such copyrighted material … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. … The copyrighted Java platform has more than 37 API packages and so does the accused Android platform. As for the 37 API packages that overlap, Google agrees that it uses the same names and declarations but contends that its line-by-line implementations are different … Google agrees that the structure, sequence and organization of the 37 accused API packages in Android is substantially the same as the structure, sequence and organization of the corresponding 37 API packages in Java. Google states, however, that the elements it has used are not infringing … With respect to the API documentation, Oracle contends Google copied the English-language comments in the registered copyrighted work and moved them over to the documentation for the 37 API packages in Android. Google agrees that there are similarities in the wording but, pointing to differences as well, denies that its documentation is a copy. Google further asserts that the similarities are largely the result of the fact that each API carries out the same functions in both systems.

Thus, in the original trial, Google did not admit to copying of any of Oracle's text, documentation or code (other than the rangeCheck thing, which is moot on the API copyrightability issue). Rather, Google said two separate things: (a) they did not copy any material (other than rangeCheck), and (b) admitted that the names and declarations are the same, not because Google copied those names and declarations from Oracle's own work, but because they perform the same functions. In other words, Google makes various arguments of why those names and declarations look the same, but for reasons other than “mundane cut-and-paste copying from Oracle's copyrighted works”.

For we programmers, this is of course a distinction without any difference. Frankly, programmers, when we look at this situation, we'd make many obvious logical leaps at once. Specifically, we all think APIs in the abstract can't possibly be copyrightable (since that's absurd), and we work backwards from there with some quick thinking, that goes something like this: it doesn't make sense for APIs to be copyrightable because if you explain to me with enough detail what the API has to do, such that I have sufficient information to implement, my declarations of the functions of that API are going to necessarily be quite similar to yours — so much so that it'll be nearly indistinguishable from what those function declarations might look like if I cut-and-pasted them. So, the fact is, if we both sit down separately to implement the same API, well, then we're likely going to have two works that look similar. However, it doesn't mean I copied your work. And, besides, it makes no sense for APIs, as a general concept, to be copyrightable so why are we discussing this again?

But this is reasoning a programmer can love but the Courts hate. The Courts want to take a set of laws the legislature passed, some precedents that their system gave them, along with a specific set of facts, and then see what happens when the law is applied to those facts. Juries, in turn, have the job of finding which facts are accurate, which aren't, and then coming to a verdict, upon receiving instructions about the law from the Court.

And that's right where the confusion began in this case, IMO. The original jury, to start with, likely had trouble distinguishing three distinct things: the general concept of an API, the specification of the API, and the implementation of an API. Plus, they were told by the judge to assume API's were copyrightable anyway. Then, it got more confusing when they looked at two implementations of an API, parts of which looked similar for purely mundane technical reasons, and assumed (incorrectly) that textual copying from one file to another was the only way to get to that same result. Meanwhile, the jury was likely further confused that Google argued various affirmative defenses against copyright infringement in the alternative.

So, what happens with the Appeals Court? The Appeals court, of course, has no reason to believe the finding of fact of the jury is wrong, and it's simply not the appeals court's job to replace the original jury's job, but to analyze the matters of law decided by the lower court. That's why I'm admittedly troubled and downright confused that the ruling from the Appeals court seems to conflate the issue of literal copying of text and similarities in independently developed text. That is a factual issue in any given case, but that question of fact is the central nuance to API copyrightability and it seems the Appeals Court glossed over it. The Appeals Court simply fails to distinguish between literal cut-and-paste copying from a given API's implementation and serendipitous similarities that are likely to happen when two API implementations support the same API.

But that error isn't the interesting part. Of course, this error is a fundamental incorrect assumption by the Appeals Court, and as such the primary ruling are effectively conclusions based on a hypothetical fact pattern and not the actual fact pattern in this case. However, after poring over the decision for hours, it's the only error that I found in the appeals ruling. Thus, setting the fundamental error aside, their ruling has some good parts. For example, I'm rather impressed and swayed by their argument that the lower court misapplied the merger doctrine because it analyzed the situation based on the decisions Google had with regard to functionality, rather than the decisions of Sun/Oracle. To quote: We further find that the district court erred in focusing its merger analysis on the options available to Google at the time of copying. It is well-established that copyrightability and the scope of protectable activity are to be evaluated at the time of creation, not at the time of infringement. … The focus is, therefore, on the options that were available to Sun/Oracle at the time it created the API packages.

Of course, cropping up again in that analysis is the same darned confusion the Court had with regard to copying of the declaration code. The ruling goes on to say: But, as the court acknowledged, nothing prevented Google from writing its own declaring code, along with its own implementing code, to achieve the same result.

To go back to my earlier point, Google likely did write its own declaring code, and that code ended up looking the same as Sun/Oracle's, because there was no other way to implement the same API.
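To illustrate why that happens, here is a hypothetical sketch (the class and method below are my own toy example, not code from the case): any reimplementation of an API must reproduce its method declarations exactly, or code compiled against the original API simply won't work against the replacement. Only the implementation body is free to vary.

```java
// Hypothetical clean-room reimplementation of a toy API method.
// In a real reimplementation the package and class name would also have
// to match the original exactly; "CleanRoomMath" is used here only to
// avoid shadowing java.lang.Math in this sketch.
final class CleanRoomMath {
    private CleanRoomMath() {}

    // This declaration -- name, parameter types, return type, modifiers --
    // is forced to be identical to the original API's declaring code,
    // because callers depend on exactly this signature.
    public static int max(int a, int b) {
        // The implementation body, by contrast, can be written however
        // the reimplementor likes, as long as it honors the contract.
        return (b > a) ? b : a;
    }
}
```

Two programmers who never see each other's code will still write `public static int max(int a, int b)` character-for-character the same, which is exactly the kind of serendipitous similarity at issue here.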

In the end, Mark J. Wielaard put it best when he read the decision, pointing out to me that the Appeals Court seemed almost angry that the jury hung on the fair use question. It reads to me, too, like the Appeals Court is slyly saying: the right affirmative defense for Google here is fair use, and a new jury really needs to sit and look at it.

My conclusion is that this just isn't a decision about the copyrightability of APIs in the general sense. The question the Court would need to consider to actually settle that question would be: “If we believe an API itself isn't copyrightable, but its implementation is, how do we figure out when copyright infringement has occurred when there are multiple implementations of the same API floating around, which of course have declarations that look similar?” But the Court did not consider that fundamental question, because the Court assumed (incorrectly) there was textual cut-and-paste copying. The decision here, in my view, is about a more narrow, hypothetical question that the Court decided to ask itself instead: “If someone textually copies parts of your API implementation, are merger doctrine, scènes à faire, and de minimis affirmative defenses likely to succeed?” In this hypothetical scenario, the Appeals Court claims: “such defenses rarely help you, but a fair use defense might help you”.

However, on this point, in my copyleft-defender role, I don't mind this decision very much. The one thing this decision clearly seems to declare is: “if there is even a modicum of evidence that direct textual copying occurred, then the alleged infringer must pass an extremely high bar of affirmative defense to show infringement didn't occur”. In most GPL violation cases, the facts aren't nuanced: there is always clearly an intention to incorporate and distribute large textual parts of the GPL'd code (i.e., not just a few function declarations). As such, this decision is probably good for copyleft, since on its narrowest reading, this decision upholds the idea that if you go mixing in other copyrighted stuff, via copying and distribution, then it will be difficult to show no copyright infringement occurred.

OTOH, I suspect that most pundits are going to look at this in an overly contrasted way: NDCA said APIs aren't copyrightable, and the Appeals Court said they are. That's not what happened here, and if you look at the situation that way, you're making the same kinds of oversimplifications that the Appeals Court seems to have erroneously made.

The most positive outcome here is that a new jury can now narrowly consider the question of fair use as it relates to serendipitous similarity of multiple API function declaration code. I suspect a fresh jury focused on that narrow question will do a much better job. The previous jury had so many complex issues before them that I suspect those issues were easily conflated. (Recall that the previous jury considered patent questions as well.) I've found that people who haven't spent their lives training (as programmers and lawyers have) to delineate complex matters and separate truly unrelated issues do a poor job of it. Thus, I suspect the jury won't hang the second time if they're considering only the fair use question.

Finally, with regard to this ruling, I suspect this won't become immediate, frequently cited precedent. The case is remanded, so a new jury will first sit down and consider the fair use question. If that jury finds fair use and thus no infringement, Oracle's next appeal will be quite weak, and the Appeals Court likely won't reexamine the question in any detail. In that outcome, very little has changed overall: we'll have certainty that APIs aren't copyrightable, as long as any textual copying that occurs during reimplementation is easily called fair use. By contrast, if the new jury rejects Google's fair use defense, I suspect Google will have to appeal all the way to SCOTUS. It's thus going to be at least two years before anything definitive is decided, and the big winners will be wealthy litigation attorneys — as usual.

0 This is of course true for any sufficiently simple programming task. I used to be a high-school computer science teacher. Frankly, while I was successful twice in detecting student plagiarism, it was pretty easy to get false positives sometimes. And certainly I had plenty of student programmers who wrote their function declarations the same for the same job! And no, those weren't the students who plagiarized.
