By Linux Geeks, For Linux Geeks.

Be Sure to Comment on FCC’s NPRM 14-28

Wed, 2014-06-04 10:20

I remind everyone today, particularly USA citizens, to be sure to comment on the FCC's Notice of Proposed Rulemaking (NPRM) 14-28. They even did a sane thing and provided an email address you can write to rather than using their poorly designed web forms, and PC Magazine published relatively complete instructions for other ways to comment. The deadline isn't for a while yet, but it's worth getting it done now so you don't forget. Below is my letter, in case anyone is interested.

Dear FCC Commissioners,

I am writing in response to NPRM 14-28 — your request for comments regarding the “Open Internet”.

I am a trained computer scientist and I work in the technology industry. (I'm a software developer and software freedom activist.) I have subscribed to home network services since 1989, starting with the Prodigy service and switching to Internet service in 1991. Initially I used a single-pair PSTN modem, and I eventually upgraded to DSL in 1999. I still have a DSL line, but it's sadly not much faster than the one I had in 1999, and I explain why below.

In fact, I've watched the situation get progressively worse, not better, since the Telecommunications Act of 1996. While my download speeds are a little bit faster than they were in the late 1990s, I now pay substantially more for only small increases in upload speeds, even in a major urban market. In short, it's become increasingly difficult to actually purchase true Internet connectivity service anywhere in the USA. But first, let me explain what I mean by “true Internet connectivity”.

The Internet was created as a peer-to-peer medium where all nodes were equal. In the original design of the Internet, every device had its own IP address and, if the user wanted, that device could be addressed directly and fully by any other device on the Internet. For its part, the network in between the two nodes was intended to merely move the packets between those nodes as quickly as possible — treating all those packets the same way, and analyzing those packets only with publicly available algorithms that everyone agreed were correct and fair.

Of course, the companies who typically appeal to (or even fight) the FCC want the true Internet to simply die. They seek to turn the promise of a truly peer-to-peer network of equality into a traditional broadcast medium that they control. They frankly want to manipulate the Internet into a mere television broadcast system (with the only improvement to that being “more stations”).

Because of this, the following three features of the Internet — inherent in its design — are now extremely difficult for individual home users to purchase at reasonable cost from so-called “Internet providers” like Time Warner, Verizon, and Comcast:

  • A static IP address, which allows the user to be a true, equal node on the Internet. (And, related: IPv6 addresses, which could end the claim that static IP addresses are a precious resource.)
  • An unfiltered connection that allows the user to run their own webserver, email server, and the like. (Most of these companies block TCP ports 80 and 25 at the least, and usually many more ports, too.)
  • Reasonable choices between the upload/download speed tradeoff.

For example, in New York, I currently pay nearly $150/month to an independent ISP just to have a static, unfiltered IP address with 10 Mbps down and 2 Mbps up. I work from home, and 2 Mbps up is incredibly slow for modern usage. However, I continue to live with that slowness because upload speeds greater than that are prohibitively expensive from any provider.

In other words, these carriers have designed their networks to prioritize all downloading over all uploading, and to purposely place the user behind many levels of Network Address Translation and network filtering. In this environment, many Internet applications simply do not work (or require complex work-arounds that disable key features). As an example: true diversity in VoIP accessibility and service has almost entirely been superseded by proprietary single-company services (such as Skype) because SIP, designed by the IETF (in part) for VoIP applications, did not fully anticipate that nearly every user would be behind NAT and unable to use SIP without complex work-arounds.

I believe this disastrous situation centers around problems with the Telecommunications Act of 1996. While the ILECs are theoretically required to license network infrastructure fairly at bulk rates to CLECs, I've frequently seen — both professionally and personally — wars waged against CLECs by ILECs. CLECs simply can't offer their own types of services that merely “use” the ILECs' connectivity. The technical restrictions placed by ILECs force CLECs to offer the same style of service the ILEC offers, and at a higher price (to cover their additional overhead in dealing with the ILECs)! It's no wonder there are hardly any CLECs left.

Indeed, in my 25-year career as a technologist, I've seen many nasty tricks by Verizon here in NYC, such as purposeful work slowdowns in the resolution of outages, and Verizon technicians outright lying to me and to CLEC technicians about the state of their network. For my part, I stick with one of the last independent ISPs in NYC, but I suspect they won't be able to keep their business going for long. Verizon either (a) buys up any CLEC that looks too powerful, or (b) if Verizon can't buy them, slowly squeezes them out of business with dirty tricks.

The end result is that we don't have real options for true Internet connectivity for home or on-site business use. I'm already priced out of getting a 10 Mbps upload with a static IP and all ports usable. I suspect that within 5 years, I'll be priced out of my current 2 Mbps upload with a static IP and all ports usable.

I realize the problems most users are concerned about on this issue relate to their ability to download bytes from third-party companies like Netflix. Therefore, it's all too easy for Verizon to frame this argument as if it's big companies vs. big companies.

However, the real fallout from the current system is that the cost of personal Internet connectivity that allows individuals an equal existence on the network is so high that few bother. The consequence, thus, is that only those who are heavily involved in the technology industry even know what types of applications would be available if everyone had a static IP with all ports usable and equal upload and download speeds of 10 Mbps or higher.

Yet, that's the exact promise of network connectivity that I was taught about as an undergraduate in Computer Science in the early 1990s. What I see today is the dystopian version of that promise. My generation of computer scientists has been forced to constrain its designs of Internet-enabled applications to fit a model that the network carriers dictate.

I realize you can't possibly fix all these social ills in the network connectivity industry with one rule-making, but I hope my comments have perhaps given a slightly different perspective of what you'll hear from most of the other commenters on this issue. I thank you for reading my comments and would be delighted to talk further with any of your staff about these issues at your convenience.


Bradley M. Kuhn,
a citizen of the USA since birth, currently living in New York, NY.

Categories: FLOSS Project Planets

A U-Boot Independent Standalone Application

Wed, 2014-06-04 03:45

U-Boot allows you to load your own applications at the console. Because U-Boot has already brought up the hardware, the application has those interfaces available for use and nothing needs to be initialized from scratch.

It comes with a sample hello_world program at u-boot/examples/standalone/hello_world.c, which is supposed to print text to the console. It depends on U-Boot interfaces, but by tracing back through the source code, it can easily be rewritten to have nothing to do with the U-Boot API.

In the end, hello_world.c:printf()’s job is to write characters to the UART’s registers. Implementing this on a BeagleBone Black is pretty easy:

The ARM AM335x TRM lists the address offsets of all registers available on the processor. UART0_BASE is defined at 0x44E09000. Accesses to memory-mapped registers need to go through volatile pointers to keep the compiler from optimizing them away.

Here’s the code:
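The original listing isn't reproduced here, but a minimal sketch along those lines might look like the following. It assumes a 16550-compatible register layout (THR at offset 0x00, LSR at offset 0x14, LSR bit 5 = transmitter empty) — those offsets are my reading of the TRM and should be verified against your silicon. The functions take the register window as a parameter so the logic can also be exercised against a fake buffer off-target.

```c
#include <stdint.h>

#define UART0_BASE   0x44E09000u   /* AM335x UART0 base from the TRM */
#define UART_THR     0x00u         /* transmit holding register */
#define UART_LSR     0x14u         /* line status register */
#define LSR_TX_EMPTY (1u << 5)     /* THR empty: safe to write a byte */

/* Write one character; 'base' is the UART register window. Volatile
 * accesses keep the compiler from caching or dropping the MMIO reads. */
static void uart_putc(volatile uint8_t *base, char c)
{
    while (!(base[UART_LSR] & LSR_TX_EMPTY))
        ;                          /* busy-wait for room in the TX FIFO */
    base[UART_THR] = (uint8_t)c;
}

static void uart_puts(volatile uint8_t *base, const char *s)
{
    while (*s)
        uart_putc(base, *s++);
}

/* On the BeagleBone Black the entry point would simply do:
 *     uart_puts((volatile uint8_t *)UART0_BASE, "Hello, world!\r\n");
 */
```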

Compile it similarly to U-Boot’s examples to get a .bin.

To execute it, you can either: 1) copy the SREC over serial, 2) set up TFTP, or 3) put it on an SD card.

The load_address below is an environment variable that specifies where the application will be loaded. It can be changed when building U-Boot.
For the current build, find it from the console using:

U-Boot# printenv
U-Boot# fatload mmc 0 <load_address> [...]

The address of the application’s entry point can be found by looking at the objdump output. See this if you have larger applications.
Launch it with:

U-Boot# go <entry_point_address>

If you plan to write a purely standalone binary, you are required to initialize the hardware manually and provide a functioning C execution environment. That also requires information about the placement and relocation of the text and data segments, allocation and zeroing of the BSS segment, and placement of the stack. See this and this.
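For instance, the BSS-zeroing step of such a startup routine can be sketched as below. The linker-symbol names in the comment (__bss_start, __bss_end) are the common convention, but they are ultimately defined by your own linker script, so treat them as assumptions:

```c
/* Zero a BSS-style region. In a bare-metal startup routine this would
 * be called before main(), with bounds coming from linker symbols:
 *
 *     extern char __bss_start[], __bss_end[];  // from the linker script
 *     zero_bss(__bss_start, __bss_end);
 */
void zero_bss(char *start, char *end)
{
    while (start < end)
        *start++ = 0;
}
```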

Categories: FLOSS Project Planets

HOWTO – Setup Claws-mail the best way

Tue, 2014-06-03 10:53

When you install claws-mail, you will only see plain text for all the mail you have and will get. You need to load a plugin that will let you show mails as they were intended.

This is how it looks from the beginning:

Open Configuration inside claws-mail, choose Plugins

Click on Load and find the file.

Then click on Close. Open Configuration again, click on Preferences, find Message View and click on Text options.

Make sure you choose “Select the HTML part of Multipart/alternative messages”.

Then in same preferences, go to Plugins and highlight Fancy.

There you can choose “Enable loading of remote content” and “Display images”.

Then click on Apply and Ok.

Now your mails will look normal again.

This is the same mail as the first image in this post.

So now you have a great mail application, and the mails look sane.

To install it in Foresight, open terminal and write:

sudo conary install claws-mail

I also recommend loading the Bogofilter and Notification plugins: Bogofilter will help you against spam, and Notification will notify you with a popup window.

Categories: FLOSS Project Planets

Different ways to get help with Foresight

Tue, 2014-06-03 06:24

There are a few ways to get help with Foresight and with the various issues you might hit.


  • JIRA – Official bug tracker

The problem can be that some users never know when an issue is reported if they don't visit the tracker or subscribe to new entries.


  • IRC – Realtime chatting

A great way to get help right away, but it also depends on users being there and awake.


  • Ask Foresight – Q&A site

A still fairly new way to get help, but it works really well, and it is visible in more places, so users can see that someone asked something. It's easy to follow from an RSS feed too.


  • G+ – Google+ community

Probably the easiest way to get help, as it probably sends a notification to users that have “notify on” inside that group.


  • Forum – Unofficial forum

For those who still want to use a forum and write about all kinds of stuff.


  • Mailinglist – E-mail to our mailinglists

Some users might want to send a mail instead. It's harder to follow for users that don't subscribe to it, though.


We always recommend JIRA in the first place, but you can also post the JIRA link on Ask Foresight or G+ to notify users about it.

For common questions, we recommend Ask Foresight, G+, or the forum.

We usually test packages and talk about development in IRC, mostly in the devel channel. We use the mailinglist to keep track of ongoing development.

The fastest way to get help for non-critical issues is to use G+ or Ask Foresight.

I might be wrong about some of this, but this is my personal feeling about the ways to get help and how quickly we notice a problem/issue.

Categories: FLOSS Project Planets


Mon, 2014-06-02 22:19
Lostnbronx has an idea for a piece of software, but doubts it will ever come into existence.
Categories: FLOSS Project Planets

Tablet vs. Laptop vs. Combo

Fri, 2014-05-30 17:32
So, my latest personal technology debate: which should be the next system that I invest in for my family members' use? Let me state that if I had more money than sense, I would invest in Apple products across the board for everyone (4 of us), but that option is way too expensive. Also, the two children would be taking theirs to public school, and a loss of that kind of value would be crushing.

My primary goal is for these new devices to replace older machines and multiple under-powered devices, in an attempt to consolidate the kids onto a specific device/platform on which they can accomplish school work. That choice should be more than adequate for their mother, who does email and navigation mostly, and for myself something grander than a smartphone, though it would clearly be under-powered for virtualization or the other purposes that require a modern desktop-performance system.

Gaming is not coming into the equation at all, which also drops some of the performance requirements. I can add that I have an affinity for Linux over all other operating systems for the management of security. As I have mentioned before, Apple would be my mainstay if it were not so expensive, so I do not shy away from the most productive choice just because it is not my ideal. In this comparison there are devices running Win8, Android, and basic systems I would install Linux on.

So I started a quest with a budget. Since that is probably the most limiting factor, let's get it out of the way: €400-500 per person is my target. This puts me in the high-end tablet range or the low-end laptop range. My first inclination was to go toward a tablet, so that it would be as portable as possible with the longest battery life. But I personally need to be able to use a hardware keyboard, and while I know that a Bluetooth keyboard can be added to almost any device, I didn't want yet another thing to try and keep charged separately. All this has gotten me down to looking closely at the following devices.

Asus Transformer Pad TF701T (Android 4.3)
Microsoft Surface 2 (Win8 RT)
Samsung Galaxy Note 10.1 (2014 edition) (Android 4.4)
Lenovo Miix 2 10  (Win8.1)
Asus Transformer Book T100 (Win8.1)

The review points I used were apps, operating system, size, ports, and expandability. Some software limitations (like the lack of a proper office suite for Android, and Windows 8 RT being limited to the Windows Store) leaned me toward the new entries in the "convertible" space: the Asus Transformer Book T100 and the Lenovo Miix 2 10. With very similar specs, I've made a short list of the actual differences.

Lenovo Miix 2 10 (Win8.1)
+ Full HD screen
+ rear camera
+ larger keyboard keys
- non-standard micro-USB charger
- USB 2.0 only
- keyboard number keys out of alignment
- magnetic attach, with 3 fixed positions (no flexible hinge)

Asus Transformer Book T100 (Win8.1)
- HD (not Full HD) screen
- keyboard at ~95% size, like a netbook
- no rear camera
+ physical lock of screen to dock
+ keyboard number keys aligned

I had hoped to find a device that also had built-in 3G. It would seem that devices in my price range rarely include that module.

The Asus came in a bit cheaper, but not really by enough to sway my decision. So, based on these few differences between the two systems, I chose the Transformer Book T100 on these points:

  1. USB 3.0. No matter how much I try, I will never be able to turn a USB 2.0 port into a USB 3.0 port.
  2. I found a cover for the T100 that can let it function in all the best tablet modes (stand horizontal and landscape) that does NOT block the normal keyboard dock section of the tablet. Coodio Smart Asus Transformer Book T100TA.
  3. Linux adaptation has already started.
  4. Physical latching mechanism for the keyboard to attach.
I'll be getting the devices very soon, as I ordered them just recently. I've been watching Win8 "howto" videos on YouTube to bone up on the UI, as I have not used a Win8 machine as a daily driver yet. I'll post back here again after they arrive to follow up on my impressions of the machines after some use.

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 United States License.
Categories: FLOSS Project Planets

Latest Firefox versions available

Thu, 2014-05-29 09:59

I’ve been building Firefox 29.0.1 and 30.0b8 in the foresighters repo.

To install any of them, open a terminal and write:

32bit:

sudo conary install

or for beta:

sudo conary install

64bit:

sudo conary install

or for beta:

sudo conary install
Categories: FLOSS Project Planets

Configure and connect tata photon plus in debian 7

Tue, 2014-05-27 04:12
The 3G dongle of Tata Photon+ (Huawei EC1260) might not work by default in Debian 7 and other Linux distros. Here is how to get it working.

The first step is to install the following packages, if not already installed:

usb-modeswitch usb-modeswitch-data

sudo apt-get install usb-modeswitch usb-modeswitch-data

Once the two packages have been installed successfully, we next need to install wvdial:

sudo apt-get install wvdial

After installing the package, insert the dongle.

Now open a terminal and type the command

$ sudo wvdialconf /etc/wvdial.conf

We will see an output as below

Editing `/etc/wvdial.conf'.

Scanning your serial ports for a modem.

Modem Port Scan<*1>: S0 S1 S2 S3
ttyUSB0<*1>: ATQ0 V1 E1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB0<*1>: ATQ0 V1 E1 S0=0 -- ATQ0 V1 E1 S0=0
ttyUSB0<*1>: ATQ0 V1 E1 &C1 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 -- OK
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- OK
ttyUSB0<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD
ttyUSB0<*1>: Speed 9600: AT -- OK
ttyUSB0<*1>: Max speed is 9600; that should be safe.
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- ATQ0 V1 E1 &C1 &D2 +FCLASS=0
ttyUSB0<*1>: failed with 9600 baud, next try: 9600 baud
ttyUSB0<*1>: ATQ0 V1 E1 &C1 &D2 +FCLASS=0 -- OK
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud
ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up.
ttyUSB2<*1>: ATQ0 V1 E1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD
ttyUSB2<*1>: Speed 9600: AT -- OK
ttyUSB2<*1>: Max speed is 9600; that should be safe.
ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK
Found a modem on /dev/ttyUSB0.
Modem configuration written to /etc/wvdial.conf.
ttyUSB0: Speed 9600; init "ATQ0 V1 E1 &C1 &D2 +FCLASS=0"
ttyUSB2: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0"

Near the end of the output, the line “Found a modem on /dev/ttyUSB0” shows that our dongle was recognized as the device /dev/ttyUSB0.

Next we need to configure the wvdial application to let it use the dongle to connect to the internet.

To do this, open the wvdial.conf file located at /etc/wvdial.conf:

$ sudo gedit /etc/wvdial.conf

By default the file will look some thing like below

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
ISDN = 0
New PPPD = yes
; Phone = <Target phone number>
Modem = /dev/ttyUSB0
; Username =
; Password =
Baud = 9600

Delete the ";" at the beginning of the "Phone", "Username", and "Password" lines, and replace those three lines with:

Phone = #777
Username = internet
Password = internet

Then add the following lines to the file at the end.

Init3 = AT+CRM=1
Stupid Mode = 1

The edited wvdial.conf looks as below.

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 &C1 &D2 +FCLASS=0
Modem Type = Analog Modem
ISDN = 0
New PPPD = yes
Phone = #777
Modem = /dev/ttyUSB0
Username = internet
Password = internet
Baud = 9600
Init3 = AT+CRM=1
Stupid Mode = 1

Save the file and close it. Now go back to the terminal and type the command

$ wvdial
--> WvDial: Internet dialer version 1.61
--> Initializing modem.
--> Sending: ATZ
ATZ
OK
--> Sending: ATQ0 V1 E1 &C1 &D2 +FCLASS=0
ATQ0 V1 E1 &C1 &D2 +FCLASS=0
OK
--> Sending: AT+CRM=1
AT+CRM=1
OK
--> Modem initialized.
--> Sending: ATDT#777
--> Waiting for carrier.
ATDT#777
CONNECT
^HRSSILVL:80
~[7f]}#@!}!})} }9}"}&} } } } }#}%B#}%}%}&@}*vO}'}"}(}"m.~
--> Carrier detected. Starting PPP immediately.
--> Starting pppd at Tue May 27 14:06:08 2014
--> Pid of pppd: 13443
--> Using interface ppp0
--> local IP address
--> remote IP address
--> primary DNS address
--> secondary DNS address

Now open a browser and start browsing.

Note that before running the wvdial command in the last step, you should turn off mobile broadband in the GUI.
Categories: FLOSS Project Planets

Transferring a .bin from openSUSE to U-Boot, or, Rightly Configuring TFTP on openSUSE

Mon, 2014-05-26 12:37

After spending a *lot* of time figuring out why I could not transfer a standalone binary to u-boot running on my BeagleBone Black, I finally discovered it was a firewall issue. This post is to save anyone in the future from suffering the same nightmare as I just went through on openSUSE 13.1.

The Problem:

I needed to put a .bin on my BBB which has U-Boot. The available options are:

Transferring the S-Record

SREC is a hex encoding of the binary data generated on compilation. To load it, U-Boot provides the `loads` command at its console; you just need to pass the ASCII-hex data from the .srec to the serial console (see this). The problem is that the speed of sending this data must be acceptable to the U-Boot console. It gave me a `data abort` message and my board reset.

Using TFTP

The better option: tftp. Set up static IPs for the host and the board (set the env vars ipaddr and serverip in U-Boot) and call tftp. It gave me this:

U-Boot# tftp
link up on port 0, speed 100, full duplex
Using cpsw device
TFTP from server; our IP address is
Filename 'hello_world.bin'.
Load address: 0x80200000
Loading: T T T T T T T T T T T T T T T
Abort

(T = timeout.)


The Fix:

TFTP uses UDP port 69 for transfers. I needed to explicitly check “Open Port in Firewall” in the TFTP server configuration in YaST and add port 69 to Firewall->Allowed Services->UDP Ports.
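If you prefer editing the configuration directly instead of going through YaST, the equivalent change (to the best of my knowledge of SuSEfirewall2 on openSUSE 13.1 — verify the variable name on your release) lives in /etc/sysconfig/SuSEfirewall2:

```
# /etc/sysconfig/SuSEfirewall2: allow incoming TFTP (UDP port 69)
# on the external zone
FW_SERVICES_EXT_UDP="69"
```

Then restart the firewall with `systemctl restart SuSEfirewall2` (or `rcSuSEfirewall2 restart` on older setups).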


Categories: FLOSS Project Planets

X-Loader / MLO With a Custom Payload

Sun, 2014-05-25 06:34

X-Loader (MLO), built from u-boot/common/spl/spl.c and responsible for eMMC initialization and executing the u-boot binary, first parses the image header info (u-boot/common/spl/spl.c:spl_parse_image_header), which effectively does this:

if (magic number of header == IH_MAGIC) {
        set spl_image to the detected image
} else {
        fill in spl_image assuming u-boot.bin
}
...
call spl_image->entry_point()

IH_MAGIC is set to 0x27051956, the default magic number used when creating an image with mkimage. Such an image can be called from within U-Boot at the command line. By default, the SPL assumes a `uImage` payload and, if one is not found, tries to launch u-boot.
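As a concrete illustration, the magic-number check boils down to reading the first big-endian 32-bit word of the image and comparing it with IH_MAGIC. This is a simplified host-side sketch, not the actual SPL code (the real mkimage header has many more fields plus a CRC):

```c
#include <stdint.h>

#define IH_MAGIC 0x27051956u   /* default mkimage magic number */

/* mkimage stores header fields big-endian; the magic is the very
 * first 32-bit word of the image. */
static uint32_t read_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Mirrors the spl_parse_image_header() decision: 1 = treat the buffer
 * as a mkimage-wrapped payload, 0 = fall back to assuming u-boot.bin. */
int has_uimage_header(const uint8_t *buf)
{
    return read_be32(buf) == IH_MAGIC;
}
```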

Categories: FLOSS Project Planets

error: function declaration isn’t a prototype [-Werror=strict-prototypes]

Sun, 2014-05-25 00:55
While compiling kernel modules, or any C program, we might at times come across the error:

error: function declaration isn’t a prototype [-Werror=strict-prototypes]

This error is being shown for the function declaration

void hello();

Though the syntax looks fine, the compiler expects us to pass "void" explicitly when we are not passing any arguments to the function. That is, we need to modify the above declaration to:

void hello(void);

After the above change, we should not get the above error.
Categories: FLOSS Project Planets

Setup Redis Failover with Redis Sentinel

Fri, 2014-05-23 16:23

Recently I’ve been playing with redis as an alternative to memcached for one project. One of my main requirements was a good failover solution for a key-value memory database. There are many ways to do it: from code with memcached (doing get/set and checking the availability of the servers), or better, using the repcached patch for memcached. The first is not a clean solution at all, and I was not very convinced by repcached. After getting more involved in all the features redis offers, one interesting feature is persistence on disk: redis stores all the in-memory key/value pairs on disk, so in case of failure, after recovery you get the data from the last in-memory snapshot. Note that the store to disk is not performed on the fly, so you can lose some data; redis offers different kinds of setup for storing to disk, and it's important to understand how they work. In any case, remember you're working with an in-memory key-value cache, so it's not the solution for persisting critical data. A good read that I recommend to understand how persistence works:

Another interesting feature that I really appreciate in this solution is the possibility to work with different data structures like lists, hashes, sets, and sorted sets. You get more flexibility to store different values in the cache under the same key, with support for native data types provided by the client library of your programming language. You can take a look here to check the different data structures used in redis:

After this small introduction to why I chose redis, I'll now talk about the failover solution. Redis supports master-slave asynchronous replication, and Sentinel provides the failover. Sentinel has shipped since redis version 2.6, but the project documentation recommends the version shipped with redis 2.8, which received very important enhancements. Sentinel is a distributed system: the different processes communicate with each other via messages, use election protocols to choose a new master, and inform clients of the address of the cluster's current master.

We’ll run sentinel on our systems as a separate daemon listening on a different port; it will communicate with the other sentinels set up on the cluster to alert on a node failure and choose a new master. Sentinel will change the configuration files of our servers to attach a recovered node to the cluster (set up as a slave) or to promote a slave to master. The basic process for choosing a new master is this:

1.- One sentinel node detects a server failure in the cluster within the number of milliseconds set by the directive “down-after-milliseconds”. At this moment the sentinel node marks the instance as subjectively down (SDOWN).

2.- When enough sentinels agree about the master failure, the node is marked as objectively down (ODOWN), and the failover is triggered. The number of sentinels required is configured per master.

3.- Even after the failover is triggered, it's still not enough to perform it: the failover is subject to a quorum process, and at least a majority of Sentinels must authorize the Sentinel to fail over.

Basically, we'll need a minimum of three nodes in our cluster to set up our redis failover solution. In my case I chose two redis servers (master & slave), both running sentinel, and a third node running just sentinel for the quorum process. For more information about the failover process and sentinel, you can check the official documentation:

After these basic tips about how redis & sentinel work, we can begin with the setup. For this environment I used a total of three servers running Ubuntu 14.04. All I needed to do was install redis-server from the repositories. Note that if you're using another GNU/Linux distribution or an older Ubuntu version, you'll need to compile and install by hand.

- Setup for redis sentinels (nodes 1,2,3) /etc/redis/sentinel.conf:

# port <sentinel-port>
# The port that this sentinel instance will run on
port 26379
daemonize yes
pidfile /var/run/redis/
loglevel notice
logfile /var/log/redis/redis-sentinel.log

# Master setup
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
# Minimum of two sentinels to declare an ODOWN
sentinel monitor mymaster 6379 2
# sentinel down-after-milliseconds <master-name> <milliseconds>
sentinel down-after-milliseconds mymaster 5000
# sentinel failover-timeout <master-name> <milliseconds>
sentinel failover-timeout mymaster 900000
# sentinel parallel-syncs <master-name> <numslaves>
sentinel parallel-syncs mymaster 1

# Slave setup
sentinel monitor resque 6379 2
sentinel down-after-milliseconds resque 5000
sentinel failover-timeout resque 900000
sentinel parallel-syncs resque 4

- Create init scripts for sentinels (nodes 1,2,3) /etc/init.d/redis-sentinel:

#! /bin/sh
### BEGIN INIT INFO
# Provides:          redis-sentinel
# Required-Start:    $syslog $remote_fs
# Required-Stop:     $syslog $remote_fs
# Should-Start:      $local_fs
# Should-Stop:       $local_fs
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: redis-sentinel - Persistent key-value db
# Description:       redis-sentinel - Persistent key-value db
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DAEMON=/usr/bin/redis-sentinel
DAEMON_ARGS=/etc/redis/sentinel.conf
NAME=redis-sentinel
DESC=redis-sentinel
RUNDIR=/var/run/redis
PIDFILE=$RUNDIR/

test -x $DAEMON || exit 0

if [ -r /etc/default/$NAME ]
then
	. /etc/default/$NAME
fi

. /lib/lsb/init-functions

set -e

case "$1" in
  start)
	echo -n "Starting $DESC: "
	mkdir -p $RUNDIR
	touch $PIDFILE
	chown redis:redis $RUNDIR $PIDFILE
	chmod 755 $RUNDIR
	if [ -n "$ULIMIT" ]
	then
		ulimit -n $ULIMIT
	fi
	if start-stop-daemon --start --quiet --umask 007 --pidfile $PIDFILE --chuid redis:redis --exec $DAEMON -- $DAEMON_ARGS
	then
		echo "$NAME."
	else
		echo "failed"
	fi
	;;
  stop)
	echo -n "Stopping $DESC: "
	if start-stop-daemon --stop --retry forever/TERM/1 --quiet --oknodo --pidfile $PIDFILE --exec $DAEMON
	then
		echo "$NAME."
	else
		echo "failed"
	fi
	rm -f $PIDFILE
	sleep 1
	;;
  restart|force-reload)
	${0} stop
	${0} start
	;;
  status)
	echo -n "$DESC is "
	if start-stop-daemon --stop --quiet --signal 0 --name ${NAME} --pidfile ${PIDFILE}
	then
		echo "running"
	else
		echo "not running"
		exit 1
	fi
	;;
  *)
	echo "Usage: /etc/init.d/$NAME {start|stop|restart|force-reload|status}" >&2
	exit 1
	;;
esac

exit 0

- Give execution permission on the script:

# chmod +x /etc/init.d/redis-sentinel

- Start the script automatically at boot time:

# update-rc.d redis-sentinel defaults

- Change owner & group for /etc/redis/ to allow sentinel change the configuration files:

# chown -R redis:redis /etc/redis/

- On node 3 I’ll not use redis-server, so I can remove the init script:

# update-rc.d redis-server remove

- Edit the configuration of the redis server on nodes 1 and 2 (/etc/redis/redis.conf) with the proper setup for your project. The only requirement for working with sentinel is to set the proper IP address in the bind directive. All the directives are commented in the file and are very clear, so take your time to adapt redis to your project.

- Connecting to our redis cluster:

Now we have our redis cluster ready to store our data. In my case I work with Perl, and currently I'm using this library: which you can install using the cpan tool. Note that the version coming from the Ubuntu repositories (libredis-perl) is quite old and doesn't implement the sentinel interface, so it's better to install the module from cpan.

So, to connect to our cluster as documented in the client library, I used the following call:

my $cache = Redis->new(sentinels => [ "redis1:26379", "redis2:26379", "node3:26379" ], service => 'mymaster' );

Basically, the library will try to connect to the different sentinel servers and obtain the address of the current master redis server, which is where our system will store and retrieve its data.

Another solution, instead of connecting to the different sentinel servers from our scripts, is to use HAProxy as a single connection point for our clients. HAProxy can check each of the redis servers for the string “role:master” and redirect all requests to that server.
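As a sketch of that approach (the listen address, server names and addresses below are hypothetical, and the tcp-check sequence assumes HAProxy 1.5+), the health check could look like this:

```
listen redis-master
    bind *:6379
    mode tcp
    option tcp-check
    tcp-check send PING\r\n
    tcp-check expect string +PONG
    tcp-check send info\ replication\r\n
    tcp-check expect string role:master
    tcp-check send QUIT\r\n
    tcp-check expect string +OK
    server redis1 redis1:6379 check inter 1s
    server redis2 redis2:6379 check inter 1s
```

Clients then connect to HAProxy's single address, and traffic only reaches the server currently reporting role:master.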

Take a look at the client library documentation for the programming language used in your project. The clients currently supported by redis are listed here:


Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 16: Forgotten to be Right

Thu, 2014-05-22 11:25

From the Bad Voltage site:

Myself, Bryan Lunduke, Jono Bacon, and Stuart Langridge present Bad Voltage, the greatest podcast in the history of this or any other universe. In this episode:

Listen to: 1×16: Forgotten to be Right

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LQ, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.


Categories: FLOSS Project Planets

How to launch .jar files using nautilus

Wed, 2014-05-21 14:52

This is not a nautilus-specific issue; the same approach works with other tools as well (other file managers, xdg-open on the command line, etc.).

Create a run-jar.desktop in your ~/.local/share/applications/ directory with the following content:

[Desktop Entry]
Encoding=UTF-8
Type=Application
Exec=java -jar %f
Icon=java
Name=run-jar
Name[zh_CN]=run-jar
Comment=Run the jar file
Comment[zh_CN]=运行 JAR 文件

Now when you open the file’s property dialog and go to open with tab, you can see run-jar mentioned in ‘show more app’.

To make run-jar the default action, use nautilus’ ‘set default’ button or type the following command in a terminal:

xdg-mime default run-jar.desktop application/x-java-archive

The mime type can be found with the command:

xdg-mime query filetype my_shiny_app.jar


There are other ways to do this too, such as creating a nautilus script, but this feels like a cleaner approach.

Categories: FLOSS Project Planets

The difference between an ‘akmod’ and ‘kmod’

Wed, 2014-05-21 14:03


A ‘kmod’ (kernel driver module) is the pre-compiled, low-level software interface between the kernel and a driver. It gets loaded (into RAM) and merged into the running kernel. Linux kmods are specific to one and only one kernel, and will not work (nor even load) for any other kernel.

Advantages: Pre-Compiled – no need to fool around with compiling, compilers, *-devel packages and other associated overhead.

Disadvantages: updating and re-booting into a new kernel without updating the kmod(s) will result in loss of functionality and inherent delays in updating kmods after kernel updates.

akmods (similar to dkms) are a solution to the problem of some kernel modules depending on specific versions of a kernel. As you start your computer, the akmod system will check whether any kmods are missing and, if so, build a new kmod for you. Akmods have more overhead than regular kmod packages, as they require a few development tools, such as gcc and automake, in order to build new kmods locally. If you think you’d like to try akmods, simply replace the kmod package with its akmod counterpart.

With akmod you don’t have to worry about kernel updates as it recreates the driver for the new kernel on boot. With kmod you have to wait until a matching kmod is available before installing the kernel update.

Advantages: obvious.

Disadvantages: HDD space required for compilers and *-devel packages; unforeseen/uncorrectable driver problems that cannot be resolved by the automatic tools.

Categories: FLOSS Project Planets


Tue, 2014-05-20 04:52
Lostnbronx breaks a bottle of beer, and contemplates one's duty to oneself.
Categories: FLOSS Project Planets

Anyone interested to get updates for FL:2-devel ?

Mon, 2014-05-19 10:44

Hello all Foresight users.

I wonder if anyone is still interested in getting updates to the current fl:2-devel repo? If so, leave a tiny comment and I will put in some time to update some regular packages in the near future.

As we are all waiting for F20, we are not sure how many users are left on the latest Foresight today…

Categories: FLOSS Project Planets

Using Saltstack to update all hosts, but not at the same time

Mon, 2014-05-19 00:20

Configuration management and automation tools like SaltStack are great: they allow us to deploy a configuration change to thousands of servers without much effort. However, while these tools are powerful and give us greater control of our environment, they can also be dangerous. Since you can roll out a configuration change to all of your servers at once, it is easy for that change to break all of your servers at once.

In today's article I am going to show a few ways you can run a SaltStack "highstate" across your environment, and how you can make those highstate changes a little safer by staggering when servers get updated.

Why stagger highstate runs

Let's imagine for a second that we are running a cluster of 100 webservers. For this webserver cluster we are using the following nginx state file to maintain our configuration, ensure that the nginx package is installed and the service is running.

nginx:
  pkg:
    - installed
  service:
    - running
    - watch:
      - pkg: nginx
      - file: /etc/nginx/nginx.conf
      - file: /etc/nginx/conf.d
      - file: /etc/nginx/globals

/etc/nginx/globals:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/globals
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

/etc/nginx/nginx.conf:
  file.managed:
    - source: salt://nginx/config/etc/nginx/nginx.conf
    - user: root
    - group: root
    - mode: 640

/etc/nginx/conf.d:
  file.recurse:
    - source: salt://nginx/config/etc/nginx/conf.d
    - user: root
    - group: root
    - file_mode: 644
    - dir_mode: 755
    - include_empty: True
    - makedirs: True

Now let's say you need to deploy a change to the nginx.conf configuration file. Making the change is pretty straightforward: we can simply change the source file on the master server and use salt to deploy it. Since we listed the nginx.conf file as a watched state, SaltStack will also restart the nginx service for us after changing the config file.

To deploy this change to all of our servers we can run a highstate from the master that targets every server.

# salt '*' state.highstate

One of SaltStack's strengths is the fact that it performs tasks in parallel across many minions. While that is a useful feature for performance, it can be a bit of a problem when running a highstate that restarts services across all of your minions.

The above command will deploy the configuration file to each server and restart nginx on all servers, effectively bringing down nginx on all hosts at the same time. Even if it is only for a second, that restart is probably going to be noticed by your end users.

To avoid situations that bring down a service across all of our hosts at the same time, we can stagger when hosts are updated.

Staggering highstates

Ad-hoc highstates from the master

Initiating highstates is usually performed either ad-hoc or via a scheduled task. There are two ways to initiate an ad-hoc highstate: via the salt-call command on the minion, or via the salt command on the master. Running the salt-call command on each minion manually naturally avoids the possibility of restarting services on all minions at the same time, as it only affects the minion it is run from. The salt command on the master, however, can be told to update all hosts, or only a subset of hosts at a given time, given the proper targets.

The most common method of calling a highstate is the following command.

# salt '*' state.highstate

Since the above command runs the highstate on all hosts in parallel this will not work for staggering the update. The below examples will cover how to use the salt command in conjunction with SaltStack features and minion organization practices that allow us to stagger highstate changes.

Batch Mode

When initiating a highstate from the master you can utilize a feature known as batch mode. The --batch-size flag allows you to specify how many minions to run against in parallel. For example, if we have 10 hosts and we want to run a highstate on all 10, but only 5 at a time, we can use the command below.

# salt --batch-size 5 '*' state.highstate

The batch size can also be specified with the -b flag. We could perform the same task with the next command.

# salt -b 5 '*' state.highstate

The above commands will tell salt to pick 5 hosts, run a highstate across those hosts and wait for them to finish before performing the same task on the next 5 hosts until it has run a highstate across all hosts connected to the master.
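The wait-for-each-batch behavior can be hard to picture; as a loose local analogy (plain xargs, nothing salt-specific, with made-up hostnames), each command invocation below stands in for one batch of 5 minions:

```shell
# Ten "hosts", processed 5 at a time; xargs waits for each
# invocation of echo to finish before starting the next one.
printf 'host%s\n' 1 2 3 4 5 6 7 8 9 10 | xargs -n 5 echo batch:
# prints:
# batch: host1 host2 host3 host4 host5
# batch: host6 host7 host8 host9 host10
```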

Specifying a percentage in batch mode

Batch size can take either a number or a percentage. Given the same scenario, if we have 10 hosts and we want to run a highstate on 5 at a time. Rather than giving the batch size of 5 we can give a batch size of 50%.

# salt -b 50% '*' state.highstate

Using unique identifiers like grains, nodegroups, pillars and hostnames

Batch mode picks which hosts to update at random; you may find yourself wanting to upgrade a specific set of minions first. Within SaltStack there are several options for identifying a specific minion, and with some pre-planning on the organization of our minions we can use these identifiers to target specific hosts and control when/how they get updated.

Hostname Conventions

The most basic way to target a server in SaltStack is via the hostname. Choosing a good hostname naming convention is important in general but when you tie in configuration management tools like SaltStack it helps out even more (see this blog post for an example).

Let's take another example where we have 100 hosts and we want to split them into 4 groups: group1, group2, group3 and group4. Our hostnames will follow the convention webhost<hostnum>.<group>, so the first host in group 1 would be webhost1.group1.

Now that we have a good naming convention, if we want to roll out our nginx configuration change and restart to these groups one by one, we can do so with the following salt command.

# salt 'webhost*group1*' state.highstate

This command will only run a highstate against hosts whose hostname matches the 'webhost*group1*' pattern, which means that only group1's hosts will be updated by this run of salt.


Nodegroups

Sometimes you may find yourself in a situation where you cannot use the hostname to identify classes of minions, and the hostnames can't easily be changed for whatever reason. If descriptive hostnames are not an option, then one alternative is to use nodegroups. Nodegroups are an internal grouping system within SaltStack that lets you target groups of minions by a specified name.

In the example below we are going to create 2 nodegroups for a cluster of 6 webservers.

Defining a nodegroup

On the master server we will define 2 nodegroups, group1 and group2. To add these definitions we will need to change the /etc/salt/master configuration file on the master server.

# vi /etc/salt/master


##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
# group1: ',, and bl*'
# group2: 'G@os:Debian and'

Modify To:

##### Node Groups #####
##########################################
# Node groups allow for logical groupings of minion nodes.
# A group consists of a group name and a compound target.
#
group1: ', and'
group2: ', and'
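For reference, a complete definition needs real minion hostnames in a compound target; a sketch with made-up hostnames (web1.example.com and friends are placeholders) could look like this, using salt's L@ list matcher under the nodegroups key:

```
nodegroups:
  group1: 'L@web1.example.com,web2.example.com,web3.example.com'
  group2: 'L@web4.example.com,web5.example.com,web6.example.com'
```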

After modifying the /etc/salt/master we will need to restart the salt-master service

# /etc/init.d/salt-master restart

Targeting hosts with nodegroups

With our nodegroups defined, we can now target our groups of minions by passing the -N <groupname> argument to the salt command.

# salt -N group1 state.highstate

The above command will only run the highstate on minions within the group1 nodegroup.


Grains

Defining unique grains is another way of grouping minions. Grains are like static variables for minions in SaltStack; by default, grains contain information such as network configuration, hostnames, device information and OS version. They are set when the minion starts and do not change, which makes them a great candidate for identifying groups of minions.

To use grains to segregate hosts, we must first create a grain that has a different value for each group of hosts. To do this we will create a grain called group; its value will be either group1 or group2. If we have 10 hosts, 5 of those hosts will be given the value group1 and the other 5 the value group2.

There are a couple of ways to set grains: we can edit either the /etc/salt/minion configuration file or the /etc/salt/grains file on the minion servers. I personally like putting grains into the /etc/salt/grains file, and that's what I will show in this example.

Setting grains

To set our group grain we will edit the /etc/salt/grains file.

# vi /etc/salt/grains


group: group1

Since grains are only set when the minion service starts, we will need to restart the salt-minion service.

# /etc/init.d/salt-minion restart

Targeting hosts with grains

Now that our grain is set we can target our groups using the -G flag of the salt command.

# salt -G group:group2 state.highstate

The above command will only run the highstate function on minions where the grain group is set to group2.

Using batch-size and unique identifiers together

At some point, after creating nodegroups and group grains, you may find that you still want to deploy changes to only a percentage of those minions.

Luckily, we can use --batch-size together with nodegroup and grain targeting. Let's say you have 100 webservers split across 4 nodegroups. If you spread the hosts evenly, each nodegroup will contain 25 hosts, but this time restarting all 25 hosts at once is not what you want; you would prefer to restart only 5 hosts at a time. You can do this with batch size and nodegroups.

The command for our example above would look like the following.

# salt -b 5 -N group1 state.highstate

This command will update the group1 nodegroup, 5 minions at a time.

Scheduling updates

The above examples are great for ad-hoc highstates across your minion population, but they only cover highstates pushed manually. By scheduling highstate runs, we can make sure that hosts get the proper configuration automatically, without any human interaction; but again, we have to be careful with how we schedule these updates. If we simply told each minion to update every 5 minutes, those updates would surely overlap at some point.

Using Scheduler to schedule updates

The SaltStack scheduler system is a great tool for scheduling salt tasks, especially the highstate function. You can configure the scheduler in SaltStack in two ways: by appending the configuration to the /etc/salt/minion configuration file on each minion, or by setting the schedule configuration as a pillar for each minion.

Setting the configuration as a pillar is by far the easiest, however the version of SaltStack I am using (0.16) has a bug where setting the scheduler configuration in the pillar does not work. So I am going to show the first method: we will append the configuration to the /etc/salt/minion configuration file, and we will use SaltStack itself to deploy this file, since we might as well tell SaltStack how to manage itself.

Creating the state file

Before adding the schedule we will need to create the state file to manage the minion config file.

Create a saltminion directory

We will first create a directory called saltminion in /srv/salt which is the default directory for salt states.

# mkdir -p /srv/salt/saltminion

Create the SLS

After creating the saltminion directory we can create the state file for managing the /etc/salt/minion configuration file. By naming the file init.sls we can reference this state as saltminion in the top.sls file.

# vi /srv/salt/saltminion/init.sls


salt-minion:
  service:
    - running
    - enable: True
    - watch:
      - file: /etc/salt/minion

/etc/salt/minion:
  file.managed:
    - source: salt://saltminion/minion
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        saltmaster:
        {% if "group1" in grains['group'] %}
        timer: 20
        {% else %}
        timer: 15
        {% endif %}

The above state file might look a bit daunting, but it is pretty simple. The first section ensures that the salt-minion service is running and enabled; it also watches the /etc/salt/minion config file, and if that file changes salt will restart the service. The second section is where things get a bit more complicated: it manages the /etc/salt/minion configuration file, and most of it is standard SaltStack configuration management. However, you may have noticed a part that looks a bit different.

{% if "group1" in grains['group'] %}
timer: 20
{% else %}
timer: 15
{% endif %}

The above is an example of using jinja inside a state file. You can use jinja templating in SaltStack to create complicated statements. This snippet checks whether the grain "group" is set to group1; if it is, it sets the timer context value to 20, otherwise it defaults to 15.

Create a template minion file

In the above salt state we told SaltStack that the salt://saltminion/minion file is a template, and that it is a jinja template. This tells SaltStack to read the minion file and parse it with the jinja templating language. The items under context are variables passed to jinja while processing the file.

At this point it would probably be a good idea to actually create the template file; to do this we will start with a copy from the master server.

# cp /etc/salt/minion /srv/salt/saltminion/

Once we copy the file into the saltminion directory we will need to add the appropriate jinja markup.

# vi /srv/salt/saltminion/minion

First we will add the saltmaster variable, which will be used to tell the minions which master to connect to. In our case this will be replaced with the saltmaster value passed in from the state's context.


#master: salt

Replace with:

master: {{ saltmaster }}

After adding the master configuration, we can add the scheduler configuration to the same file. We will add the following to the bottom of the minion configuration file.


schedule:
  highstate:
    function: state.highstate
    minutes: {{ timer }}

In the scheduler configuration, the timer variable will be replaced with either 15 or 20, depending on the group grain set on the minion. This tells the minion to run a highstate every 15 or 20 minutes, which should give approximately 5 minutes between the groups. The timing may need adjustment depending on the environment; when dealing with large numbers of servers you may need to build in a larger window between the groups' highstates.

Deploying the minion config

Now that we have created the minion template file, we will need to deploy it to all of the minions. Since they don't yet update automatically, we can run an ad-hoc highstate from the master. Because we are restarting the minion service, we may want to use --batch-size to stagger the updates.

# salt -b 10% '*' state.highstate

The above command will update all minions but only 10% of them at a time.

Using cron on the minions to schedule updates

An alternative to using SaltStack's scheduler is cron; the cron service was the default answer for scheduling highstates before the scheduler system was added to SaltStack. Since we are deploying a configuration to the minions to manage highstates, we can use salt to automate and manage this as well.

Creating the state file

As with the scheduler option, we will create a saltminion directory within the /srv/salt directory.

# mkdir -p /srv/salt/saltminion

Create the SLS file

There are a few ways to create crontabs in salt, but I personally like just putting a file in /etc/cron.d, as it makes managing the crontab as simple as managing any other file in salt. The below SLS file will deploy a templated file, /etc/cron.d/salt-highstate, to all of the minions.

# vi /srv/salt/saltminion/init.sls


/etc/cron.d/salt-highstate:
  file.managed:
    - source: salt://saltminion/salt-highstate
    - user: root
    - group: root
    - mode: 640
    - template: jinja
    - context:
        updategroup: {{ grains['group'] }}

Create the cron template

Again we are using template files and jinja to determine which crontab entry should be used; however, we are doing it a little differently this time. Rather than putting the logic in the state file, we put the logic in the source file salt://saltminion/salt-highstate and simply pass the grains['group'] value to the template file in the state configuration.

# vi /srv/salt/saltminion/salt-highstate


{% if "group1" in updategroup %}
*/20 * * * * root /usr/bin/salt-call state.highstate
{% else %}
*/15 * * * * root /usr/bin/salt-call state.highstate
{% endif %}

One advantage of cron over salt's scheduler is that you have a bit more control over when the highstate runs. The scheduler system runs over an interval, with the ability to define seconds, minutes, hours or days. Cron gives you that same ability, but also allows you to define complex schedules like "only run on a Sunday if it is the 15th day of the month". While that may be a bit overkill for most, some may find that the flexibility of cron makes it easier to avoid both groups updating at the same time.
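One cron subtlety worth noting for schedules like the one above: when both the day-of-month and day-of-week fields are restricted, cron treats them as OR rather than AND, so "the 15th, only if it is a Sunday" needs a date guard in the command itself. A hypothetical /etc/cron.d entry (the schedule is just an illustration):

```
# Run at 03:00 on the 15th, but only when the 15th falls on a Sunday.
# In cron files the % character must be escaped as \%.
0 3 15 * * root [ "$(date +\%u)" = "7" ] && /usr/bin/salt-call state.highstate
```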

Using cron on the master to schedule updates with batches

If you want to run your highstates more frequently and avoid conditions where everything gets updated at the same time, then rather than scheduling updates from the minions, you can schedule the updates from the salt master. By using cron on the master, we can run the same ad-hoc salt commands as above, but on a schedule. This solution is somewhat the best of both worlds: it gives you an easy way of automatically updating your hosts in different batches, and it allows you to roll the update out to those groups a little at a time.

To do this we can create a simple cron job; for consistency I am going to use /etc/cron.d, but this could be done via the crontab command as well.

# vi /etc/cron.d/salt-highstate


0 * * * * root /usr/bin/salt -b 10% -G group:group1 state.highstate
30 * * * * root /usr/bin/salt -b 10% -G group:group2 state.highstate

The above will run the salt command for group1 at the top of every hour and the salt command for group2 at the 30th minute of every hour. Both commands use a batch size of 10%, which tells salt to update only 10% of the hosts in that group at a time. While some hosts in group1 may still be updating when group2 gets started, overall this is fairly safe, as it ensures that the highstate is running on at most 20% of the infrastructure at a time.

One thing I advise is to make sure that you also segregate these highstates by server role. If you have a cluster of 10 webservers and only 2 database servers, and all of those servers are split among group1 and group2, then with the right timing both databases could be selected for a highstate at the same time. To avoid this, you could either make your "group" grains specific to the server roles or set up nodegroups that are specific to server roles.

An example of this would look like the following.

0 * * * * root /usr/bin/salt -b 10% -N webservers1 state.highstate
15 * * * * root /usr/bin/salt -b 10% -N webservers2 state.highstate
30 * * * * root /usr/bin/salt -b 10% -N alldbservers state.highstate

This article should give you a pretty good jump start on staggering highstates, or really any other salt function you want to perform. If you have implemented the same thing in another way I would love to hear about it; feel free to drop your examples in the comments.

Originally Posted on Go To Article
Categories: FLOSS Project Planets

Creating jigsaw of image using gimp

Sat, 2014-05-17 08:19
Using GIMP, any photo can easily be made to look like a jigsaw puzzle. Let us take the following image as an example.

Launch gimp and click on


Browse to the location where the image is stored and open it in gimp.

Now click on


The following window will appear.

The number of tiles option lets us choose the number of horizontal and vertical divisions for the image; the higher these numbers, the more jigsaw pieces will appear in the image.

Bevel width allows us to choose the degree of slope of each piece's edge.

Highlight lets us choose how strongly the pieces should stand out in the image.

The style of the jigsaw pieces can be set to either square or curved.

Click on OK after setting the required options.

The image should appear as below depending on what values were set in the options.

Categories: FLOSS Project Planets

Thank You India for making Narendra Modi as Prime Minister of India

Fri, 2014-05-16 22:05
Hello All, firstly I would like to congratulate Shri Narendra Modi on becoming the next Prime Minister of India. This election was special, tough and very surprising, not only for the political parties but also for the people of India. This election was especially tough for the BJP, as the BJP had been struggling to get back into power for the last 10 […]
Categories: FLOSS Project Planets