LinuxPlanet

By Linux Geeks, For Linux Geeks.

8 examples of Bash if statements to get you started

Mon, 2014-01-27 03:00

Shell scripting is a fundamental skill that every systems administrator should know. The ability to script mundane, repeatable tasks allows a sysadmin to perform them quickly. These scripts can be used for anything from installing and configuring software to quickly resolving a known issue.

A fundamental core of any programming language is the if statement. In this article I am going to show several examples of using if statements and explain how they work.

If value equals 1

The first example is one of the most basic: checking whether a value equals 1.

if [ $value -eq 1 ]
then
  echo "has value"
fi

Now this alone doesn't seem all that amazing, but it becomes useful when you combine it with other commands, like checking to see if a username exists in the passwd file.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -eq 1 ]
then
  echo "I found Benjamin"
fi

The -ic flags tell grep to search case-insensitively and to count the matching lines rather than print them. This is a simple and fast way of checking whether a string exists within a file and, if it does, performing some action.

Adding the else

The above if statement works great for checking if the user exists, but what happens when the user doesn't exist? Right now, nothing, but we can fix that.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -eq 1 ]
then
  echo "I found Benjamin"
else
  echo "I didn't find Benjamin"
fi

The else clause is part of an if statement; its actions are only performed when the if statement's comparison is not true. This is great for performing a check and executing one command if the test is true, and a different command if the test is false.

Checking if value is greater or less than

In the previous two examples you can see the use of the -eq equals operator; in this example I am going to show the -gt greater-than and -lt less-than operators.

First let us start with the greater than operator.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -gt 5 ]
then
  echo "I found a lot of Benjamins..."
fi

Second we will use the less than operator.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -lt 5 ]
then
  echo "I found only a few Benjamins..."
fi

Using else if

While it would be easy enough to simply add an else to either the less-than or greater-than example to handle conditions where I found more or fewer "Benjamins" than the if statement is looking for, I can also use the elif statement to perform an additional test if the first one wasn't true.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -eq 1 ]
then
  echo "I found one Benjamin"
elif [ $value -gt 1 ]
then
  echo "I found multiple Benjamins"
else
  echo "I didn't find any Benjamins"
fi

The order of this if statement is extremely important. You will notice that I first check if the value is exactly 1. If it is not, I then check if the value is greater than 1; if it is neither exactly 1 nor greater than 1, I simply tell you that I didn't find any Benjamins. This is the elif, or else if, statement at work.

Nested if statements

A nested if statement is where you have an if statement inside of an existing if statement.

value=$( grep -ic "benjamin" /etc/passwd )
if [ $value -ge 1 ]
then
  if [ $value -eq 1 ]
  then
    echo "I found Benjamin"
  elif [ $value -eq 2 ]
  then
    echo "I found two Benjamins"
  else
    echo "There are too many Benjamins"
  fi
fi

Let us break down how the above statements work together. First we execute the grep and send its output to the value variable. The outer if statement checks whether value is -ge greater than or equal to 1. If it is, we execute the inner if statement to see whether value is exactly 1 and, if not, whether it is exactly 2. If it is neither 1 nor 2 it must be larger, so at that point the script gives up checking and says "There are too many Benjamins".

Checking if a string value is set

The above examples show some good uses of the integer-based operators. If you are wondering what the heck an operator is, then let me explain. The -eq in the statement is an operator, or in simpler terms a comparison; it tells bash what operation to perform to decide true or false. An if statement's test is always either true or false: either the comparison holds and the test is true, or it does not and the test is false.

There are a ton of operators in bash, more than 8 examples can get you through, but for now, the examples in today's article should get you started.

In this example I am going to show how you can check if a variable has a string value.

if [ -n "$value" ]
then
  echo "variable value has a value of $value"
fi

The -n operator checks whether a variable has a string value; it is true if the variable is set to a non-empty string. This is a great way to test if a bash script was given arguments or not, as a bash script's arguments are placed into the variables $1, $2, $3 and so on automatically. Note the quotes around "$value" above: without them, an empty variable leaves the test as [ -n ], which is always true.

Usually, though, in a bash script you want to check if the argument is empty rather than if it is not empty; to do this you can use the -z operator.

if [ -z "$1" ]
then
  echo "sorry you didn't give me a value"
  exit 2
fi

If value is not true

The -z operator is the opposite of -n; you could get the same results by performing this if statement with the ! NOT operator. When a ! operator is added to an if statement's test, it inverts the test. So ! -lt turns into "not less than".

if [ ! -n "$1" ]
then
  echo "sorry you didn't give me a value"
elif [ ! -z "$1" ]
then
  echo "hey thanks for giving me a value"
fi

The "not" operator can be extremely useful, to be honest I didn't even know the -n operator existed until writing this article. Usually whenever I wanted to see if a string was not empty I would simply use ! -z to test it. Considering the ! operator can be used in any if statements test, this can be a huge time saver sometimes.

Using AND & OR in if statements

The final if statement example that I am going to show is using && AND and || OR in an if statement. Let's say you have a situation where you are accepting an argument from a bash script and you need to check not only that the value of this argument is set, but also that the value is valid.

That's where AND comes in.

if [[ -n $1 ]] && [[ -r $1 ]]
then
  echo "File exists and is readable"
fi

The above if statement will check if the $1 variable is not empty and, if it is not empty, whether the file named in $1 is readable. If both are true it will echo "File exists and is readable"; if either of these two tests is false, the echo will not be executed.

Using our bash script example, many times you are going to want to check if either $1 has no value or the file is unreadable in order to alert the user and exit the script. For this we will use the OR statement.

if [[ -z $1 ]] || [[ ! -r $1 ]]
then
  echo "Either you didn't give me a value or file is unreadable"
  exit 2
fi

In the above example we want to exit the script if either of these conditions is true, and that is exactly what OR provides for us. If either one of these conditions is true, the if statement will cause the script to exit.

You may notice in the above if statements I am using double brackets rather than single brackets. When using && or || it is generally a good idea to use double brackets, as they open up some additional functionality in bash. You will also find that in older implementations of bash a single bracket used with && can cause syntax issues. Though this seems to have been remediated in newer implementations, it is always a good idea to assume the worst case and write your scripts to handle older bash implementations; you never know where you might find yourself running a script on a system that hasn't been updated in a while.
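Since [[ ]] is a bash built-in test, both checks can also live inside a single pair of double brackets. A minimal sketch, equivalent to the AND example above:

if [[ -n $1 && -r $1 ]]
then
  echo "File exists and is readable"
fi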

The above should get you started on writing if statements in bash; I have barely touched on many of the cool ways you can perform tests with bash. I am sure some of you readers have some that you want to share; if you do, drop by the comments and share away.


Originally Posted on BenCane.com.
Categories: FLOSS Project Planets

GCC, LLVM, Copyleft, Companies, and Non-Profits

Sun, 2014-01-26 11:45

[ Please keep in mind in reading this post that while both FSF and Conservancy are mentioned, and that I have leadership roles at both organizations, these opinions on ebb.org, as always, are my own and don't necessarily reflect the view of FSF and/or Conservancy. ]

Most people know I'm a fan of RMS' writing about Free Software and I agree with most (but not all) of his beliefs about software freedom politics and strategy. I was delighted to read RMS' post about LLVM on the GCC mailing list on Friday. It's clear and concise, and, as usual, I agree with most (but not all) of it, and I encourage people to read it. Meanwhile, upon reading comments on LWN on this post, I felt the need to add a few points to the discussion.

Firstly, I'm troubled to see so many developers, including GCC developers, conflating various social troubles in the GCC community with the choice of license. I think it's impossible to deny that culturally, the GCC community faces challenges, like any community that has lasted for so long. Indeed, there's a long political history of GCC that even predates my earliest involvement with the Free Software community (even though I'm now considered an old-timer in Free Software in part because I played a small role — as a young, inexperienced FSF volunteer — in helping negotiate the EGCS fork back into the GCC mainline).

But none of these politics really relate to GCC's license. The copyleft was about ensuring that there were never proprietary improvements to the compiler, and AFAIK no GCC developers ever wanted that. In fact, GCC was ultimately the first major enforcement test of the GPL, and ironically that test sent us on the trajectory that led to the current situation.

Specifically, as I've spoken about in my many talks on GPL compliance, the earliest publicly discussed major GPL violation was by NeXT computing when Steve Jobs attempted and failed (thanks to RMS' GPL enforcement work) to make the Objective C front-end to GCC proprietary. Everything for everyone involved would have gone quite differently if that enforcement effort had failed.

As it stands, copyleft was upheld and worked. For years, until quite recently (in context of the history of computing, anyway), Apple itself used and relied on the Free Software GCC as its primary and preferred Objective C compiler, because of that enforcement against NeXT so long ago. But, that occurrence also likely solidified Jobs' irrational hatred of copyleft and software freedom, and Apple was on a mission to find an alternative compiler — but writing a compiler is difficult and takes time.

Meanwhile, I should point out that copyleft advocates sometimes conflate issues in analyzing the situation with LLVM. I believe most LLVM developers when they say that they don't like proprietary software and that they want to encourage software freedom. I really think they do. And, for all of us, copyleft isn't a religion, or even a belief — it's a strategy to maximize software freedom, and no one (AFAICT) has said it's the only viable strategy to do that. It's quite possible the strategy of LLVM developers of changing the APIs quickly to thwart proprietarization might work. I really doubt it, though, and here's why:

I'll concede that LLVM was started with the best of academic intentions to make better compiler technology and share it freely. (I've discussed this issue at some length with Chris Lattner directly, and I believe he actually is someone who wants more software freedom in the world, even if he disagrees with copyleft as a strategy.) IMO, though, the problem we face is exploitation by various anti-copyleft, software-freedom-unfriendly companies that seek to remove every copyleft component from any software stack. Their reasons for pursuing that goal may or may not be rational, but its collateral damage has already become clear: it's possible today to license proprietary improvements to LLVM that aren't released as Free Software. I predict this will become more common, notwithstanding any technical efforts of LLVM developers to thwart it. (Consider, by way of historical example, that proprietary combined works with Apache web server continue to this very day, despite Apache developers' decades of we'll break APIs, so don't keep your stuff proprietary claims.)

Copyleft is always a trade-off between software freedom and adoption. I don't admonish people for picking the adoption side over the software freedom side, but I do think as a community we should be honest with ourselves that copyleft remains the best strategy to prevent proprietary improvements and forks and no other strategy has been as successful in reaching that goal. And, those who don't pick copyleft have priorities other than software freedom ranked higher in their goals.

As a penultimate point, I'll reiterate something that Joe Buck pointed out on the LWN thread: a lot of effort was put in to creating a licensing solution that solved the copyleft concerns of GCC plugins. FSF's worry for more than a decade (reaching back into the late 1990s) was that a GCC plugin architecture would allow GCC's intermediate representation to be written to an output file, which would, in turn, allow a wholly separate program to optimize the software by reading and writing that file format, and thus circumvent the protections of copyleft. The GCC Runtime Library Exception (GCC RTL Exception) is (in my biased opinion) an innovative licensing solution that solves the problem — the ironic outcome: you are only permitted to perform proprietary optimization with GCC on GPL'd software, but not on proprietary software.

The problem was that the GCC RTL Exception came too late. While I led the GCC RTL Exception drafting process, I don't take the blame for delays. In fact, I fought for nearly a year to prioritize the work when FSF's outside law firm was focused on other priorities and ignored my calls for urgency. I finally convinced everyone, but the work got done far too late. (IMO, it should have been timed for release in parallel with GPLv3 in June 2007.)

Finally, I want to reiterate that copyleft is a strategy, not a moral principle. I respect the LLVM developers' decision to use a different strategy for software freedom, even if it isn't my preferred strategy. Indeed, I respect it so much that I supported Conservancy's offer of membership to LLVM in Software Freedom Conservancy. I still hope the LLVM developers will take Conservancy up on this offer. I think that regardless of a project's preferred strategy for software freedom — copyleft or non-copyleft — that it's important for the developers to have a not-for-profit charity as a gathering place for developers, separate from their for-profit employer affiliations.

Undue for-profit corporate influence is the biggest problem that software freedom faces today. Indeed, I don't know a single developer in our community who likes to see their work proprietarized. Developers, generally speaking, want to share their code with other developers. It's lawyers and business people with dollar signs in their eyes who want to make proprietary software. Those people sometimes convince developers to make trade-offs (which I don't agree with myself) to work on proprietary software (usually in exchange for funding some of their work time on upstream Free Software). Meanwhile, those for-profit-corporate folks frequently spread lies and half-truths about the copyleft side of the community — in an effort to convince developers that their Free Software projects “won't survive” if those developers don't follow the exact plan The Company proposes. I've experienced these manipulations myself — for example, in April 2013, a prominent corporate lawyer with an interest in LLVM told me to my face that his company would continue spreading false rumors that I'd use LLVM's membership in Conservancy to push the LLVM developers toward copyleft, despite my public statements to the contrary. (Again, for the record, I have no such intention and I'd be delighted to help LLVM be led in a non-profit home by its rightful developer leaders, whichever Open Source and Free Software license they chose.)

In short, the biggest threat to the future of software has always been for-profit companies who wish to maximize profits by exploiting the code, developers and users while limiting their software freedom. Such companies try every trick in pursuit of that goal. As such, I prefer copyleft as a strategy. However, I don't necessarily admonish those who pick a different strategy. The reason that I encourage membership of non-copylefted projects in Conservancy (and other 501(c)(3) charities) is to give those projects the benefits of a non-profit home that maximize software freedom using the project's chosen strategy, whatever it may be.

Categories: FLOSS Project Planets

QR code: GUI tool for encoding decoding QR codes in linux

Sat, 2014-01-25 22:49
In the post "QR code: Encode and Decode QR code on linux command line" we saw usage of a command based tool to create QR codes in linux. Let us have a look at a GUI based tool for the same.

QTQR is a GUI tool to create QR (Quick Response) codes. We will need to install the package qtqr:

$ sudo apt-get install qtqr

Once installed, we can launch the tool from the GUI; in Debian it is under Applications->Graphics->QTQR.



We can also launch it from a terminal using the command qtqr.

This will launch a window as shown below.



To create a QR code, we can choose what kind of string we want to encode using the pull-down menu at the top left, as shown below.



Once selected, we can enter the text in the text box at the left, then click "Save QR Code" at the bottom right to save the QR image.



The QR image for the text "Hello" would look as below.



To decode a QR code using QTQR, click "Decode" at the right corner. While selecting decode we can either choose a file or an image captured by a webcam. To test decoding, point it to the same QR image of "Hello" that we created above and click Open.



As we can see the tool has correctly decoded the image.

Useful links
http://code.google.com/p/qtqr/
http://en.wikipedia.org/wiki/QR_code
Categories: FLOSS Project Planets

Choosing Software Freedom Costs Money Sometimes

Fri, 2014-01-24 15:19

Apparently, the company that makes my hand lotion brand uses coupons.com for its coupons. The only way to print a coupon is to use a proprietary browser plugin called “couponprinter.exe” (which presumably implements some form of “coupon DRM”).

So, as of today, I actually have a price, in dollars, that it cost me to avoid proprietary software. Standing up for software freedom cost me $1.50 today. :) I suppose there are some people who would argue that in this situation they have to use proprietary software, but of course I'm not one of them.

The interesting thing is that this program has an OS X and a Windows version, but nothing for iOS or Android/Linux. Of course, if they had the latter, it'd surely be proprietary software anyway.

That said, coupons.com does have a "send a paper copy to a postal address" option, and I have ordered the coupon to be sent to me. But it expires 2014-03-31 and I'm out of hand lotion today; thus, whether or not I get to use the coupon before it expires is an open question.

I'm curious to try to order as many copies as possible of this coupon just to see if they implement ARM properly.

ARM is of course not a canonical acronym for what I mean here. I mean “Analog Restrictions Management”, as opposed to the DRM (“Digital Restrictions Management”) that I mentioned above. I doubt ARM will become a standard acronym for this, given that the ARM TLA is already quite overloaded.

Categories: FLOSS Project Planets

QR code: Encode and Decode QR code on linux command line

Fri, 2014-01-24 12:56
QR (Quick Response) code, as defined by Google, is

A machine-readable code consisting of an array of black and white squares, typically used for storing URLs or other information for reading by the camera on a smartphone.

Here is how we can create QR code in linux using command line.

Using command line:

To create QR codes using the command line we will need to install the package libqrencode3.

$ sudo apt-get install libqrencode3

After successfully installing the package we can encode any string of characters (text, URLs, numbers) into a QR code as shown below.

$ qrencode "string" -o "output file name"

Let us say we want to encode the string "Hello" into a QR code.

$ qrencode Hello -o hello.png

This will create a hello.png file containing the QR code, as shown below.

To be able to decode a QR code on the command line we can use the package zbar.

$ sudo apt-get install zbar-tools

Once the package is installed we can use the command zbarimg to read a QR-encoded file:

$ zbarimg "image file name"

To decode the hello.png QR we created above

$ zbarimg hello.png
QR-Code:Hello
scanned 1 barcode symbols from 1 images in 0.05 seconds

Thus we can see that the QR code has been correctly recognized as "Hello".
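If you only want the decoded text, say for use in a script, zbarimg can drop the symbology prefix and the statistics line. A small sketch (flag support may vary with your zbar version):

$ zbarimg --quiet --raw hello.png
Hello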

Useful links
http://en.wikipedia.org/wiki/QR_code
http://zbar.sourceforge.net/
Categories: FLOSS Project Planets

Please No 169.254/16

Thu, 2014-01-23 15:44
When you bring up a new Linux OS installation it will typically [at least in the case of CentOS] have a route of 169.254/16 on every interface.  These routes are used to support the good and virtuous feature known as "zeroconf".  Sometimes, however, you do not want that route noise - especially if the host is going to be operating as a router or firewall.  Fortunately, disabling this feature for this specific use-case is easy.

# netstat -rn
Kernel IP routing table
Destination     Gateway    Genmask      Flags  MSS Window irtt Iface
169.254.0.0     0.0.0.0    255.255.0.0  U        0 0         0 eth0
Text 1: The 169.254/16 route for zeroconf in the routing table.

Edit the /etc/sysconfig/network file and add a line reading "NOZEROCONF=yes".  Then either restart the networking stack or reboot the host.  No more zeroconf routes.
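If you manage many such hosts, the same change is easy to script. A minimal sketch, assuming the CentOS-style network file:

# echo "NOZEROCONF=yes" >> /etc/sysconfig/network
# service network restart
# netstat -rn | grep 169.254    # no output means the route is gone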
Categories: FLOSS Project Planets

Concerning Decorators

Mon, 2014-01-20 13:00
Categories: FLOSS Project Planets

Using OpenStack Swift as ownCloud Storage Backend

Mon, 2014-01-20 11:00
ownCloud helps us access our files from anywhere in the world, without taking control of our data away from us. Traditionally, a server's local hard disks have been used as the storage backend, but these days, as network latency decreases, storing data over the network is becoming cheaper and safer (in terms of recovery). ownCloud is capable of using SFTP, WebDAV, SMB, OpenStack Swift and several other storage mechanisms. We'll see how to use OpenStack Swift with ownCloud in this tutorial.

At this point, the assumption is that we already have admin access to an ownCloud instance and have set up OpenStack Swift somewhere. If not, follow this tutorial to set up OpenStack Swift.

Step 1: External storage facilities are provided by an app known as "External storage support", written by Robin Appelman and Michael Gapczynski, which ships with ownCloud and is available on the apps dashboard. It is disabled by default; we need to enable it.

Step 2: We need to go to Admin page of the ownCloud installation and locate "External Storage" configuration area. We'll select "OpenStack Swift" from the drop down menu.

Step 3: We need to fill in the details and credentials. We'll need the following information:
  • Folder Name: A user friendly name for the storage mount point.
  • user: Username of the Swift user (required)
  • bucket: Bucket can be any random string (required). It is a container where all the files will be kept.
  • region: Region (optional for OpenStack Object Storage).
  • key: API Key (required for Rackspace Cloud Files). This is not required for OpenStack Swift. Leave it empty.
  • tenant: Tenant name (required for OpenStack Object Storage). Tenant name would be the same tenant of which the Swift user is a part of. It is created using OpenStack Keystone.
  • password: Password of the Swift user (required for OpenStack Object Storage)
  • service_name: Service Name (required for OpenStack Object Storage). This is the same name which was used while creating the Swift service
  • url: URL of identity endpoint (required for OpenStack Object Storage). It is the Keystone endpoint against which authorization will be done.
  • timeout: Timeout of HTTP requests in seconds (optional)

Just to get a better hold on things, check out the image of an empty configuration form and here is a filled up one.

Notice that if ownCloud is successfully able to connect and authorize, a green circle appears on the left side of the configuration. In case things don't work out as expected, check owncloud.log in the data directory of the ownCloud instance.
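If the green circle never appears, it can help to verify the credentials with the swift command-line client before suspecting ownCloud. A hedged sketch, assuming the python swiftclient is installed and substituting your own Keystone endpoint, user and password:

$ swift -V 2.0 -A http://keystone.example.com:5000/v2.0 -U swiftuser -K password stat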

That is it. ownCloud is now ready to use OpenStack Swift to store data.
Categories: FLOSS Project Planets

The Quest For The Lost Pointer

Sun, 2014-01-19 18:43
On the screen you have a pointer - it points at things!  It is used to point at, select [highlight], drag, and do numerous other things.  The mouse pointer has been there and looked more-or-less the same for decades now;  my pointer in GNOME Shell looks and works almost identically to the pointer I had on my GEOS desktop (1986).  It has stayed the same because it works.

But the pointer has a few new challenges - (1) displays are getting bigger and bigger, big displays are aggregated together [you would have to have been very wealthy in 1986 to do that] and (2) the DPI/resolution of displays are now soaring to ridiculous and pointless values [not that the pointlessness suppresses the squeals of geeker joy from tech fanboys and fangirls].   These two trends raise the problem - "where the @*&# is the pointer?!".  You look away from your vast panorama of high-DPI displays for just a moment... and then you have to find it again.

GNOME Shell provides two features that assist in the quest for the missing pointer.

The first is "Show location of pointer".  This is most easily accessed via the GNOME Tweak Tool and is located in the "Keyboard and Mouse" section.  Simply toggle the feature on.  Once the feature is enabled tapping either Ctrl key [alone, by itself, not in combination with any other key] will cause the pointer to strobe;  the location will immediately be apparent. 

As with any setting you can also manage it using the GSettings API or using the gsettings command line tool.  The path to the relevant key for "Show location of pointer" is "org.gnome.settings-daemon.peripherals.mouse/locate-pointer".  To enable this feature from the command line:

$ gsettings set org.gnome.settings-daemon.peripherals.mouse locate-pointer true
$ gsettings get org.gnome.settings-daemon.peripherals.mouse locate-pointer
true

The second setting is not available for modification in GNOME Tweak Tool or the standard control center. This is probably due to the fact that you can render your desktop almost useless via senseless modification of this value. This second feature is the ability to adjust the size of the pointer - so for very high DPI displays you can increase the size of the pointer. You can also scale it down to where you cannot see the pointer at all [and if you do that, no, the software is not "broken"; the correct way to describe that condition is to say that the operator is "careless"]. The relevant setting is "org.gnome.desktop.interface/cursor-size".

$ gsettings get org.gnome.desktop.interface cursor-size
24
$ gsettings set org.gnome.desktop.interface cursor-size 36
$ gsettings get org.gnome.desktop.interface cursor-size
36
The appropriate value is an integer - so look at the current value and tweak it up or down until you find a comfortable pointer size. You will notice that the change to a gsetting is immediate; there is no need to log out or restart GNOME Shell.
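And if you do scale the pointer into invisibility, gsettings can restore the schema default:

$ gsettings reset org.gnome.desktop.interface cursor-size
$ gsettings get org.gnome.desktop.interface cursor-size
24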

If the command line is too intimidating for you - you can also adjust either of these configuration parameters, and a myriad of others, using the excellent dconf-editor GUI.  For most parameters dconf-editor even provides a bit of documentation, the range of appropriate values, and the general-purpose default.

Now you never need to hunt for the pointer; you can spend more time hunting the wumpus.

Categories: FLOSS Project Planets

Some Notes on Randomness

Fri, 2014-01-17 12:04

Judging the ‘quality’ of random data is indeed a complex statistical problem. The dieharder suite has been designed around this issue; it consists of a number of tests that establish criteria for an RNG to be good, and fail if they aren't met. The tests differ significantly from each other; some may be easier to pass than others.

A PRNG needs to be initialized with a ‘seed’, which may be taken from the machine's available sources of entropy (/dev/random, /dev/urandom on Linux). Since it works on deterministic algorithms, for the same seed the PRNG produces the same results on each run. So, if the seed is compromised, an adversary may be able to predict the sequence produced by the PRNG. Since /dev/random and /dev/urandom are char devices, concurrent reads from multiple applications will preserve the uniqueness of the data received by each application. This implies that if multiple PRNGs are initialized at the same time, each of them receives a unique seed.
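Bash's own $RANDOM offers a quick way to see this determinism in action: assigning to RANDOM seeds the shell's PRNG, and the same seed replays the same sequence.

$ RANDOM=42; echo $RANDOM $RANDOM $RANDOM    # prints some sequence of three numbers
$ RANDOM=42; echo $RANDOM $RANDOM $RANDOM    # prints exactly the same three numbers again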

The OpenSSL PRNG, on Unix-like OSes, seeds itself using data obtained by reading /dev/urandom, /dev/random and /dev/srandom (on OpenBSD), spending 10 ms on each (openssl/crypto/rand/rand_unix.c). For the PRNG to be cryptographically secure, its initial seed must not become known. In the case of OpenSSL, this implies that the external devices it reads from must be reliable sources of randomness.

For machines that lack /dev/random as an option, the Entropy Gathering Daemon can be used. It is a Perl script which runs in the background, calling the programs available on the machine and using their results to slowly fill its entropy pool (egd.pl:175-321). OpenSSL can be configured to use EGD as a source of randomness. A virtual machine running on QEMU can also be fed from EGD, but this is slow due to EGD's way of working and is being investigated. Also, if EGD is not running in `--bottomless` mode, it often blocks when being used with QEMU. So, it is not advisable to use it as of now.

Another source of randomness could be hardware random number generators. These claim to be cryptographically secure and unpredictable, and are often very fast, but they are only as reliable as their manufacturer. Certain Intel processors provide an instruction that is claimed to return reliable random numbers, but a recent revelation indicates that this instruction may have been rigged under influence from the NSA. Therefore, depending on an HWRNG as the only source of randomness would be a bad idea. A better option is to just add its data to the main entropy pool along with data from other sources.


Categories: FLOSS Project Planets

Droopy = Simple Home Web Server

Tue, 2014-01-14 19:56
So, just recently a friend of mine wanted me to check out how Droopy worked.  He had a busy day, and besides, checking out this utility takes two people, or someone who has two ways to access the internet. I'm just doing a little update on what I found, and answering some questions about how it's set up and used.  The page where I downloaded it from had enough information that even I could
Categories: FLOSS Project Planets

Vim mode Irssi – Foresight Linux

Mon, 2014-01-13 20:14
Vim mode Irssi

An Irssi script that provides vim-like keybindings for the input line.

The script allows you to toggle between INSERT, COMMAND and EX modes.

Another useful feature is the mode indicator, best used in conjunction with uberprompt.

From the README, the following common keybindings are supported:

  • Command mode: Esc <Ctrl-C>
  • Cursor motion: h l 0 ^ $ <Space> <BS> f t F T
  • History motion: j k gg G
  • Cursor word motion: w b ge e W gE B E
  • Word objects: aw aW
  • Yank and paste: y p P
  • Change and delete: c d
  • Delete at cursor: x X
  • Replace at cursor: r
  • Insert mode: i a I A
  • Switch case: ~
  • Repeat change: .
  • Repeat: ftFT: ; ,
  • Registers: "a-"z "" "0 "* "+ "_
  • Line-wise shortcuts: dd cc yy
  • Shortcuts: s S C D
  • Scroll the scrollback buffer: Ctrl-E Ctrl-D Ctrl-Y Ctrl-U Ctrl-F Ctrl-B
  • Switch to last active window: Ctrl-6/Ctrl-^
  • Switch split windows: <Ctrl-W j Ctrl-W k>
  • Undo/Redo: u Ctrl-R

Get the script at: https://github.com/shabble/irssi-scripts/tree/master/vim-mode
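To try it out, copy the script into irssi's script directory and load it. A sketch, assuming the file is named vim_mode.pl as in the repository (the autorun symlink is optional and loads the script at startup):

$ mkdir -p ~/.irssi/scripts/autorun
$ cp vim_mode.pl ~/.irssi/scripts/
$ ln -s ~/.irssi/scripts/vim_mode.pl ~/.irssi/scripts/autorun/

Then, inside irssi, /script load vim_mode loads it for the current session.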

The post Vim mode Irssi – Foresight Linux appeared first on Foresight Linux.

Categories: FLOSS Project Planets

gist.io: Writing for hackers

Sun, 2014-01-12 21:07

Just recently heard of gist.io – a pastebin service that converts markdown-formatted files from https://gist.github.com into HTML.

Useful for those times you want to quickly share info that’s off-topic for your blog, with an audience of non-manpage readers. Why markdown? It’s prettier than plain text and syntactically much simpler than HTML. GitHub users should like this.

Also: images

You can embed images in posts too, and they’ll respect the width of your browser.

The post gist.io: Writing for hackers appeared first on Foresight Linux.

Categories: FLOSS Project Planets

bash copy file – almost every file

Sun, 2014-01-12 16:30
bash copy file – almost every file

I found an interesting trick in bash today that may help a few other folks as well. Occasionally I find the need to copy almost every file in a directory, except for one or two. Usually I’d copy everything and then delete the stragglers I didn’t want from the destination directory. There had to be a better way, but as I said, I’m lazy. Turns out I found the better way today.

bash copy file

[tforsman@localhost ~]$ cp -r !(file_to_ignore)  /destination/

This little trick gets a bit better. Bash is slick enough to understand ‘or’ in this context, so I can also ignore multiple files if I need to:

[tforsman@localhost ~]$ cp -r !(file_to_ignore|this-one-too)  /destination/

Hopefully someone else finds this helpful as well.

If you get an error like:

bash: !: event not found

Then extglob is off. To turn it on:

[tforsman@localhost ~]$ shopt -s extglob

to enable extended pattern matching, and then you can do cp !(whatever) . and it will work. A good way to bash copy file.
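Putting it all together in one copy-paste friendly sketch (the file names here are made up):

$ shopt -s extglob                     # turn on extended pattern matching
$ cp -r !(*.log|tmp) /destination/     # copy everything except .log files and tmp
$ shopt -u extglob                     # optionally turn it back off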

ref: http://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html

The post bash copy file – almost every file appeared first on Foresight Linux.

Categories: FLOSS Project Planets

OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)

Sun, 2014-01-12 12:00
In this tutorial we'll set up OpenStack Swift, which is the object storage service. Swift can be used to store data with high redundancy. The nodes in Swift can be broadly classified in two categories:
  • Proxy Node: This is a public facing node. It handles all the http requests for various Swift operations like uploading, managing and modifying metadata. We can set up multiple proxy nodes and then load balance them using a standard load balancer.
  • Storage Node: This node actually stores the data. It is recommended to keep this node private, accessible only via the proxy node and not directly. Besides the storage service, this node also houses the container service and the account service, which manage the mappings of containers and accounts respectively.
For a small scale setup, both proxy and storage node can reside on the same machine but avoid doing so for a bigger setup.
Step 1: Let us install all the required packages for Swift:
# yum install openstack-swift openstack-swift-proxy openstack-swift-account openstack-swift-container openstack-swift-object memcached

Step 2: Attach a disk which will be used for storage, or chop off some disk space from an existing disk.
Using additional disks:
Most likely this is done when there is a large amount of data to be stored. XFS is the recommended filesystem and is known to work well with Swift. If the additional disk is attached as /dev/sdb then the following will do the trick:
# fdisk /dev/sdb
# mkfs.xfs /dev/sdb1
# echo "/dev/sdb1 /srv/node/partition1 xfs noatime,nodiratime,nobarrier,logbufs=8 0 0" >> /etc/fstab
# mkdir -p /srv/node/partition1
# mount /srv/node/partition1

Chopping off disk space from the existing disk:
We can chop off disk space from an existing disk as well. This is usually done for smaller installations or at the "proof-of-concept" stage. We can use XFS as before, or we can use ext4 as well.
# truncate --size=2G /tmp/swiftstorage
# DEVICE=$(losetup --show -f /tmp/swiftstorage)
# mkfs.ext4 $DEVICE
# mkdir -p /srv/node/partition1
# mount $DEVICE /srv/node/partition1 -t ext4 -o noatime,nodiratime,nobarrier,user_xattr

Step 3 (optional): Set up rsync to replicate the objects. If replication or redundancy is not required, this step can be skipped. Create /etc/rsyncd.conf with the following content:
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = <storage_local_net_ip>

[account]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = false
lock file = /var/lock/object.lock

Note that there can be multiple account, container and object sections if we wish to use multiple disks or partitions.
Enable rsync in the defaults file and start the service:
# vim /etc/default/rsync
RSYNC_ENABLE = true
# service rsync start

Step 4: Set up the proxy node. The default config shipped with Fedora 20 is good apart from minor changes. Open /etc/swift/proxy-server.conf and edit the [filter:authtoken] section as below:
[filter:authtoken]
paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
admin_tenant_name = admin
admin_user = admin
admin_password = ADMIN_PASS
auth_host = 127.0.0.1
auth_port = 35357
auth_protocol = http
signing_dir = /tmp/keystone-signing-swift

Keep in mind that the admin token, admin_tenant_name and admin_user should be the same ones used while setting up Keystone. If you have not installed and set up Keystone already, then check out this tutorial before you proceed.
Step 5: Now we will create the rings. Rings are mappings between the storage node components and the actual physical drives. Note that the create commands below have 3 numeric parameters at the end. The first parameter signifies the number of swift partitions (not the same as disk partitions). A higher number of partitions ensures even distribution but also puts higher strain on the server, so we have to find a good trade-off. The rule of thumb is to create about 100 swift partitions per drive; for that, the first parameter would be 7, since 2^7 = 128 is closest to 100. The second parameter defines the number of copies to create for the sake of replication; for a small instance with no rsync, set it to one, but three is recommended. The last number is the time in hours before a specific partition can be moved in succession; set it to a low number for testing, but 24 is recommended for production instances.
# cd /etc/swift
# swift-ring-builder account.builder create 7 1 1
# swift-ring-builder container.builder create 7 1 1
# swift-ring-builder object.builder create 7 1 1
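To see where the first parameter comes from, here is the arithmetic as a comment-only sketch (the drive counts are hypothetical):

# Rule of thumb: about 100 swift partitions per drive, so
#   part_power = log2(100 * number_of_drives), rounded up
# 1 drive:   100 * 1  = 100  -> 2^7  = 128  -> part power 7
# 10 drives: 100 * 10 = 1000 -> 2^10 = 1024 -> part power 10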

Add the device created above to the ring:
# swift-ring-builder account.builder add z1-127.0.0.1:6002/partition1 100
# swift-ring-builder container.builder add z1-127.0.0.1:6001/partition1 100
# swift-ring-builder object.builder add z1-127.0.0.1:6000/partition1 100
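For reference, the add arguments follow the pattern z<zone>-<ip>:<port>/<device> <weight>. A sketch of adding a second device on another storage node (the zone and IP here are made up):

# swift-ring-builder object.builder add z2-192.168.1.20:6000/partition1 100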

Rebalance the ring. This will ensure even distribution and minimal partition moves.
# swift-ring-builder account.builder rebalance
# swift-ring-builder container.builder rebalance
# swift-ring-builder object.builder rebalance

Set the owner and the group for the partitions
# chown -R swift:swift /etc/swift /srv/node/partition1

Step 6: Create the service and end point using Keystone.
# keystone service-create --name=swift --type=object-store --description="Object Store Service"
+-------------+----------------------------------+
|   Property  |              Value               |
+-------------+----------------------------------+
| description |       Object Store Service       |
|      id     | b230a3ecd12e4a52954cb24502be9d07 |
|     name    |              swift               |
|     type    |           object-store           |
+-------------+----------------------------------+

Copy the id from the output of the command above and use it to create the endpoint.

# keystone endpoint-create --region RegionOne --service_id b230a3ecd12e4a52954cb24502be9d07 --publicurl "http://127.0.0.1:8080/v1/AUTH_\$(tenant_id)s" --adminurl http://127.0.0.1:8080/v1 --internalurl http://127.0.0.1:8080/v1

Step 7: Start the services and test it:

# service memcached start
# for srv in account container object proxy ; do sudo service openstack-swift-$srv start ; done
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS stat
   Account: AUTH_939ba777082a4f988d5b70dc886459e3
Containers: 0
   Objects: 0
     Bytes: 0
Content-Type: text/plain; charset=utf-8
X-Timestamp: 1389435011.63658
X-Put-Timestamp: 1389435011.63658

Upload a file abc.txt to a Swift container myfiles like this:
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS upload myfiles abc.txt
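Once uploaded, the same client can list the container and pull the file back down. A quick sketch using the credentials above:

# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS list myfiles
abc.txt
# swift -V 2.0 -A http://127.0.0.1:5000/v2.0 -U admin -K ADMIN_PASS download myfiles abc.txt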


OpenStack Swift is now ready to use.
Categories: FLOSS Project Planets

OpenStack 101: How to Setup OpenStack Keystone (OpenStack Identity Service)

Sat, 2014-01-11 13:22
OpenStack Keystone is an identity or authorization service. Before we can do anything on other OpenStack components, we have to authorize ourselves, and only then can the operation proceed. Let us get acquainted with some terminology before we proceed.
  • Token: An alphanumeric string which allows access to a certain set of services depending upon the access level (role) of the user.
  • Service: An OpenStack service like Nova, Swift and Keystone itself.
  • Tenant: A group of users. 
  • Endpoint: A URL (may be private) used to access the service.
  • Role: The authorization level of a user.
Let us go ahead and build the Keystone service for our use.
Step 1: Fedora 20 has OpenStack Havana in its repositories, so installing it is not a pain at all. Additionally, we need MySQL (replaced by MariaDB in Fedora 20) where Keystone will save its data.

# yum install openstack-utils openstack-keystone mysql-server
Step 2: Once the packages above are installed, we need to set a few things in the keystone config. Find the lines and edit them to look like these:

# vim /etc/keystone/keystone.conf
[DEFAULT]
admin_token = ADMIN_TOKEN
...
[sql]
connection = mysql://keystone:KEYSTONE_DBPASS@controller/keystone
Note that ADMIN_TOKEN and KEYSTONE_DBPASS should be long and difficult to guess. Remember that ADMIN_TOKEN is the almighty token which has full access to create and destroy users and services. Also, several tutorials and the official docs use the command openstack-config --set /etc/keystone/keystone.conf to make the changes that we just did manually. I do not recommend using that command; it created duplicate sections and entries for me, which can be confusing down the line.
Step 3: Set up MySQL/MariaDB (only required on the first run of MySQL) to set the root password.

# mysql_secure_installation
Now we need to create the required database and tables for Keystone to work. The command below will do that for us. It will ask for the root password in order to create the keystone user and the database.
# openstack-db --service keystone --init --password KEYSTONE_DBPASS
Step 4: Create the signing keys and certificates for the tokens.

# keystone-manage pki_setup --keystone-user keystone --keystone-group keystone
Step 5: Set the file owners, just in case something got messed up, and start the service.
# chown -R keystone:keystone /etc/keystone/* /var/log/keystone/keystone.log
# service openstack-keystone start
# chkconfig openstack-keystone on

Step 6: Set up the required environment variables. This will save the effort of supplying all the information every time a Keystone command is executed. Note that by default the Keystone admin port is 35357. This can be changed in /etc/keystone/keystone.conf.

# cat > ~/.keystonerc <<EOF
> export OS_SERVICE_TOKEN=ADMIN_TOKEN
> export OS_SERVICE_ENDPOINT=http://127.0.0.1:35357/v2.0
> export OS_USERNAME=admin
> export OS_PASSWORD=ADMIN_PASS
> export OS_TENANT_NAME=admin
> export OS_AUTH_URL=http://127.0.0.1:35357/v2.0
> EOF
# . ~/.keystonerc

Step 7: Create the tenants, users and the Keystone service with endpoint.
Creating the tenant:
# keystone tenant-create --name=admin --description="Admin Tenant"
Creating the admin user:

# keystone user-create --name=admin --pass=ADMIN_PASS --email=admin@example.com

Creating the admin role and adding the admin user to it:

# keystone role-create --name=admin
# keystone user-role-add --user=admin --tenant=admin --role=admin
Creating Keystone service and endpoint:
# keystone service-create --name=keystone --type=identity --description="Keystone Identity Service"
+-------------+--------------------------------------+
| Property    | Value                                |
+-------------+--------------------------------------+
| description | Keystone Identity Service            |
| id          | c3dbb8aa4b27492f9c4a663cce0961a3     |
| name        | keystone                             |
| type        | identity                             |
+-------------+--------------------------------------+

Copy the id from the command above and use it in the command below:

# keystone endpoint-create --service-id=c3dbb8aa4b27492f9c4a663cce0961a3 --publicurl=http://127.0.0.1:5000/v2.0 --internalurl=http://127.0.0.1:5000/v2.0 --adminurl=http://127.0.0.1:35357/v2.0

Step 8: Test the keystone service.

# unset OS_SERVICE_TOKEN OS_SERVICE_ENDPOINT
# keystone --os-username=admin --os-password=ADMIN_PASS --os-auth-url=http://127.0.0.1:35357/v2.0 token-get
A token with an id, validity and other information will be returned.
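Equivalently, you can hit the Keystone API directly with curl; a minimal sketch of the v2.0 token call (substitute whatever admin password you chose):

# curl -s http://127.0.0.1:5000/v2.0/tokens -H "Content-Type: application/json" -d '{"auth": {"tenantName": "admin", "passwordCredentials": {"username": "admin", "password": "ADMIN_PASS"}}}'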

Keystone is up and running. We'll create some services in the next tutorial.
Categories: FLOSS Project Planets

OpenStack 101: What is OpenStack?

Sat, 2014-01-11 13:15
OpenStack, in simple words, is an open source project which enables us to build our own cloud computing setup. In other words, it creates an Infrastructure as a Service (IaaS) on our own infrastructure. We can have an Amazon AWS-like service up and running quite quickly and painlessly wherever we want. A lot of effort has been taken to ensure that code written for Amazon AWS can be ported to any OpenStack installation easily.

Below is a small comparison (not exhaustive) between major OpenStack services and Amazon AWS to give you an idea about the compatibility.

OpenStack Service    Amazon AWS Service
Nova                 EC2
Cinder               EBS
Swift                S3
Keystone             IAM
Glance               AMI
Horizon              AWS Web Console
Neutron              EC2 network components
OpenStack 101 is a tutorial series to simplify using OpenStack and integrating OpenStack with simple applications. It'll help you create OpenStack installations for the "proof-of-concept" stage or for hosting a small IaaS service. For the most part I have tried to keep the tutorials as close to the official documentation as possible. Let me also state this loud and clear: OpenStack's documentation is really great. If you can, please go through it. If you are done with "proof-of-concept" and are going to run production-ready machines, then go through the official documentation. These tutorials will help you get started but are not a replacement for the docs.

I am going to use OpenStack Havana and will run it on Fedora 20 (the latest at the time of writing, January 2014). All the commands and code are well tested before being put up here, but if you see any errors, please point them out to me.

Contents:
OpenStack 101: What is OpenStack?
OpenStack 101: How to Setup OpenStack Keystone (OpenStack Identity Service)
OpenStack 101: How to Setup OpenStack Swift (OpenStack Object Storage Service)
Categories: FLOSS Project Planets