We are still at an early stage of getting Foresight 3 up and going, but it is already possible to get something running today.
Here is a fresh pic with F20, cinnamon and conary.
We will get more information out soon, since some brave users may want to try it. We also need to make it easier for users to get something running without too much hassle.
Lately I've been working on a lot of automation and monitoring projects. A big part of these projects is taking existing scripts and modifying them to be useful for automation and monitoring tools. One thing I have noticed is that sometimes scripts use exit codes and sometimes they don't. It seems like exit codes are easy for people to forget, but they are an incredibly important part of any script, especially if that script is used on the command line.

What are exit codes?
On Unix and Linux systems, programs can pass a value to their parent process while terminating. This value is referred to as an exit code or exit status. On POSIX systems the standard convention is for the program to pass 0 for successful executions and a non-zero value (1 or higher) for failed executions.
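This convention is easy to observe from an interactive shell; here is a minimal sketch using the bash built-ins true and false (which simply exit 0 and 1) and the special variable $? that expands to the last command's exit code:

```shell
#!/bin/bash
# 'true' always succeeds (exit code 0); 'false' always fails (exit code 1);
# $? expands to the exit code of the most recently executed command
true
echo "true exited with: $?"    # true exited with: 0
false
echo "false exited with: $?"   # false exited with: 1
```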
Why is this important? If you look at exit codes in the context of scripts written for the command line, the answer is very simple. Any script that is useful in some fashion will inevitably be either used in another script, or wrapped with a bash one liner. This becomes especially true if the script is used with automation tools like SaltStack or monitoring tools like Nagios: these programs execute scripts and check the status code to determine whether the script was successful or not.
On top of those reasons, exit codes exist within your scripts even if you don't define them. By not defining proper exit codes you could be falsely reporting successful executions, which can cause issues depending on what the script does.

What happens if I don't specify an exit code?
In Linux any script run from the command line has an exit code. With Bash scripts, if the exit code is not specified in the script itself the exit code used will be the exit code of the last command run. To help explain exit codes a little better we are going to use a quick sample script.
Sample Script:

#!/bin/bash
touch /root/test
echo created file
The above sample script will execute both the touch command and the echo command. When we execute this script (as a non-root user) the touch command will fail. Ideally, since the touch command failed, we would want the exit code of the script to indicate failure with an appropriate exit code. To check the exit code we can simply print the $? special variable in bash. This variable will print the exit code of the last run command.
Execution:

$ ./tmp.sh
touch: cannot touch ‘/root/test’: Permission denied
created file
$ echo $?
0
As you can see, after running the ./tmp.sh command the exit code was 0, which indicates success, even though the touch command failed. The sample script runs two commands, touch and echo; since we did not specify an exit code the script exits with the exit code of the last run command. In this case, the last run command is the echo command, which did execute successfully.
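This masking effect can be reproduced with a two-line sketch; here the failing cd is hidden because the echo after it succeeds (the directory name is just an example of a path that should not exist):

```shell
#!/bin/bash
# the cd fails, but the script's exit code comes from the echo that follows
cd /nonexistent-dir 2>/dev/null
echo "carrying on regardless"
```

Running this script and then printing $? gives 0, even though cd failed; remove the echo and $? becomes 1.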
Script:

#!/bin/bash
touch /root/test
If we remove the echo command from the script we should see the exit code of the touch command.
Execution:

$ ./tmp.sh
touch: cannot touch ‘/root/test’: Permission denied
$ echo $?
1
As you can see, since the last command run was touch the exit code reflects the true status of the script: failed.

Using exit codes in your bash scripts
While removing the echo command from our sample script worked to provide an exit code, what happens when we want to perform one action if the touch was successful and another if it was not? Actions such as printing to stdout on success and to stderr on failure.

Testing for exit codes
Earlier we used the $? special variable to print the exit code of the script. We can also use this variable within our script to test if the touch command was successful or not.
Script:

#!/bin/bash
touch /root/test 2> /dev/null
if [ $? -eq 0 ]
then
  echo "Successfully created file"
else
  echo "Could not create file" >&2
fi
In the above revision of our sample script, if the exit code for touch is 0 the script will echo a success message. If the exit code is anything other than 0, this indicates failure and the script will echo a failure message to stderr.
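As a side note, the same test can be written without $? at all, since if evaluates a command's exit status directly; a sketch of the equivalent form:

```shell
#!/bin/bash
# 'if' branches on the command's own exit status, no $? check needed
if touch /root/test 2> /dev/null
then
  echo "Successfully created file"
else
  echo "Could not create file" >&2
fi
```

This form also avoids the risk of some other command running between touch and the $? check and overwriting the status.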
Execution:

$ ./tmp.sh
Could not create file

Providing your own exit code
While the above revision will provide an error message if the touch command fails, it still provides a 0 exit code indicating success.

$ ./tmp.sh
Could not create file
$ echo $?
0
Since the script failed, it would not be a good idea to pass a successful exit code to any other program executing this script. To add our own exit code to this script, we can simply use the exit command.
Script:

#!/bin/bash
touch /root/test 2> /dev/null
if [ $? -eq 0 ]
then
  echo "Successfully created file"
  exit 0
else
  echo "Could not create file" >&2
  exit 1
fi
With the exit command in this script, we will exit with a success message and a 0 exit code if the touch command is successful. If the touch command fails, however, we will print a failure message to stderr and exit with a value of 1, which indicates failure.
Execution:

$ ./tmp.sh
Could not create file
$ echo $?
1

Using exit codes on the command line
Now that our script is able to tell both users and programs whether it finished successfully or unsuccessfully we can use this script with other administration tools or simply use it with bash one liners.
Bash One Liner:

$ ./tmp.sh && echo "bam" || (sudo ./tmp.sh && echo "bam" || echo "fail")
Could not create file
Successfully created file
bam
The above grouping of commands uses what are called list constructs in bash. List constructs allow you to chain commands together with simple && (and) and || (or) conditions. The above command will execute the ./tmp.sh script, and if the exit code is 0 the command echo "bam" will be executed. If the exit code of ./tmp.sh is 1, however, the commands within the parentheses will be executed next. Within the parentheses the commands are chained together using the && and || constructs again.
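The behaviour of these list constructs can be verified without any script at all, using the true and false built-ins; a minimal sketch:

```shell
#!/bin/bash
# && runs the right side only if the left side exited 0
true  && echo "runs: left side succeeded"
# || runs the right side only if the left side exited non-zero
false || echo "runs: left side failed"
# combined: the fallback fires because false short-circuits the &&
false && echo "skipped" || echo "fallback taken"
```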
List constructs use exit codes to understand whether a command has executed successfully or not. If scripts do not properly use exit codes, any user of those scripts who uses more advanced commands such as list constructs will get unexpected results on failures.

More exit codes
The exit command in bash accepts integers from 0 to 255. In most cases 0 and 1 will suffice, however there are other reserved exit codes that can be used for more specific errors. The Linux Documentation Project has a pretty good table of reserved exit codes and what they are used for.
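Two of those reserved codes are easy to demonstrate: bash returns 127 when a command is not found, and 126 when a file is found but not executable. A sketch (the /tmp filename is just an example):

```shell
#!/bin/bash
# 127: bash could not find the command at all
bash -c 'no_such_command_xyz' 2>/dev/null
echo "not found exits with: $?"

# 126: the file exists but has no execute permission
printf '#!/bin/bash\necho hi\n' > /tmp/noexec-demo.sh
chmod 644 /tmp/noexec-demo.sh
bash -c '/tmp/noexec-demo.sh' 2>/dev/null
echo "not executable exits with: $?"
```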
Originally Posted on BenCane.com: Go To Article
Do you want to make a computer function as a WLAN base station, so that other computers can use it as their wifi access point? This can easily be done using the open source software Hostapd and compatible wifi hardware.
This is a useful thing to do if a computer is acting as a firewall or as a server in the local network, and you want to avoid adding new appliances that all require their own space and cables in your already crowded server closet. Hostapd enables you to have full control of your WLAN access point and also enhances security. By using Hostapd the system will be completely in your control: every line of code can be audited, the source of all software can be verified, and all software can be updated easily. It is quite common that active network devices like wifi access points start out as fairly secure small appliances with Linux inside, but over time their vendors don't provide timely security updates and local administrators don't care to install them via some clumsy firmware upgrade mechanism. With a proper Linux server, admins can easily SSH into it and run upgrades using the familiar and trusted upgrade channels that Linux server distributions provide.
The first step in creating a wireless base station with Hostapd is to make sure the WLAN hardware supports running in access point mode. Examples are listed in the hostapd documentation. A good place to shop for WLAN cards with excellent Linux drivers is thinkpenguin.com, whose product descriptions nicely list each WLAN card's supported operation modes.
The next step is to install the software called Hostapd, by Jouni Malinen and others. This is very widely used software and is most likely available in your Linux distribution by default. Many of the WLAN router appliances available are actually small Linux computers running hostapd inside, so running hostapd on a proper Linux computer will give you at least all the features available in the wifi routers, including advanced authentication and logging.
Our example commands are for Ubuntu 14.04. You need to have access to install hostapd and dnsmasq. Dnsmasq is a small DNS/DHCP server which we'll use in this setup. To start, simply run:

sudo apt-get install hostapd dnsmasq
After that you need to create and edit the configuration file:

zcat /usr/share/doc/hostapd/examples/hostapd.conf.gz | sudo tee -a /etc/hostapd/hostapd.conf
The configuration file /etc/hostapd/hostapd.conf is filled with configuration examples and documentation in comments. The relevant parts for a simple WPA2 protected 802.11g network with the SSID ‘Example-WLAN‘ and password ‘PASS‘ are:

interface=wlan0
ssid=Example-WLAN
hw_mode=g
wpa=2
wpa_passphrase=PASS
wpa_key_mgmt=WPA-PSK WPA-EAP WPA-PSK-SHA256 WPA-EAP-SHA256
Next you need to edit the network interfaces configuration to force the WLAN card to only run in access point mode. Assuming that the access point network will use the address space 192.168.8.* the file /etc/network/interfaces should look something like this:

# interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback

auto wlan0
iface wlan0 inet static
  hostapd /etc/hostapd/hostapd.conf
  address 192.168.8.1
  netmask 255.255.255.0
Then we need to have a DNS relay and DHCP server on our wlan0 interface so the clients actually get a working Internet connection; this can be accomplished by configuring dnsmasq. Like hostapd it also has a very verbose configuration file, /etc/dnsmasq.conf, but the relevant parts look like this:

interface=lo,wlan0
no-dhcp-interface=lo
dhcp-range=192.168.8.20,192.168.8.254,255.255.255.0,12h
Next we need to make sure that the Linux kernel forwards traffic from our wireless network onto other destination networks. For that you need to edit the file /etc/sysctl.conf and make sure it has a line like this:

net.ipv4.ip_forward=1
We need to activate NAT in the built-in firewall of Linux to make sure the traffic going out uses the external address as its source address and thus can be routed back. This can be done, for example, by appending the following line to the file /etc/rc.local:

iptables -t nat -A POSTROUTING -s 192.168.8.0/24 ! -d 192.168.8.0/24 -j MASQUERADE
Some WLAN card hardware might have a virtual on/off switch. If you have such hardware you might need to also run rfkill to enable the hardware using a command like rfkill unblock 0.
If the same computer also runs Network Manager (as for example Ubuntu does by default) you need to edit its settings so that it won't interfere with the new wifi access point. Make sure the file /etc/NetworkManager/NetworkManager.conf looks like this:

[main]
plugins=ifupdown,keyfile,ofono
dns=dnsmasq

[ifupdown]
managed=false
Now all configuration should be done. To be sure all changes take effect, finish by rebooting the computer.
If everything is working, a new WLAN network should be detected by other devices.
On the WLAN-server you’ll see similar output from these commands:
Paul W. Frields has written a summary of the talk on the Fedora Magazine.
The option to be used for creating a menu is --menu, the syntax being

whiptail --menu <text> <height> <width> <menu-height> [value description]...
The value and description pairs form the main part of the menu list. The value is what gets assigned to the menu item, and what follows is its description.
whiptail --menu "Options" 10 30 5 A Menu1 B Menu2
In the above example, A is a value and Menu1 its description; B is another value and Menu2 its description. We can add any number of such value/description pairs.
We can move between these menu entries using the arrow keys; whichever entry is selected when we press enter has its value sent to standard error.
In the above example, if we press enter after selecting "B Menu2", the value B is sent to standard error.
To be able to store this value for further use in a script we need to capture the standard error, which can be done by swapping the file descriptors of standard output and standard error.
option=$(whiptail --menu "Options" 10 30 5 A Menu1 B Menu2 3>&1 1>&2 2>&3)
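To see why the swap works: inside $( ), stdout is the pipe being captured; 3>&1 saves that pipe as descriptor 3, 1>&2 points stdout at the terminal so whiptail can draw its interface, and 2>&3 points stderr at the saved pipe. The same trick captures the output of any command that answers on standard error, as this whiptail-free sketch shows:

```shell
#!/bin/bash
# stand-in for whiptail: draws its "UI" on stdout, writes the answer to stderr
answer=$( { echo "pretend UI output" ; echo "B" >&2 ; } 3>&1 1>&2 2>&3 )
echo "captured: $answer"    # prints: captured: B
```

The "pretend UI output" line goes to the terminal (via the original stderr), while "B" travels through the saved pipe into the variable.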
Whichever option is chosen, its value gets stored in the variable option.
Here is an example script which makes use of the menu list to get data from the user.
#!/bin/bash
echo "This is an example creating menu using whiptail"
sleep 2
echo "Choose the value for item from the options"
name=$(whiptail --menu "Options" 10 30 5 "1" "Assign 1 to item" "2" "Assign 2 to item" 3>&1 1>&2 2>&3)
echo "Hello $name"
whiptail --ok-button Done --msgbox "Value assigned to item is $name" 10 34
Give the script execute permission and run it to see the output.

$ chmod 777 whip_menu.sh
$ ./whip_menu.sh
Yesterday while re-purposing a server I was removing packages with apt-get and stumbled upon an interesting problem. After I removed the package and all of its configurations, the subsequent installation did not re-deploy the configuration files.
After a bit of digging I found out that there are two methods for removing packages with apt-get. One of those methods should be used if you want to remove binaries, and the other should be used if you want to remove both binaries and configuration files.

What I did
Since the method I originally used caused at least 10 minutes of head scratching, I thought it would be useful to share what I did and how to resolve it.
On my system the package I wanted to remove was supervisor which is pretty awesome btw. To remove the package I simply removed it with apt-get remove just like I've done many times before.

# apt-get remove supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  supervisor
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
After this operation, 1,521 kB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 14158 files and directories currently installed.)
Removing supervisor ...
Stopping supervisor: supervisord.
Processing triggers for ureadahead ...
No issues so far; the package was removed according to apt without any issues. However, after looking around a bit I noticed that the /etc/supervisor directory still existed, as did the supervisord.conf file.

# ls -la /etc/supervisor
total 12
drwxr-xr-x  2 root root 4096 Aug 17 19:44 .
drwxr-xr-x 68 root root 4096 Aug 17 19:43 ..
-rw-r--r--  1 root root 1178 Jul 30  2013 supervisord.conf
Considering I was planning on re-installing supervisor, and I didn't want to cause any weird configuration issues as I moved from one server role to another, I did what any other reasonable sysadmin would do. I removed the directory...

# rm -Rf /etc/supervisor
I knew the supervisor package was removed, and I assumed that the package didn't remove the config files to avoid losing custom configurations. In my case I wanted to start over from scratch, so deleting the directory sounded like a reasonable thing.

# apt-get install supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  supervisor
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/314 kB of archives.
After this operation, 1,521 kB of additional disk space will be used.
Selecting previously unselected package supervisor.
(Reading database ... 13838 files and directories currently installed.)
Unpacking supervisor (from .../supervisor_3.0b2-1_all.deb) ...
Processing triggers for ureadahead ...
Setting up supervisor (3.0b2-1) ...
Starting supervisor: Error: could not find config file /etc/supervisor/supervisord.conf
For help, use /usr/bin/supervisord -h
invoke-rc.d: initscript supervisor, action "start" failed.
dpkg: error processing supervisor (--configure):
 subprocess installed post-installation script returned error exit status 2
Errors were encountered while processing:
 supervisor
E: Sub-process /usr/bin/dpkg returned an error code (1)
However, it seems supervisor could not start after re-installing.

# ls -la /etc/supervisor
ls: cannot access /etc/supervisor: No such file or directory
There is a good reason why supervisor wouldn't start: the /etc/supervisor/supervisord.conf file was missing. Shouldn't the package installation deploy the supervisord.conf file? Well, technically no. Not with the way I removed the supervisor package.

Why it didn't work

How remove works
If we look at apt-get's man page a little closer we can see why the configuration files are still there.

remove
    remove is identical to install except that packages are removed
    instead of installed. Note that removing a package leaves its
    configuration files on the system.
As the manpage clearly says, remove will remove the package but leave configuration files in place. This explains why the /etc/supervisor directory was lingering after removing the package, but it doesn't explain why a subsequent installation doesn't re-deploy the configuration files.

Package States
If we use dpkg to look at the supervisor package, we will start to see the issue.

# dpkg --list supervisor
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name           Version    Architecture  Description
+++-==============-==========-=============-========================================
rc  supervisor     3.0b2-1    all           A system for controlling process state
With the dpkg package manager a package can have more states than just being installed or not-installed. In fact there are several package states with dpkg.
- not-installed - The package is not installed on this system
- config-files - Only the configuration files are deployed to this system
- half-installed - The installation of the package has been started, but not completed
- unpacked - The package is unpacked, but not configured
- half-configured - The package is unpacked and configuration has started but not completed
- triggers-awaited - The package awaits trigger processing by another package
- triggers-pending - The package has been triggered
- installed - The package is unpacked and configured OK
If you look at the first column of the dpkg --list output it shows rc. The r in this column means the package is removed, which as we saw above means the configuration files are left on the system. The c in this column shows that the package is in the config-files state, meaning only the configuration files are deployed on this system.
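These two letters can be decoded mechanically; here is a small sketch (explain_flags is a hypothetical helper, not part of dpkg) that maps some common flag pairs to the states listed above:

```shell
#!/bin/bash
# decode the first column of 'dpkg --list': desired state + current state
explain_flags() {
  case "$1" in
    ii) echo "install requested, installed" ;;
    rc) echo "remove requested, config-files remain" ;;
    un) echo "unknown, not-installed" ;;
    pn) echo "purge requested, not-installed" ;;
    *)  echo "unrecognised flags: $1" ;;
  esac
}

explain_flags rc    # the state our supervisor package was in
```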
When running apt-get install, the apt package manager will look up the current state of the package; when it sees that the package is already in the config-files state it simply skips the configuration file portion of the package installation. Since I manually removed the configuration files outside of the apt or dpkg process, the configuration files are gone and will not be deployed with a simple apt-get install.

How to resolve it and remove configurations properly

Purging the package from my system
At this point, I found myself with a broken installation of supervisor. Luckily, we can fix the issue by using the purge option of apt-get.

# apt-get purge supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following packages will be REMOVED:
  supervisor*
0 upgraded, 0 newly installed, 1 to remove and 0 not upgraded.
1 not fully installed or removed.
After this operation, 1,521 kB disk space will be freed.
Do you want to continue [Y/n]? y
(Reading database ... 14158 files and directories currently installed.)
Removing supervisor ...
Stopping supervisor: supervisord.
Purging configuration files for supervisor ...
dpkg: warning: while removing supervisor, directory '/var/log/supervisor' not empty so not removed
Processing triggers for ureadahead ...

Purge vs Remove
The purge option of apt-get is similar to remove, but with one difference: purge removes both the package and its configurations. After running apt-get purge we can see that the package was fully removed by running dpkg --list again.

# dpkg --list supervisor
dpkg-query: no packages found matching supervisor

Re-installation without error
Now that the package has been fully purged and its state is now not-installed, we can re-install without errors.

# apt-get install supervisor
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  supervisor
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/314 kB of archives.
After this operation, 1,521 kB of additional disk space will be used.
Selecting previously unselected package supervisor.
(Reading database ... 13833 files and directories currently installed.)
Unpacking supervisor (from .../supervisor_3.0b2-1_all.deb) ...
Processing triggers for ureadahead ...
Setting up supervisor (3.0b2-1) ...
Starting supervisor: supervisord.
Processing triggers for ureadahead ...
As you can see from the output above, the supervisor package has been installed and started. If we check the /etc/supervisor directory again we can also see the necessary configuration files.

# ls -la /etc/supervisor/
total 16
drwxr-xr-x  3 root root 4096 Aug 17 19:46 .
drwxr-xr-x 68 root root 4096 Aug 17 19:46 ..
drwxr-xr-x  2 root root 4096 Jul 30  2013 conf.d
-rw-r--r--  1 root root 1178 Jul 30  2013 supervisord.conf

You should probably just use purge in most cases
After running into this issue I realized that most of the time I ran apt-get remove, I really wanted the functionality of apt-get purge. While it is nice to keep configurations handy in case we need them after re-installation, using remove all the time also leaves random config files cluttering your system, free to cause configuration issues when packages are removed and then re-installed.
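Those leftover packages can be found by filtering the rc state out of dpkg -l output; a sketch (list_rc is a hypothetical helper, demonstrated here against sample output so it runs anywhere):

```shell
#!/bin/bash
# keep rows whose status flags are "rc" and print the package-name column
list_rc() { awk '$1 == "rc" { print $2 }' ; }

# demo against sample 'dpkg -l'-style output:
printf '%s\n' \
  'ii  bash        5.0      amd64  GNU Bourne Again SHell' \
  'rc  supervisor  3.0b2-1  all    process control system' | list_rc
# prints: supervisor
# on a real system: dpkg -l | list_rc, then apt-get purge the names it prints
```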
In the future I will most likely default to apt-get purge.
Originally Posted on BenCane.com: Go To Article
The syntax to create a password box is
whiptail --passwordbox "Enter password" 10 30
The password entered will not be visible and will appear only as *. By default the entered password is sent to standard error.
To use the password in the script, we will have to swap standard error and standard output, which can be done as below.
input=$(whiptail --passwordbox "Enter password" 10 30 3>&1 1>&2 2>&3)
Whatever password the user enters will be stored in the variable input.
Here is an example script in which we take the user's password and verify whether it is correct.
#!/bin/bash
pass="Hello"
echo "This is an example of taking password using whiptail"
sleep 2
input=$(whiptail --passwordbox "Enter password" 10 30 3>&1 1>&2 2>&3)
if [ "$pass" == "$input" ]
then
  whiptail --ok-button Done --msgbox "Correct password" 10 30
else
  whiptail --ok-button Done --msgbox "Incorrect password" 10 30
fi
Save the script, give it execute permission and run it.

$ chmod 777 whiptail_password.sh
$ ./whiptail_password.sh
Other than just displaying messages, scripts also take input from the user. Whiptail can be used to create a small GUI for the user to enter his or her input.
To create an input box we need to use the option --inputbox, the syntax being
whiptail --inputbox "Enter anything" 10 30
By default whiptail passes the input entered to the standard error console, and in a usual shell we will be able to see the entered text on the terminal after we exit from the whiptail window.
But if we want to use the entered input by storing it in some variable, we will have to perform a small manipulation, as there is no direct way of doing it in whiptail.
The simple manipulation we will have to do is swap the standard error console and the standard output console which can be done by appending
3>&1 1>&2 2>&3
to the command.
Thus we can write the whiptail command as
$ val=$(whiptail --inputbox "Enter some text" 10 30 3>&1 1>&2 2>&3)
The variable val will get the value that is entered in the inputbox.
Here is an example script to see the use of the inputbox.
#!/bin/bash
echo "This is an example of taking user input using whiptail"
sleep 2
echo "Enter your name"
name=$(whiptail --inputbox "Enter your name" 10 30 3>&1 1>&2 2>&3)
echo "Hello $name"
whiptail --ok-button Done --msgbox "Hello $name" 10 30
Give the script execute permission and run it to see the output
$ chmod 777 whip_input.sh
$ ./whip_input.sh
From the Bad Voltage site:
We return to our normal places around the globe and continue to bring you tasty things for your ears. Stuart Langridge, Jono Bacon, myself and an inexcusable absence of Bryan Lunduke think up a new competition involving things in our houses, and also discuss:
- Where’s our free culture revolution? Lawrence Lessig predicted a world filled with remix culture and set up the Creative Commons, but little to none of it has actually changed the way people think. Or has it? (2.25)
- Graham Morrison of Linux Voice answers some of the questions from our discussion in the previous show about their magazine, their process, and why the December issue comes out in June (17.45)
- Mashed Voltage: we start a new competition. Read about it on the forum and listen to it on the show, and enter before September 10th to celebrate remix culture (36.48)
- Jono reviews the Samsung Gear Live smartwatch and the team watch, smartly (40.25)
- Open source projects sometimes decide that they’ll build their own hardware. When they try, they fail. Why? What motivates the decision to do it in the first place? (54.22)
Listen to 1×22: Oval Ted Bag
As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.
The syntax to create a yesno box using whiptail is
whiptail --yesno "Choose yes or no" 10 20
If we select yes, the return value of the command will be 0, for no the return value will be 1. We can use the return value to decide the further course of action for the script.
Here is a script that keeps popping up the yes/no box as long as the user does not select "no" to stop the script.
#!/bin/bash
echo "Whiptail yes no example"
sleep 2
cont=0
while [ $cont -eq 0 ]
do
  whiptail --ok-button Continue --msgbox "Press continue" 10 20
  whiptail --yesno "Press yes to continue no to exit" 10 40
  cont=$?
done
echo "Thank you"
One way of creating small GUIs in shell scripts is by the use of whiptail.
Here is how we can create a message box using whiptail. To create a message box we need to use the option --msgbox.
$ whiptail --msgbox Hello 10 20
The above command will create a message box of size 10 x 20 with the text Hello in it.
By default the text that appears for exiting the message box is OK.
We can change that text using the option --ok-button
$ whiptail --ok-button Done --msgbox Hello 10 20
#!/bin/bash
echo "Welcome to message box example"
sleep 2
whiptail --msgbox Hello 10 20
echo "Message box example with Done button"
sleep 2
whiptail --ok-button Done --msgbox Hello 10 20
echo "Thank you"
By default Kate tries to open and save files starting in the ~/Documents folder. While this may be convenient for some, I'd like to change it. Try as I might, I couldn't find this option in the configuration of Kate. I remember this was an option, and it was removed. Then it was a command line option, but kate --help-all shows it has been removed.
The only way I found to change this is via KDE > System Settings > Account Details > Paths > Document Path:
Changing that to your desired directory works fine.