LinuxPlanet

By Linux Geeks, For Linux Geeks.

Windows Remote Desktop on GNU/Linux

Tue, 2015-01-13 02:07

These are the accompanying show notes for a Hacker Public Radio podcast episode.

I wrote a bash script to connect to various Windows servers from my GNU/Linux desktops. I had a few requirements:

  • I should be able to call it based on hostname.
  • All windows should be 90% of my screen size.
  • It should map my keyboard.
  • It should map my local disk.
  • It should quickly timeout if the port is not available.

You can get the full script here, but let’s walk through it:

The first line calls bash; the script then gets the server name from the symlink used to call it. The port is set to "3389", but you can change that if you like.

#!/bin/bash
SERVER=`basename $0`
PORT="3389"

The next few lines find the smallest vertical and horizontal sizes, even if you are running multiple screens, and then calculate 90% of that to use as the window size.

h=$(echo "scale=0;(($(xrandr | grep '*+' | sed 's/x/ /g' | awk '{print $1}' | sort -n | head -1 )/100)*90)" | bc)
v=$(echo "scale=0;(($(xrandr | grep '*+' | sed 's/x/ /g' | awk '{print $2}' | sort -n | head -1 )/100)*90)" | bc)
SIZE=${h}x${v}

Next we set the default username and password. I have it ask me for my password but I put it in here as an example.

PASSWORD='defaultpassword'
USERNAME='administrator'
WORKGROUP='workgroup'

In some cases the credentials may be different, so I have a case statement that will cycle through the servers and apply the differences. Depending on your naming schemes you may be able to use regular expressions here to filter out groups of servers.

case "${SERVER}" in *server*) echo "Server ${SERVER}" PASSWORD='work_password' USERNAME='administrator' WORKGROUP='WORKGROUP' ;; *colo*) echo "Server ${SERVER}" PASSWORD='colo_server_password' USERNAME='administrator' WORKGROUP='COLODOMAIN' ;; some_server ) echo "Server ${SERVER}" PASSWORD='some_server_password' USERNAME='some_server_password' ;; *) echo "No match for ${SERVER}, using defaults" ;; esac

Next we use bash's built-in /dev/tcp feature, wrapped in the timeout command, to see if the remote port is open, giving up after one second.

timeout 1 bash -c "echo >/dev/tcp/${SERVER}/${PORT}"

I used to connect over RDP using the program rdesktop, but it is now of limited value because there are many open bugs that are not getting fixed, such as Bug 1075697 ("rdesktop cannot connect to systems using RDP version 6 or newer") and Bug 1002978 ("Failed to negotiate protocol, retrying with plain RDP"). I then switched to using xfreerdp, the client behind Remmina.

You can use xfreerdp /kbd-list to get a list of the available keyboard layouts.
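For example, to find the hex code for a particular layout (the grep pattern here is just an illustration; 0x00000409, used in the script below, is US English):

xfreerdp /kbd-list | grep -i "United States"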

if [ $? -eq 0 ]; then
  echo "${SERVER}:${PORT} is open"
  xfreerdp /v:${SERVER} /size:${SIZE} /kbd-type:0x00000409 /t:${SERVER} /d:${WORKGROUP} /u:${USERNAME} /p:${PASSWORD} /a:drive,pc,/ /cert-ignore &
else
  echo "${SERVER}:${PORT} is closed"
fi

Next you will need to be sure that your host names are resolvable, either in DNS or in your /etc/hosts file. For example:

10.1.0.1 server1
10.1.0.2 server2
10.1.0.3 server3
10.2.0.1 coloserver1
10.2.0.2 coloserver2
10.2.0.3 coloserver3
192.168.1.1 some_server

Edit the script to your liking and then put it into a directory in your PATH, for example /usr/local/bash or ~/bin/. You can then make symbolic links, named after your servers, that point to the script, also in a directory in your PATH, using commands like:

ln -s /usr/local/bash/rdp.bash ~/bin/some_server
chmod +x ~/bin/some_server

This links the global rdp.bash script to your personal symlink and makes it executable.

All you need to do then is type the name of the server, and an RDP window should pop up.

In our example:

$ some_server

From there your Windows Server session should pop up.

Categories: FLOSS Project Planets

Concert for India’s Environment: Chinmaya Dunster

Sat, 2015-01-10 17:49
By Vasudev Ram



A really good music video by Chinmaya Dunster: Concert for India's Environment. I first saw it a while ago and happened to see it again today via a chain of links. Evergreen music and message.

"The video of this concert, blended with interviews with environmentalists, Indian school children reading their own poems on nature and stunning footage of the Indian wilderness, is available free on request at http://www.chinmaya-dunster.com/concert-environment-2.php."


- Vasudev Ram - Dancing Bison Enterprises

Signup to hear about new products or services from me.

Contact Page



Categories: FLOSS Project Planets

Good HN thread on scaling a web app to 10K users

Sat, 2015-01-10 16:27
By Vasudev Ram




Saw this today on Hacker News:

What does it take to run a web app with 5K – 10K users?

I read the thread and found that it had a number of good comments on both sides of the equation: the business side, i.e. acquiring, retaining and supporting users, and the technical side, i.e. scaling the hardware and software to manage the load on the app. Many of the people who commented run their own web apps at the same or higher scale.

Overall, a worthwhile read, IMO.

- Vasudev Ram - Dancing Bison Enterprises

Signup to hear about new products or services from me.

Contact Page


Categories: FLOSS Project Planets

10 reasons to migrate to MariaDB (if still using MySQL)

Fri, 2015-01-09 09:00

The original MySQL was created by a Finnish/Swedish company, MySQL AB, founded by David Axmark, Allan Larsson and Michael "Monty" Widenius. The first version of MySQL appeared in 1995. It was initially created for personal use but within a few years evolved into an enterprise-grade database, and it became the world's most popular open source relational database software – and it still is. In January 2008, Sun Microsystems bought MySQL for $1 billion. Soon after, Oracle acquired all of Sun Microsystems, after getting approval from the European Commission in late 2009; the Commission had initially halted the transaction over concerns that such a merger would harm the database market, as MySQL was the main competitor to Oracle's database product.

Out of distrust in Oracle's stewardship of MySQL, the original developers of MySQL forked it and created MariaDB in 2009. As time has passed, MariaDB has replaced MySQL in many places, and everybody reading this article should consider it too.

At Seravo, we migrated all of our own databases from MySQL to MariaDB in late 2013, and during 2014 we also migrated our customers' systems to MariaDB.

We recommend that everybody still using MySQL in 2015 migrate to MariaDB, for the following reasons:

1) MariaDB development is more open and vibrant

Unlike many other open source projects Oracle inherited from the Sun acquisition, Oracle does indeed still develop MySQL, and to our knowledge it has even hired competent new developers after most of the original developers resigned. The next major release, MySQL 5.7, will have significant improvements over MySQL 5.6. However, the commit log of 5.7 shows that all contributors are @oracle.com. Most commit messages reference issue numbers that exist only in an internal tracker at Oracle and are thus not open for public discussion. There have been no new commits in the last 3 months because Oracle seems to update the public code repository only in big batches post-release. This does not strike us as a development effort that would benefit from the public feedback loop and Linus's law of "given enough eyeballs, all bugs are shallow".

MariaDB, on the other hand, is developed fully in the open: all development decisions can be reviewed and debated on a public mailing list or in the public bug tracker. Contributing patches to MariaDB is easy, and the patch flow is transparent in the fully public and up-to-date code repository. The GitHub statistics for MySQL 5.7 show 24 contributors, while the equivalent figure for MariaDB 10.1 is 44 contributors. But it is not just a question of code contributors – in our experience MariaDB also seems more active in documentation efforts, distribution packaging and the other related things that are needed in day-to-day database administration.

Because of the big momentum MySQL has had, there is still a lot of community around it but there is a clear trend that most new activities in the open source world revolve around MariaDB.

As Linux distributions play a major role in software delivery, testing and quality assurance, the fact that both RHEL 7 and SLES 12 ship with MariaDB instead of MySQL increases the likelihood that MariaDB will be better maintained, both upstream and downstream, in the years to come.

2) Quicker and more transparent security releases

Oracle's policy is to make security releases (and related announcements) only every three months, for all of their products. MySQL, however, gets a new release every two months. Sometimes this leads to situations where security upgrades and security information are out of sync. The MySQL release notes also do not list all the CVE identifiers the releases fix. Many have complained that the actual security announcements are very vague and do not identify the actual issues or the commits that fixed them, which makes backporting and patch management impossible for those administrators who cannot always simply upgrade to the latest Oracle MySQL release.

MariaDB, however, follows good industry practice by releasing security announcements and upgrades at the same time and handling pre-release secrecy and post-release transparency properly. The MariaDB release notes also list CVE identifiers meticulously, and the project even seems to update the release notes afterwards if new CVE identifiers are created for issues that MariaDB has already fixed.

3) More cutting edge features

MySQL 5.7 is looking promising and has some cool new features like GIS support. However, MariaDB has had many more new features in recent years, they are released earlier, and in most cases those features seem to go through a more extensive review before release. Therefore we at Seravo trust MariaDB to deliver the best features with the fewest bugs.

For example, GIS features were introduced as early as the 5.3 series of MariaDB, which makes storing coordinates and querying location data easy. Dynamic column support (MariaDB only) is interesting because it allows for NoSQL-type functionality, and thus one single database interface can provide both SQL and "not only SQL" for diverse software project needs.
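As a rough sketch of what dynamic columns look like in practice, run through the command-line client against a scratch database (named dynamic columns as shown here arrived in MariaDB 10.0; the 5.3 series used numbered columns, and the table and attribute names below are made up):

mysql -u root -p test <<'SQL'
CREATE TABLE items (id INT PRIMARY KEY, attrs BLOB);
INSERT INTO items VALUES (1, COLUMN_CREATE('color', 'blue', 'size', 'XL'));
SELECT COLUMN_GET(attrs, 'color' AS CHAR) AS color FROM items;
SQL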

4) More storage engines

MariaDB particularly excels in the number of storage engines and other plugins it ships with: Connect and Cassandra storage engines for NoSQL backends or rolling migrations from legacy databases, Spider for sharding, TokuDB with fractal indexes, etc. These plugins are available for MySQL as well via third parties, but in MariaDB they are part of the official release, which guarantees that the plugins are well integrated and easy to use.

5) Better performance

MariaDB claims to have a much-improved query optimizer and many other performance-related improvements. Certain benchmarks show that MariaDB is radically faster than MySQL. Benchmarks, however, don't always translate directly to real-life situations. For example, when we at Seravo migrated from MySQL to MariaDB, we saw moderate 3-5% performance improvements in our real-life scenarios. Still, when it all adds up, 5% is relevant, in particular for web server backends where every millisecond counts. Faster is always better, even if it is just a bit faster.

6) Galera active-active master clustering

Galera is a new kind of clustering engine which, unlike traditional MySQL master-slave replication, provides master-master replication and thus enables a new kind of scalability architecture for MySQL/MariaDB. Although Galera development started back in 2007, it has never been part of the official Oracle MySQL version, while both the Percona and MariaDB flavors have shipped a Galera-based cluster version for years.

Galera support will be even better in MariaDB 10.1, as it will be included in the main version (rather than in a separate cluster version), and enabling Galera clustering will be just a matter of activating the correct configuration parameters in any MariaDB server installation.

7) Oracle stewardship is uncertain

Many people have expressed distrust of Oracle's true motivations and interest in keeping MySQL alive. As explained in point 1, Oracle initially wasn't allowed to acquire Sun Microsystems, which owned MySQL, due to EU competition legislation: MySQL was the biggest competitor to Oracle's original database. The European Commission approved the deal only after Oracle published an official promise to keep MySQL alive and competitive. That document included an expiry date, December 14th 2014, which has now passed. One can only guess what Oracle's upper management has in mind for the future of MySQL.

Some may argue that in recent years Oracle has already weakened MySQL in subtle ways. Maybe, but in Oracle's defense it should be noted that MySQL activities have been much more successful than, for example, OpenOffice or Hudson, which were quickly forked into LibreOffice and Jenkins with such momentum that the original projects dried up in less than a year.

However, given the choice between Oracle and a true open source project, the decision should not be hard for anybody who understands the value of software freedom and the evolutive benefits that stem from global collaborative development.

8) MariaDB has leapt in popularity

In 2013 there was news about Wikipedia migrating its enormous wiki system from MySQL to MariaDB, and about Google using MariaDB in its internal systems instead of MySQL. One of the MariaDB Foundation sponsors is Automattic, the company behind WordPress.com. Other notable examples are booking.com and Craigslist. Fedora and openSUSE have had MariaDB as the default SQL database option for years. With the releases of Red Hat Enterprise Linux 7 and SUSE Linux Enterprise 12, both these vendors ship MariaDB instead of MySQL and promise to support their MariaDB versions for the lifetime of the major distribution releases, that is, up to 13 years.

The last big distribution to get MariaDB was Debian (and based on it, Ubuntu). The “intent to package” bug in Debian was already filed in 2010 but it wasn’t until December 2013 that the bug finally got closed. This was thanks to Seravo staff who took care of packaging MariaDB 5.5 for Debian, from where it also got into Ubuntu 14.04. Later we have also packaged MariaDB 10.0, which will be included in the next Debian and Ubuntu releases in the first half of 2015.

9) Compatible and easy to migrate

MariaDB 5.5 is a complete drop-in-replacement for MySQL 5.5. Migrating to MariaDB is as easy as running apt-get install mariadb-server or the equivalent command on your chosen Linux flavor (which, in 2015, is likely to include MariaDB in the official repositories).

Despite the migration being easy, we still recommend that database admins undertake their own testing and always back up their databases, just to be safe.
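As a minimal sketch of a cautious migration on Debian/Ubuntu (the package and service names here are assumptions that may differ on your distribution):

# mysqldump --all-databases --events --routines > all-databases.sql
# service mysql stop
# apt-get install mariadb-server
# mysql --version    # should now report a MariaDB version string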

10) Migration might become difficult after 2015

With MariaDB 10.0 and MySQL 5.6 the forks have already started to diverge somewhat, but most likely users can still just upgrade from 5.6 to 10.0 without problems. The compatibility between the future 5.7 and 10.1 is unknown, so the ideal time to migrate is now, while it is still hassle-free. If binary incompatibilities arise in the future, database admins can always still migrate their data by dumping it and importing it into the new database.

With the above in mind, MariaDB is clearly our preferred option.

One of our customers once expressed interest in migrating from MySQL to MariaDB and wanted us to confirm whether MariaDB is bug-free. Tragically, we had to disappoint them with a negative answer. However, we did assure them that the most important things are done correctly in MariaDB, which certainly makes it worth migrating to.

Categories: FLOSS Project Planets

Convert TSV (Tab Separated Values) to PDF with xtopdf

Thu, 2015-01-08 17:46
By Vasudev Ram


I wrote this program, TSVToPDF.py, as a demo of how to convert TSV data to PDF, using my xtopdf toolkit.

TSV, which stands for Tab Separated Values, is a common data format. From the Wikipedia article linked in the previous sentence:

"TSV is a simple file format that is widely supported, so it is often used to move tabular data between different computer programs that support the format. For example, a TSV file might be used to transfer information from a database program to a spreadsheet.

TSV is an alternative to the common comma-separated values (CSV) format, which often causes difficulties because of the need to escape commas – literal commas are very common in text data, but literal tab stops are infrequent in running text. The IANA standard for TSV achieves simplicity by simply disallowing tabs within fields."

TSVToPDF.py uses the TSVReader module, for reading TSV data, and uses the PDFWriter module, for writing the PDF output. Both TSVReader.py and PDFWriter.py are part of my xtopdf toolkit for PDF creation in Python.

Here is TSVToPDF.py:
"""
TSVToPDF.py
A demo program to show how to convert TSV data to PDF,
where TSV stands for Tab Separated Values, a data format commonly
used on Unix and other operating systems.
Author: Vasudev Ram - http://www.dancingbison.com
Copyright 2015 Vasudev Ram
"""

import sys
from TSVReader import TSVReader
from PDFWriter import PDFWriter

def usage():
sys.stderr.write("Usage: python " + sys.argv[0] + " tsv_file pdf_filen")

def main():
# check for right # of args
if (len(sys.argv) != 3):
usage()
sys.exit(1)

# extract tsv and pdf filenames from args -
# using Python's parallel assignment
tsv_fn, pdf_fn = sys.argv[1:3]

# create and open the TSVReader instance
tr = TSVReader(tsv_fn)
tr.open()

# create the PDFWriter instance
# and set some of its fields:
pw = PDFWriter(pdf_fn)
pw.setFont("Courier", 10)
pw.setHeader("Conversion of TSV data to PDF: Input: " + tsv_fn)
pw.setFooter("Generated by xtopdf: http://slid.es/vasudevram/xtopdf")

sep = '=' * 68
pw.writeLine(sep)

# print the TSV data to PDF
rec_num = 0
try:
while True:
row = tr.next_row()
s = ""
for col in row:
s = s + col + " "
pw.writeLine(str(rec_num).rjust(5) + ": " + s)
rec_num += 1
except StopIteration:
pass

pw.writeLine(sep)
tr.close()
pw.savePage()

if __name__ == '__main__':
main()

# EOF

I ran the demo program like this:
python TSVToPDF.py file1.tsv file1.pdf
where file1.tsv was a TSV file that I created for the purpose of testing.
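If you want to try it yourself, a quick way to create a small test file from the shell (the field values here are arbitrary):

$ printf 'Name\tCity\tCountry\n' > file1.tsv
$ printf 'Alice\tLondon\tUK\nBob\tPune\tIndia\n' >> file1.tsv
$ python TSVToPDF.py file1.tsv file1.pdf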

And here is a screenshot of the output PDF file, in Foxit PDF Reader:


- Vasudev Ram - Dancing Bison Enterprises

Signup to hear about new products or services from me.

Contact Page


Categories: FLOSS Project Planets

Aw Crap

Thu, 2015-01-08 13:29

Sometimes crap happens, and it just so happened that today, while I was doing an unattended update to WordPress and FeedWordPress for LinuxPlanet Oggs and Casts, something twigged and deleted all the feeds. If you had a feed in one of those places getting syndicated, well, now you don't. It looks to me like there was a category shift, and I do see what I assume to be all the former syndicated participants listed in the blogroll. I am going to attempt to go through there and re-add everyone; however, if you do NOT see your feed show back up within a couple of days, or something seems wrong with your content, etc., then please shoot me an email and we'll get things straightened back out.

–Linc.

Categories: FLOSS Project Planets

GC Log Viewer – A Web App to Graph JVM -Xloggc: Log Files!

Thu, 2015-01-08 07:06

GCLogViewer is an online Java garbage collection log graphing application that runs entirely in the browser. It takes the log file generated by the -Xloggc: option and produces a simple graph showing the JVM memory management and collections over the duration of the log file.
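If you haven't generated such a log before, a minimal sketch of enabling GC logging on a pre-Java 9 JVM looks like this (myapp.jar is a placeholder for your own application):

java -Xloggc:gc.log -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -jar myapp.jar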

HTML/Javascript Application Written Using AngularJS

The web application runs entirely in the browser, with no information being uploaded to or processed on the server. Log files are ingested into the application once loaded, and graphs can be generated from the loaded data. Any ingested log files are persisted between browser sessions, so you can load daily or weekly files and compare them.

The application was written partly to get a real-world example of a non-trivial HTML5/JavaScript application, and partly to provide our consultants with an easy-to-use, readily available tool for troubleshooting JVM performance issues. The application makes use of AngularJS, Bootstrap for styling and SVG for graphing.

GCLogViewer - JVM Performance Tuning

Now it is easy to get a quick overview of your application's memory management from the GC log files, rather than wading through lines of obscure and obfuscated output to try to get an idea of where the problem might be. One key graph we will be adding in the future is garbage collection pause times. The data is already parsed and loaded into IndexedDB; it just needs to be added to the graph.


Categories: FLOSS Project Planets

Soylent 1.3 Review

Wed, 2015-01-07 16:52

In the next episode of Bad Voltage, I review Soylent 1.3. I typically post the review text after an episode comes out, but as I did with the Kindle Voyage review, I'm going to post it ahead of time. Why? Well, during the show the rest of the Bad Voltage team and I discuss the review, and after reading this I hope you're interested enough to listen in when the show comes out tomorrow. In the meantime, you can listen to our holiday episode (where we discuss how we got into technology, where we think tech will be in 2024, and review our 2014 predictions) here: A Hannu-pancha-festi-christ-wanzaa-newton-vent Story

Soylent 1.3

For some, food and the act of eating are merely about sustenance. That mindset is antithetical to the way I approach gastronomy. That said, when Soylent hit the crowdfunding scene, I was intrigued. And I wasn't the only one. They had over $2M in pre-orders using Tilt and have since raised roughly $1.5M from venture capitalists.

So, what is Soylent? Unlike its eponymous plankton-colored movie nutrition source, it's not people. It is a meal replacement drink that aims to be nutritionally complete, low cost, easy to prepare and flavor neutral. For those like Soylent's creator, who "resented the time, money, and effort the purchase, preparation, consumption, and clean-up of food was consuming", it can be used in lieu of food for all three meals. During the initial formulation of the product he even subsisted on nothing but Soylent for 30 days, and he has been living on a 90% Soylent diet ever since. For those who actually enjoy eating, it can also be used to replace individual meals at your discretion. It has a 50/30/20 ratio of carbohydrates, fats and protein, and a 3-serving pouch contains 2,010 calories if you include the optional oil mixture. A 7-pouch box is $85 as a one-time purchase with the starter kit, or $70 as a monthly subscription.

I placed my order on July 1st and received it on December 15th. That's correct: it took five and a half months. Unfortunately, based on the shipping estimates currently on the website, things haven't improved much since I placed my order. Do note that reorders should ship in 1-2 weeks, which is much more reasonable.

So, now that I actually have Soylent, what do I think? I should note here that radically altering your diet in the way Soylent's creator has could have potentially serious health ramifications. Before you consume nothing but a nutrient slurry you heard about on Bad Voltage, created by someone you don't know on the Internet, you should definitely do a copious amount of research and probably speak with a medical professional. Realistically, I don't think we'll know the true long-term implications of something like this anytime soon. With that out of the way, let me say that as a tech guy, I really like what they are doing. While they're happy to sell you the product, there is a huge portion of the site dedicated to DIY that allows you to access and tweak their recipe to your liking and make it at home. This is not your average company. Additionally, they actually version the product and are iterating on it fairly quickly. The shipment I received was Soylent 1.3, which replaced the primary source of potassium, tweaked the flavor and changed the packaging. Soylent 1.2 replaced fish oil with algae oil to make the product animal free and removed the enzyme blend added in a previous version, while Soylent 1.1 reduced the amount of sucralose, added the aforementioned enzyme blend to improve digestion and updated the packaging. I don't know of any other food vendor that details the changes in their product in this manner, but it's a trend I welcome.

On to the actual product. The taste has been described as purposefully bland, and that's not far off. Opinions seem to vary widely, but to me it has a very mild vanilla flavor. I didn't use a blender for my initial tests and the product is slightly gritty, but certainly tolerable to me. Others I had taste Soylent did not concur with my assessment. Leaving Soylent in the refrigerator overnight helped the consistency immensely. I had almost nothing but Soylent for breakfast and lunch over the last two days, which resulted in me feeling sated and having normal energy levels. I ran three miles before dinner yesterday and can't say I noticed any difference between how I felt during that and a normal run. I had none of the gastric distress, intestinal discomfort or soul-crushing flatulence that has been reported by some.

So, what's the Bad Voltage verdict? I can't imagine consuming nothing but Soylent for 3 meals a day, every day. I just like food too much. Even if I didn't, I think the acts of cooking, eating and sharing food have a profound impact on local culture, one I'd hate to see go away. But the openness and transparency of the company, their willingness to iterate, and the nutritional completeness along with ease of preparation do mean I'll likely use it to replace breakfast and lunch a couple of times a week moving forward. Now, is Soylent right for you? That's too dependent on your gastronomic proclivities and intestinal fortitude for me to say.

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets

Teleborder facilitates labor immigration through technology

Tue, 2015-01-06 16:05
By Vasudev Ram

Seen via Hacker News.



Teleborder's goals seem interesting.

Teleborder is a startup that is trying "to bring free movement of labor to the world through technology. Right now, that means helping companies manage immigration, tax, and relocation for their expatriate employees. We grew our customer base 10x in 2014 and are looking to beat that in 2015. We're currently a tight-knit team of 13 and well funded by top tier investors, including YC (S13)."

The picture above is of a Chinese passport from the Qing Dynasty, 24th Year of the Guangxu Reign - 1898.
From Wikipedia.

- Vasudev Ram - Python programming and training

Signup to hear about new products or services from me.

Contact Page


Categories: FLOSS Project Planets

Basic Docker Orchestration with Google Kubernetes on Fedora

Mon, 2015-01-05 09:48
Kubernetes is a new framework by Google to manage Linux container clusters. I started playing with it today and it seems like a cool, powerful tool for managing a huge barrage of containers and for ensuring that a predefined number of containers are always running. Installation and configuration instructions for Fedora and many other distributions can be found in these Getting Started Guides. I recommend using two machines for this experiment (one physical and one VM is fine). The Kubelet (or minion) is where the Docker containers will run, so use the more powerful machine for that.

After the installation we'll see something like below when we look for minions from kube master:
master# kubectl get minions
NAME                LABELS
fed-minion          <none>

Now we move on to the Kubernetes 101 Walkthrough, where we will run a container using the yaml from the Intro section.
master# kubectl create -f kubeintro.yaml

…except that (as of 25 Dec 2014) it won't run. It will give an error like this:
the provided version "v1beta1" and kind "" cannot be mapped to a supported object

It turns out that the field "kind" is empty, so kubectl won't be able to run the container. Correct this so that kubeintro.yaml looks like this:

master# cat kubeintro.yaml
apiVersion: v1beta1
kind: Pod
id: www
desiredState:
  replicas: 2
  manifest:
    version: v1beta1
    id: www
    containers:
      - name: nginx
        image: dockerfile/nginx

Optional: Now, I do not know exactly what is inside the image "dockerfile/nginx". So I would replace it with something that I want to spawn, like the "adimania/flask" image. The Dockerfile for my flask image can be found in the Fedora-Dockerfiles repo.

Once kubeintro.yaml is fixed, we can run it on the master and we'll see that a container is started on the minion. We can stop the container on the minion using the docker stop command, and we'll see that Kubernetes starts the container again.
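To watch this behaviour for yourself, something like the following on the minion should do (the container ID will differ on your host):

minion# docker ps                  # note the ID of the nginx container
minion# docker stop <container-id>
minion# docker ps                  # shortly after, Kubernetes has started a replacement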

The example above doesn't do much. We need to publish the container's ports so that we can access the webpage it serves. Modify kubeintro.yaml to tell it to publish ports like this:

master# cat kubeintro.yaml
apiVersion: v1beta1
kind: Pod
id: www
desiredState:
  replicas: 2
  manifest:
    version: v1beta1
    id: www
    containers:
      - name: nginx
        image: dockerfile/nginx
        ports:
          - containerPort: 80
            hostPort: 8080

Now delete the older pod named www and start a new one from the new kubeintro.yaml file.
master# kubectl delete pod www
master# kubectl create -f kubeintro.yaml

We can point a browser at localhost:8080 and we'll see Nginx serving the default page. (If we had used the "adimania/flask" image, we would have seen "Hello from Fedora!" instead.)
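The same check can be done from a shell (a quick sketch, assuming the pod was scheduled on the local minion):

minion# curl -s http://localhost:8080 | head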
Categories: FLOSS Project Planets

Updated Playlists, Tutorials, Linux Re-spins/Custom Distributions

Sun, 2015-01-04 05:11
Updated and added a few more playlists to my YouTube account.
  • Classical-19-Dec-14
  • Soundtrack-19-Dec-14
  • Opera-4-Dec-14 
  • Disco-4-Jan-15
https://www.youtube.com/channel/UCwVJG67iHHPbmBxuHVbyOlw/playlists
Some of the software that I use:
  • Ableton
  • reFX Nexus and associated libraries
  • Rob Papen's Synthesisers 
  • Native Instruments Komplete 8
  • Akai MPC Studio and associated libraries
  • Native Instruments Maschine
  • Sylenth1
  • U-He's Synthesisers
  • Cakewalk's Rapture
  • Best Service Engine2 and associated libraries
  • Spectrasonics Omnisphere, Trilogy, Trillian
  • Vir2 VI.One
Some of the hardware that I use:
  • Native Instruments Maschine
  • Korg Triton Taktile 25
  • Novation XioSynth25
  • Roland A-49
  • Audio Technica ATH-M50x
Some of the cheaper, smaller, hardware synthesiser options out there:
  • Korg MicroKorg, Triton Range
  • Novation's XioSynth, MiniNova, UltraNova Range
Most of them will have systemic problems associated with them, but a lot of the time they're easily fixable and service manuals are available online.
https://www.gearslutz.com/board/geekslutz-forum/931707-microkorg-2-keys-dead-need-help-figure-out-whats-wrong.html
http://www.scribd.com/doc/127159097/Korg-microKORG-sm-pdf
What people are actually buying out there:
http://www.ariacharts.com.au/chart/
http://www.billboard.com/charts
http://www.vmusic.com.au/charts
http://www.ariacharts.com.au/about/chart-stores
How to build something simple has been bugging me for a while now.
http://electrictrumpet.com/SympleSynth/
https://www.facebook.com/ReaktorTutorials
http://www.reaktortips.com/
http://www.nireaktor.com//
http://www.native-instruments.com/forum/threads/7-hours-of-free-reaktor-tutorials.209915/
http://www.nireaktor.com/category/reaktor-tutorials/
https://www.youtube.com/watch?v=mtPuGwIq_DY
http://flipmu.com/work/software/
https://www.udemy.com/build-a-synth-in-reaktor/
http://en.wikibooks.org/wiki/Category:Reaktor
http://www.sonicacademy.com/Training+Videos/Course+Overview/Learn-How-To-Make-a-Synth-using-Native-Intruments-Reaktor.cid2095
http://adsrsounds.com/synth/reaktor/
http://en.wikibooks.org/wiki/Reaktor/Tutorials/Requests
http://proaudiozone.eu/education/groove3-reaktor-explained-tutorial/
http://createdigitalmusic.com/2006/09/learning-reaktor-online-tutorials-samples-discussion/
http://www.sylenthtutorials.com/
http://www.absynthtutorials.com/
http://www.massivesynth.com/
http://www.deephousetutorials.com/
Fitness tests for entrance to the ADF. Interesting for those of you out there.
http://www.defencejobs.gov.au/fitness/techniques/
http://www.defencejobs.gov.au/recruitmentCentre/howToJoin/fitnessTest/
If you've ever used sampling software such as the Akai MPC and like the functionality and simplicity of a single click to create a note inside the piano roll, there's also the 'Ctrl+B' shortcut inside of Ableton.
https://forum.ableton.com/viewtopic.php?f=1&t=171328&view=previous
https://www.ableton.com/en/manual/editing-midi-notes-and-velocities/
http://abletonlife.com/6-tips-for-more-efficient-midi-note-editing
http://www.musicradar.com/tuition/tech/28-ableton-live-tricks-you-didnt-know-229245/
Been re-thinking my strategy of selling MIDI clips and so on. Basically, I've been thinking of creating a library of small clips in Ableton and then using this as the basis for personal music and/or reselling this on to others. An example of this is using the 'Vinyl Scratches' VST plugin to learn how to create a group of unique clips for later reuse.
Have been involved in (or have been researching) various forms of online money making in recent times (options for those tired of standard office work). These are some of them:
http://www.maven.co/
http://www.ideaken.com/
https://www.freelancer.com.au/
Thinking more about musical bridges. Think of note progression, more short notes into a passage and a slow out (particularly into 'breakdowns'), and using effects as a means to keep things blended in.
Interesting for those 'vocalists' out there (it's a vocal sampler CD. A lot of websites also provide a lot of free or paid vocal samples on-line as well) http://www.cdsheetmusic.com/demo_order.php
http://www.cdsheetmusic.com/about.php
http://www.cdsheetmusic.com/products/products.php
If you are a vocalist, watch out for over eating. It can have some strange impacts on your body and voice.
http://www.webmd.com/heartburn-gerd/guide/what-is-acid-reflux-disease
A long time ago at university, I was playing around with automated deployment of software (reasonably sophisticated; think Windows automated deployment and so forth). Now there are many options out there, including many distributions which are designed with customisation in mind.
http://sourceforge.net/projects/manjaro-awesome-respin/
http://speedracke6.wix.com/jessere-spin
http://www.linux.com/learn/tutorials/739139-roll-your-own-customized-ubuntu-with-uck
http://blog.linuxmint.com/?p=2662
http://susestudio.com/
http://www.linuxfromscratch.org/
https://wiki.debian.org/DebianCustomCD
https://www.suse.com/products/suse-cloud/
http://aws.amazon.com/ec2/
http://windowsazure.com/
http://www.suse.com/products/susestudio/
http://unix.stackexchange.com/questions/34465/how-do-you-properly-fork-a-linux-distro
http://peppermintos.com/tag/respin/
http://www.linuxrespin.org/
http://linuxmint.tumblr.com/post/37500129564/how-to-remaster-respin-linux-mint-iso-images-with
Some interesting electronic artists:
http://www.discogs.com/artist/19116-Kaskade
https://itunes.apple.com/us/artist/kaskade/id2827464
http://www.nytimes.com/2011/11/10/fashion/a-200000-a-night-dj-known-as-kaskade-is-really-ryan-raddon-a-mormon.html
http://en.wikipedia.org/wiki/Kaskade
https://soundcloud.com/groups/chill-lounge-and-deep-house
http://www.discogs.com/artist/44355-Ryan-Raddon
http://8tracks.com/explore/chill_house
http://en.wikipedia.org/wiki/Miguel_Migs
https://soundcloud.com/miguelmigs1
http://en.wikipedia.org/wiki/Ti%C3%ABsto
http://en.wikipedia.org/wiki/Bob_Sinclar
http://en.wikipedia.org/wiki/Dimitri_from_Paris
https://soundcloud.com/tiesto
https://soundcloud.com/alekseybeloozerov
Categories: FLOSS Project Planets

from pattern.web import Google; google.search()

Sat, 2015-01-03 18:08
By Vasudev Ram



$ pip install pattern
# test_pattern_google_search.py
from pattern.web import Google, plaintext

google = Google(language='en')
for result in google.search('"python"', cached=False):
    try:
        print unicode(plaintext(result.text))
    except UnicodeEncodeError:
        print "UnicodeEncodeError, skipping this result"
    except Exception:
        print "Exceptions happen"
$ python test_pattern_google_search.py >t
$ less t # more coffee
The official home of the Python Programming Language.
Python is a widely used general-purpose, high-level programming language. Its design philosophy emphasizes code readability, and its syntax allows ...
The original implementation of Python, written in C.
Learn to program in Python, a powerful language used by sites like YouTube and Dropbox.
Python is an easy to learn, powerful programming language. It has efficient high-level data structures and a simple but effective approach to object-oriented ...
Welcome to the 3rd Edition of Learn Python the Hard Way. You can visit the companion site to the book at http://learnpythonthehardway.org/ where you can ...
Python 3.4.2 documentation. Welcome! This is the documentation for Python 3.4.2, last updated Jan 01, 2015. Parts of the documentation: ...
Try Python in your browser ... Best way to learn Python for Raspberry Pi? ... Are there any python package that can intelligently parse strings containing numbers
...
Python 2.7.9 documentation. Welcome! This is the documentation for Python 2.7.9, last updated Dec 28, 2014. Parts of the documentation: ...
You can get xkcd shirts, prints, and posters in the store! Python ... Image URL (for

- Vasudev Ram - Dancing Bison Enterprises

Signup to hear about new products from me.

Contact Page


Categories: FLOSS Project Planets

Scripts based on your network location

Sat, 2015-01-03 06:15

I recorded an episode of HPR about a script that I wrote to make my life a little easier. The show is hpr1654 :: Using AS numbers to identify where you are on the Internet if you want to listen along.

My “itch”

I have a laptop and I want it to use different configurations depending on where I am. If I'm on wifi at home, I don't want my NAS mounted, but if I'm on a wired connection I do. If I'm at work I want to connect to various servers there. If I'm on the train I want to set up a VPN tunnel. You get the idea.

My solution was to approach it from the laptop and work outwards: to look around and see what network I was on. There are a few ways to approach this: you could look at your IP address or the ARP tables, or try to ping a known server in each location. The issue with looking at an IP address is that most networks use private address ranges. Very soon you will find that the coffee shop wifi happens to have picked the same range as you use at home, and now your laptop is trying to back up to their cash register.

To get around this I tried other solutions, such as looking at the MAC address of the default gateway using ip route and arp, but that requires a lot of maintenance as devices change a lot.
$ arp -n | grep $(/sbin/ip route | awk '/default/ { print $3 }') | awk '{print $3}'
aa:bb:cc:dd:ee:ff

The next option was to try to ping known servers, but that resulted in a lot of delays, as the pings will by definition need to time out while you run down the list of possible places you could be.

Then I realized I was approaching this problem from the wrong angle. Why not start with my public IP address range, which has to be unique, and work back from there to my laptop? There are a lot of services out there that provide IP look-ups. Some I have used in the past are

Now even Google gives back your IP address if you type "my ip address" into the search bar. Rather than using those services, I just set up a small PHP file on my own server that returns the public IP address of your connection. So even if your home and the coffee shop happen to have the same 192.168.1.0/24 range, they will have different public IP address ranges.

<?php
$ip = $_SERVER['REMOTE_ADDR'];
print "$ip";
?>

From there I was planning on maintaining a look-up table of public IP addresses, along the lines of the GeoIP tools developed by MaxMind. They provide the GeoLite Country and GeoLite City databases under an OPEN DATA LICENSE, which looks to me like a modified Apache License (IANAL). They provide a C library under the LGPL.

For those not familiar with geolocation based on IP address, it's the technology that maps your public IP address to a physical location. This is what blocks the BBC iPlayer website outside of the UK, presents a cookie warning within the EU, or stops everyone else in the world from watching US TV websites. For most applications the location is very coarse, based on information from the regional Internet registries. Once you get past country level you need to start investing serious money to get the data, and you can expect to pay for the more granular information.

The more detailed you get, the more concerned you need to be about privacy. The location for most people's home connections is mapped to the location of their Internet provider's head office. After checking my IP address location on http://www.iplocation.net/, of the six databases queried four put me in the head office of my ISP, one had the right town, and another had me on the other side of the country. So for a website that needs to perform an action based on the country of an originating IP address it is quite useful, but for my personal use case it wasn't going to help a lot.

# geoiplookup 8.8.8.8
GeoIP Country Edition: US, United States

That was until I ran the exact same command on Fedora.

# geoiplookup 8.8.8.8
GeoIP Country Edition: US, United States
GeoIP ASNum Edition: AS15169 Google Inc.

The first line is the same, but what's this about ASNum? It's not mentioned in the man page, but suffice it to say AS numbers are very, very important to how the Internet works.

From WikiPedia: Autonomous System (Internet)

[An] ISP must have an officially registered autonomous system number (ASN). A unique ASN is allocated to each AS for use in BGP routing. AS numbers are important because the ASN uniquely identifies each network on the Internet.

So what that is saying is that every network in the Inter(connected)Net(work) must have its own unique AS Number. So my home ISP will have a different AS Number from my local coffee shop, from my office network, etc. It actually goes even further than that. Say you have the same provider for your home Internet and mobile Internet. Even though they might be using the same 10.0.0.0/8 ranges for all their networks, they will more than likely route between the private networks using public IP addresses, and that means a different, unique AS Number. Your mileage may vary on this, but for me it works out very well indeed.
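If you want to cross-check an AS number without the GeoIP databases, Team Cymru runs a public IP-to-ASN whois service (using Google's 8.8.8.8 as the example again; this assumes the whois client is installed):

$ whois -h whois.cymru.com " -v 8.8.8.8"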

On Fedora it's already available (yum install GeoIP); to install the application on Debian/Ubuntu type:
aptitude install geoip-bin

This will drop the IPv4 (GeoIP.dat) and IPv6 (GeoIPv6.dat) databases into the directory /usr/share/GeoIP/. Your package manager will not update the databases for you, and although there is a Fedora package, GeoIP-update*, to schedule a cron job, it only updates the GeoLiteCity.dat file. Here is the script I use to update all the databases:
# vi /usr/local/bin/geoip-update.bash

Paste in the following code:

#!/bin/bash
for database in \
    http://geolite.maxmind.com/download/geoip/database/GeoLiteCountry/GeoIP.dat.gz \
    http://geolite.maxmind.com/download/geoip/database/GeoIPv6.dat.gz \
    http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz \
    http://geolite.maxmind.com/download/geoip/database/GeoLiteCityv6-beta/GeoLiteCityv6.dat.gz \
    http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNum.dat.gz \
    http://download.maxmind.com/download/geoip/database/asnum/GeoIPASNumv6.dat.gz
do
    wget "$database" -O - | gunzip -c > /usr/share/GeoIP/$(basename "$database" .gz)
done

Make the script executable
# chmod +x /usr/local/bin/geoip-update.bash

Then run it and check that you have new files in /usr/share/GeoIP to be sure it works. Finally all that’s left to do is to install it into cron. (Thanks James Wald)

# Minute  Hour    Day of Month  Month              Day of Week       Command
# (0-59)  (0-23)  (1-31)        (1-12 or Jan-Dec)  (0-6 or Sun-Sat)
0 12 * * Mon /usr/local/bin/geoip-update.bash > /tmp/geoip-update.bash 2>&1

I have modified my mapping script so that it combines the location and the connection type. It first does a quick check to see if there is an Internet connection and will time out after 2 seconds.

wget --timeout=2 http://www.example.com/uptime.txt -O -

I already have this file on my server for remote monitoring, so it makes sense to reuse it. The file contains the word “success” and if that is not returned then you don’t have any Internet connection. My server could also be down but that would be a bigger problem for me at least.

The next part gets the Public IP Address and then uses it to find the AS Number

geoiplookup "8.8.8.8" | awk '/GeoIP ASNum Edition/ {print $4}'

So that's all I need to find my position on the Internet, but I also want to know what type of connection I'm using. For example, when I use a USB network tethering connection to my phone, it displays as a wired connection when in fact it should be "wireless". Once I have found both the location and the connection type, I combine them with an underscore and use a case statement to run the different commands.

Here is a finalized script:

#!/bin/bash
result="$(wget --timeout=2 http://www.example.com/uptime.txt -O - 2>/dev/null)"
if [ "${result}" != "success" ]
then
    echo "No connection to www.example.com found"
    exit 1
else
    myip="$(wget --timeout=2 http://www.example.com/whatismyip.php -O - 2>/dev/null)"
    asnum=$(geoiplookup "${myip}" | grep 'GeoIP ASNum Edition: ' | awk '{print $4}' )
    case "${asnum}" in
        AS1234)
            location="home"
            ;;
        AS2222)
            location="work"
            ;;
        AS3333)
            location="roaming"
            ;;
        AS5555)
            location="roaming"
            ;;
        *)
            echo "No location found for AS Number \"${asnum}\""
            exit 2
            ;;
    esac
    interface=$(route | awk '/default/ {print $(NF)}')
fi
if [ "$( iwconfig 2>&1 ${interface} | grep 'ESSID' | wc -l )" -eq 1 ] || [ $(echo ${interface} | grep ppp | wc -l ) -eq 1 ]
then
    type="wireless"
    essid=$( iwconfig 2>&1 ${interface} | awk -F '"' '{print $2}')
else
    type="wired"
fi
echo "Connection Found: $myip $asnum $location $interface $type $essid"
case "${location}_${type}" in
    work_wired)
        echo "Work Network"
        ;;
    work_wireless)
        echo "Work Wireless Network"
        ;;
    roaming_wireless)
        echo "Mobile Network"
        ;;
    home_wired)
        echo "Home Wired Network"
        ;;
    home_wireless)
        echo "Home Wireless Network"
        ;;
    *)
        echo "No custom configuration applied"
        ;;
esac
Categories: FLOSS Project Planets

Weirdness with hplip package in Debian wheezy

Fri, 2015-01-02 22:20

I suspect this information is of limited use because it's far too vague. I initially didn't file it as a Debian bug because I didn't think I had enough information to report one. It's not dissimilar from the issues reported in Debian bug 663868, but the system in question doesn't have foo2zjs installed. So, in the end, I filed Debian Bug 774460.

However, in searching around the Internet for the syslog messages below, I found very few results. So, in the interest of improving the indexing of these error messages, I include them here:

Jan 2 18:29:04 puggington kernel: [ 2822.256130] usb 2-1: new high-speed USB device number 16 using ehci_hcd
Jan 2 18:29:04 puggington kernel: [ 2822.388961] usb 2-1: New USB device found, idVendor=03f0, idProduct=5417
Jan 2 18:29:04 puggington kernel: [ 2822.388970] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3
Jan 2 18:29:04 puggington kernel: [ 2822.388977] usb 2-1: Product: HP Color LaserJet CP2025dn
Jan 2 18:29:04 puggington kernel: [ 2822.388983] usb 2-1: Manufacturer: Hewlett-Packard
Jan 2 18:29:04 puggington kernel: [ 2822.388988] usb 2-1: SerialNumber: 00CNGS705379
Jan 2 18:29:04 puggington kernel: [ 2822.390346] usblp0: USB Bidirectional printer dev 16 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
Jan 2 18:29:04 puggington udevd[25370]: missing file parameter for attr
Jan 2 18:29:04 puggington mtp-probe: checking bus 2, device 16: "/sys/devices/pci0000:00/0000:00:1d.7/usb2/2-1"
Jan 2 18:29:04 puggington mtp-probe: bus: 2, device: 16 was not an MTP device
Jan 2 18:29:04 puggington hp-mkuri: io/hpmud/model.c 625: unable to find [s{product}] support-type in /usr/share/hplip/data/models/models.dat
Jan 2 18:25:19 puggington kernel: [ 2596.528574] usblp0: removed
Jan 2 18:25:19 puggington kernel: [ 2596.535273] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
Jan 2 18:25:24 puggington kernel: [ 2601.727506] usblp0: removed
Jan 2 18:25:24 puggington kernel: [ 2601.733244] usblp0: USB Bidirectional printer dev 12 if 0 alt 0 proto 2 vid 0x03F0 pid 0x5417
[last two repeat until unplugged]

I really think the problem relates specifically to hplip 3.12.6-3.1+deb7u1. As I said in the bug report, the following commands resolved the problem for me:

# dpkg --purge hplip
# dpkg --purge system-config-printer-udev
# aptitude install system-config-printer-udev

Categories: FLOSS Project Planets

Happy New Year & Browser and OS stats for 2014

Fri, 2015-01-02 13:22

I’d like to wish everyone a happy new year on behalf of the entire LQ team. 2014 has been another great year for LQ and we have quite a few exciting developments in store for 2015, including a major code update that we originally had planned for 2013. A few highlights: LQ ISO recently surpassed 55,000,000 Linux downloads. AndroidQuestions.org and ChromeOSQuestions.org continue to grow. Outside The Questions Network, I think we’ve really hit our stride on Bad Voltage.

As has become tradition, here are the browser and OS statistics for the main LQ site for all of 2014 (2013 stats for comparison).

Browsers
  • Chrome 45.34%
  • Firefox 39.00%
  • Internet Explorer 8.12%
  • Safari 4.57%
  • Opera 1.29%
  • Android Browser 0.56%

A big change here, as Chrome has finally supplanted Firefox as the most used browser at LQ (and has done so quite handily).

Operating Systems
  • Windows 52.58%
  • Linux 32.32%
  • Macintosh 10.62%
  • Android 2.42%
  • iOS 1.44%

Linux usage has remained fairly steady, while OS X usage is now over 10% for the first time ever.

I’d also like to take this time to thank each and every LQ member. You are what make the site great; without you, we simply wouldn’t exist. I’d like to once again thank the LQ mod team, whose continued dedication ensures that things run as smoothly as they do. Don’t forget to vote in the 2014 LinuxQuestions.org Members Choice Awards, which recently opened.

–jeremy


Categories: FLOSS Project Planets

Free parallel programming webinar (Python and R) by Domino Data Labs

Tue, 2014-12-30 17:07

By Vasudev Ram





I had blogged a few times earlier about Domino Data Lab, the maker of Domino, a Python PaaS (Platform as a Service) for data science (though it can also be used for general programming in the cloud). I did a trial of it and found it quite good and easy to use. In fact, Domino's ease of use for cloud programming was one of the points I specifically noticed and commented on after trying it out.

Here is the last of those posts:

Domino Python PaaS now has a free plan

That post links to my earlier posts about Domino.



Today I got to know that they are hosting a free webinar on parallel programming with Python and R, using Domino. Here are the details:

[

Free webinar on parallel programming in R and Python

We'll show you how to utilize multi-core, high-memory machines to dramatically accelerate your computations in R and Python, without any complex or time-consuming setup.

You'll learn:

  • How to determine whether your tasks can be parallelized on multi-core, high-memory machines
  • General purpose techniques in R and Python for parallel programming
  • Specific applications of parallel programming in a machine learning context, including how to speed up cross-validation, grid search, and random forest calculations
  • Finally, how to use Domino for easy access to powerful multi-core machines where you can utilize these techniques.

About the instructor

The webinar will be led by Nick Elprin, one of Domino’s co-founders. Before starting Domino, Nick was a senior technologist and technology manager at a large hedge fund, where he managed a team that designed, developed, and delivered the firm’s next generation research platform. He has a BA and MS in computer science from Harvard.

]

You can sign up for the webinar here:

Domino Data Lab: free webinar on parallel programming in Python and R

- Vasudev Ram - Python training and consulting - Dancing Bison Enterprises

Signup to hear about new products or services from me.

Contact Page


Categories: FLOSS Project Planets

Building Self-Healing Applications with Saltstack

Tue, 2014-12-30 09:50

Self-healing infrastructure is something that has always piqued my interest. The first iteration of self healing infrastructure that I came across was the Solaris Service Management Facility aka "SMF". SMF would restart services if they crashed due to hardware errors or general errors outside of the service itself.

For today's article we are going to explore another way of creating a self-healing environment, going beyond restarting failed services. We are going to take a snippet of code that connects to a database service and give that application not only the ability to reconnect during database failure but also the ability to automatically resolve the database issues.

Starting with a simple connection

The snippet we are using, with self-healing superpowers to be added, comes from Runbook, a side project of mine that does all sorts of cool automation for DevOps.

# RethinkDB Server
try:
    rdb_server = r.connect(
        host=config['rethink_host'],
        port=config['rethink_port'],
        auth_key=config['rethink_authkey'],
        db=config['rethink_db'])
    print("Connected to RethinkDB")
except (RqlDriverError, socket.error) as e:
    print("Cannot connect to rethinkdb, shutting down")
    print("RethinkDB Error: %s") % e.message
    sys.exit(1)

This code has been altered a bit for simplification.

The code above will attempt to connect to a RethinkDB instance. If successful, it creates a connection object, rdb_server, which can be used later for running queries against the database. If the connection is not successful, the application will log an error and exit with an exit code of 1.

To put it simply, if RethinkDB is down or not accepting connections this process stops.

Let's try again

Before we start adding superpowers we need to change how the application handles connection errors. Right now it simply exits the process, and unless we have external systems restarting the process, it never attempts to reconnect. For a self-healing application we should change this behavior to have the application reattempt connections until RethinkDB is online.

# Set initial values
connection_attempts = 0
connected = False

# Retry RethinkDB Connections until successful
while connected == False:
    # RethinkDB Server
    try:
        rdb_server = r.connect(
            host=config['rethink_host'],
            port=config['rethink_port'],
            auth_key=config['rethink_authkey'],
            db=config['rethink_db'])
        connected = True
        print("Connected to RethinkDB")
    except (RqlDriverError, socket.error) as e:
        print("Cannot connect to rethinkdb")
        print("RethinkDB Error: %s") % e.message
        connection_attempts = connection_attempts + 1
        print("RethinkDB connection attempts: %d") % connection_attempts

If we break down the above code we can see that we added two new variables and a while loop. The code will simply retry connecting to RethinkDB until it succeeds. In some ways this in itself makes the application self-healing, as it gracefully handles an error with an external system and keeps trying to reconnect. These, however, are not the superpowers I was referring to.

Giving our application superpowers via Saltstack

In an earlier article I covered implementing salt-api, the API for Saltstack. While that article covered utilizing salt-api with third-party services such as Runbook or Datadog, the same level of integration can be added to applications themselves, giving those applications the ability to run infrastructure tasks.

Using Salt-API and Reactor Formula

For the sake of brevity, this article will assume that you already have Saltstack and salt-api installed and configured to accept webhook requests as outlined in the previous article. For this article we will also be utilizing a salt-api and reactor formula that I created for Runbook.

This formula provides several template reactor configurations that can be used to pick up salt-api webhook requests and perform salt actions, such as restarting services, executing shell commands, or even starting a highstate. To get started we will first need to download and extract the formula.

# wget -O /var/tmp/master.zip https://github.com/madflojo/salt-api-reactor-formula/archive/master.zip
# cd /var/tmp/
# unzip master.zip

Once extracted we can copy the reactor directory to /srv/salt/; this is the default salt directory and may need to be adjusted for your environment.

# cp -R salt-api-reactor-formula-master/reactor /srv/salt/

We will also need to deploy our reactor config to the /etc/salt/master.d/ directory, as this is what maps each URL endpoint to a specific salt action. Once deployed, we will also need to restart the salt-master service.
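For reference, the reactor config is a small YAML mapping from event tags to reactor SLS files. With salt-api's webhook module, events are tagged under salt/netapi/hook followed by the URL path, so the mapping likely looks something like the following sketch (the exact tags in the formula's reactor.conf may differ):

reactor:
  - 'salt/netapi/hook/states/highstate':
    - /srv/salt/reactor/states/highstate.sls
  - 'salt/netapi/hook/services/restart':
    - /srv/salt/reactor/services/restart.sls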

# cp salt-api-reactor-formula-master/reactor.conf /etc/salt/master.d/
# service salt-master restart

Examining a reactor configuration

When our application is unable to connect to RethinkDB we want to perform some sort of corrective task. The easiest and safest thing to do in Runbook's environment is to simply run a salt highstate. A highstate execution tells Saltstack to go through all of the defined configurations and enforce them on the targeted minion servers. In our environment that includes ensuring the RethinkDB service is running and configured.

If our application is able to trigger a highstate execution on the database hosts, there is a good chance that the issue will be corrected, giving our application the ability to resolve any issue caused by RethinkDB not matching our desired state.
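For reference, the end result of the webhook is roughly equivalent to running a highstate by hand from the salt-master, here using the db* hostname glob we will target later in this article:

# salt 'db*' state.highstate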

highstate.sls

In order to give our application the ability to run a highstate we will utilize the reactor/states/highstate.sls formula. Before going further we should first examine how this formula works.

{% set postdata = data.get('post', {}) %}

{% if postdata.secretkey == "PICKSOMETHINGBETTERPLZKTHX" %}
state_highstate:
  cmd.state.highstate:
    - tgt: '{{ postdata.tgt }}'
{% if "matcher" in postdata %}
    - expr_form: {{ postdata.matcher }}
{% endif %}
{% if "args" in postdata %}
    - arg:
      - {{ postdata.args }}
{% endif %}
{% endif %}

When a POST request is made to the http://saltapiurl/webhooks/states/highstate address, salt-api will take the POST data of that request and pass it along Salt's event system. When processed, this reactor configuration will take the POST data and assign it to a dictionary named postdata. From there Salt will check for a key in the postdata dictionary named secretkey and ensure that the value of that key matches the defined "secretkey" in the template. This acts as an authentication method for webhooks.

Each reactor template has an example secret key defined; it is recommended that you change this to a unique value for your environment.

After validation, Salt will look for additional keys in the postdata dictionary; for our purposes we need to understand the tgt and matcher keys. The tgt key specifies the "target" for the highstate execution. This target can be a hostname, a grain value, a pillar value, a subnet, or any other target Saltstack accepts. The matcher key defines how the tgt value should be interpreted: if the tgt value is a hostname glob, the matcher value should be glob; if the tgt value is a pillar value, the matcher value should be pillar. You can find all of the valid matcher values in salt-api's documentation.
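Before wiring this into the application it is worth exercising the endpoint by hand. Assuming the salt-api URL shown above, a quick test could look like the following curl call (the secretkey shown is the template's example key; substitute your own):

# curl http://saltapiurl/webhooks/states/highstate \
    -H "Accept: application/json" \
    -d tgt='db*' \
    -d matcher='glob' \
    -d secretkey='PICKSOMETHINGBETTERPLZKTHX'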

Calling salt-api

Now that we have salt-api configured to accept webhook requests and start highstate executions, we need to code our application to call those webhooks. Since this is something we may want to do fairly often in our code, we can create a function to perform the webhook request.

Highstate function

def callSaltHighstate(config):
    ''' Call Saltstack to initiate a highstate '''
    import requests
    url = config['salt_url'] + "/states/highstate"
    headers = {"Accept": "application/json"}
    postdata = {"tgt": "db*",
                "matcher": "glob",
                "secretkey": config['salt_key']}
    try:
        req = requests.post(url=url, headers=headers, data=postdata, verify=False)
        print("Called for help and got response code: %d" % req.status_code)
        if req.status_code == 200:
            return True
        else:
            return False
    except requests.exceptions.RequestException as e:
        print("Error calling for help: %s" % e.message)
        return False

The code above is pretty simple; it essentially performs an HTTP POST request with the POST data fields tgt, matcher and secretkey. The tgt field contains db*, which in our environment is a hostname glob that matches our database server names. The matcher value is glob, to denote that the tgt value is a hostname glob. The secretkey contains the value of config['salt_key'], which is pulled from our configuration file when the main process starts and is passed to the callSaltHighstate() function.

Now that the code to call salt-api is defined we can add the callSaltHighstate() function into the exception handling for RethinkDB.

Adding callSaltHighstate as an action

# Set initial values
connection_attempts = 0
connected = False

# Retry RethinkDB connections until successful
while connected == False:
    # RethinkDB Server
    try:
        rdb_server = r.connect(
            host=config['rethink_host'],
            port=config['rethink_port'],
            auth_key=config['rethink_authkey'],
            db=config['rethink_db'])
        connected = True
        print("Connected to RethinkDB")
    except (RqlDriverError, socket.error) as e:
        print("Cannot connect to rethinkdb")
        print("RethinkDB Error: %s" % e.message)
        callSaltHighstate(config)
        connection_attempts = connection_attempts + 1
        print("RethinkDB connection attempts: %d" % connection_attempts)

As you can see, the code above hasn't changed much from the previous example. The biggest change is that after printing the RethinkDB error we experienced, we execute the callSaltHighstate() function.

Leveling up

For a simple example the above code works quite well; however, there is a bit of a flaw. With the above code a highstate will be called every time the application attempts to connect to RethinkDB and fails. Since a highstate takes a bit of time to execute, this could cause a backlog of highstate executions, which could in theory cause even more issues.

To combat this, you could add a time.sleep(120) at the end of the while loop to cause the application to sleep for 120 seconds between iterations, as sketched below. This would give Saltstack some time to execute the highstate before another is queued. While a sleep would work and is simple, it is not the most elegant method.
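A quick sketch of that simple approach, reusing the names from the examples above:

# Minimal throttling sketch: sleep between retries so highstate
# calls do not pile up while RethinkDB is down.
import time

while connected == False:
    try:
        rdb_server = r.connect(
            host=config['rethink_host'],
            port=config['rethink_port'],
            auth_key=config['rethink_authkey'],
            db=config['rethink_db'])
        connected = True
    except (RqlDriverError, socket.error):
        callSaltHighstate(config)
        time.sleep(120)  # give Saltstack time to finish the highstate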

Since we can call Saltstack to perform essentially any task it is capable of, why stop at just a highstate? Below we are going to create another function that calls salt-api, but rather than run a highstate, this function will send a webhook request that tells salt-api to restart the RethinkDB service.

Restart function

def callSaltRestart(config):
    ''' Call Saltstack to restart a service '''
    import requests
    url = config['salt_url'] + "/services/restart"
    headers = {"Accept": "application/json"}
    postdata = {"tgt": "db*",
                "matcher": "glob",
                "args": "rethinkdb",
                "secretkey": config['salt_key']}
    try:
        req = requests.post(url=url, headers=headers, data=postdata, verify=False)
        print("Called for help and got response code: %d" % req.status_code)
        if req.status_code == 200:
            return True
        else:
            return False
    except requests.exceptions.RequestException as e:
        print("Error calling for help: %s" % e.message)
        return False

The above code is very similar to the highstate function, with the exception that the URL endpoint has changed to /services/restart (which utilizes the reactor/services/restart.sls template) and there is a new POST data key called args, which contains rethinkdb, the service we want to restart.

Since we are adding the complexity of restarting the RethinkDB service, we want to make sure that this call is not made too often. At the moment the best way to do this is to build that logic into the application itself.

Extending when to call salt-api

import time

# Set initial values
connection_attempts = 0
first_connect = 0.00
last_restart = 0.00
last_highstate = 0.00
connected = False
called = None

# Retry RethinkDB connections until successful
while connected == False:
    if first_connect == 0.00:
        first_connect = time.time()
    # RethinkDB Server
    try:
        rdb_server = r.connect(
            host=config['rethink_host'],
            port=config['rethink_port'],
            auth_key=config['rethink_authkey'],
            db=config['rethink_db'])
        connected = True
        print("Connected to RethinkDB")
    except (RqlDriverError, socket.error) as e:
        print("Cannot connect to rethinkdb")
        print("RethinkDB Error: %s" % e.message)
        timediff = time.time() - first_connect
        if timediff > 300.00:
            last_timediff = time.time() - last_restart
            if last_timediff > 600.00 or last_restart == 0.00:
                if timediff > 600:
                    callSaltRestart(config)
                    last_restart = time.time()
            last_timediff = time.time() - last_highstate
            if last_timediff > 300.00 or last_highstate == 0.00:
                callSaltHighstate(config)
                last_highstate = time.time()
        connection_attempts = connection_attempts + 1
        print("RethinkDB connection attempts: %d" % connection_attempts)
        time.sleep(60)

As you can see, we have added quite a bit of logic around when to call for help and when not to. With the above code, when our application is unable to connect to RethinkDB it will keep retrying until successful, just as before. However, every 5 minutes that the application is unable to connect to RethinkDB, it will call Saltstack via salt-api requesting a highstate be executed on the database servers. Every 10 minutes, if RethinkDB is still not accessible, the application will call Saltstack via salt-api requesting a restart of the RethinkDB service on all database servers.

Improvements

With today's example we are able to correct situations that many applications cannot. Being able to restart the database when you are unable to connect to it is a good example of a self-healing environment. However, there is more that could be done with this application.

This same type of logic could be built into query exceptions rather than only connection exceptions, as sketched below. With query exceptions you could also use salt-api to execute database maintenance scripts or call salt-cloud to provision additional servers. Once you give your application the ability to perform infrastructure-wide actions, you open the door to a wide range of automation capabilities.
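As a rough sketch of that idea, the same pattern can wrap a query instead of the connection. The table name here is hypothetical, and RqlRuntimeError is the query-level error class in the RethinkDB Python driver of this era:

# Hypothetical example: trigger the same corrective action on a query failure
from rethinkdb.errors import RqlRuntimeError, RqlDriverError

try:
    results = r.table('monitors').run(rdb_server)
except (RqlRuntimeError, RqlDriverError) as e:
    print("Query failed: %s" % e.message)
    callSaltHighstate(config)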

To see the full script from this example you can view it on this GitHub Gist.


Originally Posted on BenCane.com: Go To Article
Categories: FLOSS Project Planets

pyDAL, a pure Python Database Abstraction Layer

Mon, 2014-12-29 18:10

By Vasudev Ram


pyDAL is a pure Python Database Abstraction Layer. So it seems to be something like the lower layer of SQLAlchemy, i.e. SQLAlchemy Core, the library that is used by the upper layer, SQLAlchemy ORM. See the SQLAlchemy (0.8) documentation.

From the pyDAL site:

[ It dynamically generates the SQL in real time using the specified dialect for the database back end, so that you do not have to write SQL code or learn different SQL dialects (the term SQL is used generically), and your code will be portable among different types of databases.

pyDAL comes from the original web2py's DAL, with the aim of being wide-compatible. pyDAL doesn't require web2py and can be used in any Python context. ]

IOW, pyDAL has been separated out into a different project from web2py, a Python web framework, of which it was originally a part.

The use of an ORM (Object Relational Mapper) vs. writing plain SQL code (vs. using an intermediate option like pyDAL or SQLAlchemy Core) can be controversial; there are pros and cons on both (or all three) sides. I've read some about this and have some experience with using these options in different projects, but I am not an expert on which approach is best. It can also vary depending on your project's needs, so I'm not getting into that topic in this post.

pyDAL seems to support many popular databases, mostly SQL ones, but also a NoSQL one or two, and even IMAP. Here is a list, from the site: SQLite, PostgreSQL, MySQL, Oracle, MSSQL, FireBird, DB2, Informix, Ingres, Cubrid, Sybase, Teradata, SAPDB, MongoDB, IMAP.
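Switching back ends is mostly a matter of the connection string, since the URI scheme selects the adapter. A few examples along the lines of the documentation (hosts and credentials are placeholders):

db = DAL('sqlite://storage.db')                   # SQLite file
db = DAL('postgres://user:pass@localhost/mydb')   # PostgreSQL
db = DAL('mysql://user:pass@localhost/mydb')      # MySQL
db = DAL('mongodb://localhost/mydb')              # MongoDB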

For some of those databases, it uses PyMySQL, pyodbc or fdb, which are all Python database libraries that I had blogged about earlier.

I tried out pyDAL a little, with this simple program, adapted from its documentation:
import sys
import time
from pydal import DAL, Field
db = DAL('sqlite://storage.db')
db.define_table('product', Field('name'))
t1 = time.time()
num_rows = int(sys.argv[1])
for product_number in range(num_rows):
db.product.insert(name='Product-{}'.format(str(product_number).zfill(4)))
t2 = time.time()
print "time to insert {} rows = {} seconds".format(num_rows, int(t2 - t1))
query = db.product.name
t1 = time.time()
rows = db(query).select()
for idx, row in enumerate(rows):
#print idx, row.name
pass
t2 = time.time()
print "time to select {} rows = {} seconds".format(num_rows, int(t2 - t1))

It worked, and gave this output:

$ python test_pydal2.py 100000
No handlers could be found for logger "web2py"
time to insert 100000 rows = 18 seconds
time to select 100000 rows = 7 seconds

Note: I first ran it with this statement uncommented:

print idx, row.name

to confirm that it did select the records, and then commented it out and added "pass" in its place, in order to time the select without the overhead of displaying the records on the screen.

I'll check out pyDAL some more, for other commonly needed database operations, and may write about it here.
There may be a way to disable that message about the web2py logger.
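If the "No handlers could be found for logger "web2py"" line is the standard Python logging complaint, attaching a do-nothing handler before using pyDAL should silence it; a guess at a fix using only the standard library:

import logging
# Attach a no-op handler so the "no handlers" warning is not printed
logging.getLogger("web2py").addHandler(logging.NullHandler())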

The timing statements in the code and the time output can be ignored for now, since they are not meaningful without doing a comparison against the same operations done without pyDAL (i.e. just using SQL from Python with the DB API). I will do a comparison later on and blog about it if anything interesting is found.
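For the curious, the plain-SQL side of such a comparison might look something like this with the standard library's sqlite3 module (the file name and table layout are my own choices, mirroring the pyDAL program above):

import sqlite3
import sys
import time

# Plain DB API version of the insert benchmark
conn = sqlite3.connect('storage_plain.db')
conn.execute("CREATE TABLE IF NOT EXISTS product (name TEXT)")

num_rows = int(sys.argv[1])
t1 = time.time()
for product_number in range(num_rows):
    conn.execute("INSERT INTO product (name) VALUES (?)",
                 ('Product-{}'.format(str(product_number).zfill(4)),))
conn.commit()
t2 = time.time()
print "time to insert {} rows = {} seconds".format(num_rows, int(t2 - t1))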

- Vasudev Ram - Dancing Bison Enterprises - Python training and consulting

Sign up to hear about new products or services from me.

Contact Page


Categories: FLOSS Project Planets

New Skype Translator service

Sun, 2014-12-28 17:33
http://m.bbc.com/news/technology-30539198

Vasudev Ram
Categories: FLOSS Project Planets

Popcorn, black olives and turnip pickle – a nice evening snack

Sat, 2014-12-27 14:21

By Vasudev Ram




I had this snack today:

Bought some popcorn and black olives (Figaro olives from Seville, Spain), and had them with some turnip pickle that I had made. Crazy, no? :) Turned out to be a good snack.

The turnip pickle was a simple one with salt and a couple of powdered spices - no oil.

As I sometimes do when I eat something, I googled olive and was interested to see that it is one of the most extensively cultivated fruit crops, more than apples, mangoes and bananas, according to Wikipedia:

[ Olives are one of the most extensively cultivated fruit crops in the world.[69] In 2011 there were about 9.6 million hectares planted with olive trees, which is more than twice the amount of land devoted to apples, bananas or mangoes. Only coconut trees and oil palms command more space. ]


- Vasudev Ram - Dancing Bison Enterprises

Contact Page


Categories: FLOSS Project Planets