LinuxPlanet

By Linux Geeks, For Linux Geeks.

[Video] [HowTo] CentOS / RHEL 7 Installation with GUI / Custom Partition

Thu, 2014-07-24 23:00
Hello, today I installed CentOS 7 on my virtual machine. I am sharing a video of the CentOS 7 installation with a GUI and custom partitioning with LVM. I hope this video helps those who want to install CentOS 7 and are looking for an easy guide. CentOS / RHEL 7 Installation Video Enjoy CentOS 7 […]
Categories: FLOSS Project Planets

RHEL 7 / CentOS 7 : How to get started with Systemd

Tue, 2014-07-22 13:15
Hello all, today I was trying to learn about systemd. I found a great article and am sharing it with you; it will help you understand this major change in RHEL and CentOS 7. This article is not mine; I found it on the internet and felt that this is wonderful […]
Categories: FLOSS Project Planets

Setting the default document folder/directory in the KDE Kate Text Editor

Tue, 2014-07-22 11:54

By default Kate tries to open and save files starting in the ~/Documents folder. While this may be convenient for some, I’d like to change it. Try as I might, I couldn’t find this option in Kate’s configuration. I remember this was once an option, but it was removed. Then it was a command line option, but kate --help-all shows it has been removed as well.

The only way I found to change this is via KDE > System Settings > Account Details > Paths > Document Path:

Changing that to your desired directory works fine.

Categories: FLOSS Project Planets

Speeding up Speech with mplayer

Tue, 2014-07-22 04:23

Mplayer is a fantastic media player and I have been using it as the default tool to play both music and speech for years now.

One of its lesser-known features is the ability to speed up or slow down whatever it is playing. This is not very useful for music, but very handy if you are listening to speech. In some cases you may wish to speed up podcasts to fit more listening in. In other cases you may want to slow down a recording so that you can transcribe the text. For people with dyslexia this is very empowering, as it gives them control over the rate of input.

You can use the {, [, Backspace, ], and } keys to control the speed.

  • { key will slow down by 50% of the current rate
  • [ key will slow down by 10% of the current rate
  • Backspace will return the speed to normal
  • ] key will speed up by 10% of the current rate
  • } key will speed up by 50% of the current rate
  • 9 key will decrease the volume
  • 0 key will increase the volume

I strongly recommend taking some time to review the keyboard controls in the manpage.

By default mplayer will not maintain pitch when you change the speed. So if you speed it up the speaker starts to sound like a chipmunk, and if you slow it down female voices start to sound like male voices.

You can change this by starting mplayer with the switch -af scaletempo

You can change this quickly by creating an alias

alias mplayer='mplayer -af scaletempo'

A more permanent way is to set it in your mplayer configuration file. Simply add the following in the “# audio settings #” section:

af=scaletempo

See the Configuration Files section in the man page for more information.

The system-wide configuration file ‘mplayer.conf’ is in your configuration directory (e.g. /etc/mplayer or /usr/local/etc/mplayer), the user specific one is ~/.mplayer/config. User specific options override system-wide options and options given on the command line override either.
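If you prefer to do this from the shell, here is a minimal sketch that appends the filter setting to the user-specific config file mentioned above (the grep guard is only there so the line isn’t added twice):

```shell
# Create the per-user config directory if needed and add the
# scaletempo filter line under an "# audio settings #" comment.
mkdir -p ~/.mplayer
grep -qxF 'af=scaletempo' ~/.mplayer/config 2>/dev/null || \
    printf '# audio settings #\naf=scaletempo\n' >> ~/.mplayer/config
```

Running it a second time is safe; the existing line is detected and nothing is appended.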

Categories: FLOSS Project Planets

My New Page

Mon, 2014-07-21 07:54

This is my new page

Categories: FLOSS Project Planets

Carving text on image using gimp

Sat, 2014-07-19 06:36
We often want our images to have some text carved on them. Here is how we can do it easily using GIMP.

Launch GIMP and click on File->Create->Logos->Carved



The following menu will appear.



In the text field enter the text that has to be carved on the image.

Font Size: The size of the font to be engraved. Select this carefully depending on the size of the image.
Font: The font to be used for the characters of the text.
Background Image: Browse and select the image on which the text needs to be carved.
Carve raised text: Tick this check box when we want the text to stand out from the image in a raised manner.
Padding around text: How much space should be left between the image borders and the text.
For our example we will write the word Linux on the image of Tux, the Linux penguin.
After setting all the above fields, click on OK.

The GIMP window will come up with the carved text. The default color of the text is almost white, as shown below.

The color can be changed with the bucket fill tool available in the toolbox: select the topmost layer among all the layers and fill the characters with the desired color.



After coloring we will have to export the image using File->Export and choose the path and name for the final file. The following is the final result after coloring the text red.



Note that because the image is small, it has been tiled to accommodate the whole text.
Categories: FLOSS Project Planets

Checkpoint SNX on Ubuntu 14.04 LTS (Trusty Tahr)

Fri, 2014-07-18 11:10

Life has conspired to bring me back to the open arms of Kubuntu, and with a new install comes the required update on getting the Checkpoint firewall client, AKA SNX, working. This is part of the snx series here.


The first step remains the same: get your username, password, and the IP address or host name of your SNX server from your local administrator. Once you do that you can log in and then click the settings link. This will give you a link to the various different clients. In our case we are looking for the “Download installation for Linux” link. Download that and then run it with the following command.

# sh +x snx_install.sh
Installation successfull

If you run this now you will get the error

snx: error while loading shared libraries: libpam.so.0: cannot open shared object file: No such file or directory

We can check if the required libraries are loaded.

# ldd /usr/bin/snx | grep "not found"
	libpam.so.0 => not found
	libstdc++.so.5 => not found

This is a 64-bit system and I’m installing a 32-bit application, so you’ll need to install the 32-bit libraries and the older version of libstdc++ if you haven’t already. The old trick of simply installing ia32-libs will no longer work since multiarch support has been added. Now the command is simply

apt-get install libstdc++5:i386 libpam0g:i386

You should now be able to run snx without errors. All that remains is to accept the VPN certificate by logging in via the command line and pressing “Y”.

user@pc:~$ snx -s my-checkpoint-server -u username
Check Point's Linux SNX
build 800007075
Please enter your password:
SNX authentication:
Please confirm the connection to gateway: my-checkpoint-server
VPN Certificate Root CA fingerprint: AAAA BBB CCCC DDD EEEE FFF GGGG HHH IIII JJJ KKKK
Do you accept? [y]es/[N]o:

Note the build number of 800007075. I had difficulties connecting with any version lower than this.

Categories: FLOSS Project Planets

Creating a queue in linux kernel using list functions

Thu, 2014-07-17 02:23
In the previous post we saw how to create a stack using the list functions. In this post let us use the same list functions to implement a queue.

As we know, a queue is a data structure that stores data in first-in-first-out (FIFO) order: the data that is entered first is read out first.

To create a queue using the list functions we can make use of the function list_add_tail

list_add_tail(struct list_head *new, struct list_head *head)

new: The new node to be added
head: Pointer to node before which the new node has to be added.

To make the list work as a queue, we need to keep inserting every new node before the head node, that is, at the tail end of the list, and when we read the list, we read from the front end. Thus the entry that is added first is accessed first.

To demonstrate the operation of the queue we will create a proc entry called queue and make it work like a queue.

To create the queue we will use the structure

struct k_list {
    struct list_head queue_list;
    char *data;
};

queue_list : Used to create the linked list using the list functions.
data: Will hold the data to be stored.

In the init function we need to first create the proc entry and then initialize the linked list that we will be using for the queue.

int queue_init(void)
{
    create_new_proc_entry();
    head = kmalloc(sizeof(struct list_head), GFP_KERNEL);
    INIT_LIST_HEAD(head);
    emp_len = strlen(empty);
    return 0;
}

create_new_proc_entry: Function for creation of the proc entry.

void create_new_proc_entry(void)
{
    proc_create("queue", 0, NULL, &proc_fops);
}

INIT_LIST_HEAD(head): Initializes the head pointer of the linked list.

Next we need to define the file operations for the proc entry. To manipulate the queue we need two operations push and pop.

struct file_operations proc_fops = {
    read: pop_queue,
    write: push_queue
};

The function push_queue will be called when data is written into the queue and pop_queue will be called when data is read from the queue.

In push_queue we will use the function list_add_tail to add every new node behind the head.

int push_queue(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(10 * sizeof(char), GFP_KERNEL);
    temp = copy_from_user(msg, buf, count);
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    list_add_tail(&node->queue_list, head);
    return count;
}

In the function pop_queue, before attempting to read the queue, we need to ensure that the queue has data. This can be done using the function list_empty.

int list_empty(const struct list_head *head)

which returns true if the list is empty.

In case the list is not empty, we need to pop out the first data from the list which can be done using the function list_first_entry.

list_first_entry(ptr, type, member)

ptr: Pointer to the head node of the list.
type: The type of the structure in which the list is embedded.
member: The name of the list_head member within that structure.
Thus the pop function will look as below.

int pop_queue(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (list_empty(head)) {
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nQueue empty\n");
        return ret;
    }
    if (new_node == 1) {
        node = list_first_entry(head, struct k_list, queue_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {
        list_del(&node->queue_list);
        new_node = 1;
    }
    return count;
}

The flag and new_node variables are used to ensure that exactly one node is returned on every read of the proc entry. The read function from user space will continue to read the proc entry as long as the return value is not zero. In the first pass we return the number of bytes of data transferred from the node, and in the second pass we return 0 to terminate the read, thus making sure that on every read the data of exactly one node is returned. The flag variable works in a similar fashion when the queue is empty, to return the string "Queue Empty".

Thus the full code of queue will be

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/proc_fs.h>
#include <linux/list.h>
#include <linux/string.h>
#include <linux/uaccess.h>

int len, temp, i = 0, ret;
char *empty = "Queue Empty \n";
int emp_len, flag = 1;

struct k_list {
    struct list_head queue_list;
    char *data;
};

static struct k_list *node;
struct list_head *head;
struct list_head test_head;
int new_node = 1;
char *msg;

int pop_queue(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (list_empty(head)) {
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nQueue empty\n");
        return ret;
    }
    if (new_node == 1) {
        node = list_first_entry(head, struct k_list, queue_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {
        list_del(&node->queue_list);
        new_node = 1;
    }
    return count;
}

int push_queue(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(10 * sizeof(char), GFP_KERNEL);
    temp = copy_from_user(msg, buf, count);
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    list_add_tail(&node->queue_list, head);
    return count;
}

struct file_operations proc_fops = {
    read: pop_queue,
    write: push_queue
};

void create_new_proc_entry(void)
{
    proc_create("queue", 0, NULL, &proc_fops);
}

int queue_init(void)
{
    create_new_proc_entry();
    head = kmalloc(sizeof(struct list_head), GFP_KERNEL);
    INIT_LIST_HEAD(head);
    emp_len = strlen(empty);
    return 0;
}

void queue_cleanup(void)
{
    remove_proc_entry("queue", NULL);
}

MODULE_LICENSE("GPL");
module_init(queue_init);
module_exit(queue_cleanup);

To compile the module use the following Makefile.

ifneq ($(KERNELRELEASE),)
	obj-m := queue_list.o
else
	KERNELDIR ?= /lib/modules/$(shell uname -r)/build
	PWD := $(shell pwd)

default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) clean
endif

Compile and insert the module into the kernel

$ make
$ insmod queue_list.ko

To see the output

$ echo "1" > /proc/queue
$ echo "2" > /proc/queue
$ echo "3" > /proc/queue
$ echo "4" > /proc/queue

Thus we have written 1, 2, 3, 4 as data into 4 nodes, with 4 being inserted last. Now to pop data out of the queue:

$ cat /proc/queue
1
$ cat /proc/queue
2
$ cat /proc/queue
3
$ cat /proc/queue
4
$ cat /proc/queue
Queue Empty
Categories: FLOSS Project Planets

Using salt-api to integrate SaltStack with other services

Wed, 2014-07-16 19:00

Recently I have been looking for ways to allow external tools and services to perform corrective actions across my infrastructure automatically. As an example, I want to allow a monitoring tool to monitor my nginx availability and if for whatever reason nginx is down I want that monitoring tool/service to do something to fix it.

While I was looking at how to implement this, I remembered that SaltStack has an API and that API can provide exactly the functionality I wanted. The below article will walk you through setting up salt-api and configuring it to allow third party services to initiate SaltStack executions.

Install salt-api

The first step to getting started with salt-api is to install it. In this article we will be installing salt-api on a single master server; it is possible to run salt-api in a multi-master setup, however you will need to ensure the configuration is consistent across master servers.

Installation of the salt-api package is pretty easy and can be performed with apt-get on Debian/Ubuntu or yum on Red Hat variants.

# apt-get install salt-api

Generate a self signed certificate

By default salt-api utilizes HTTPS and while it is possible to disable this, it is generally not a good idea. In this article we will be utilizing HTTPS with a self signed certificate.

Depending on your intended usage of salt-api you may want to generate a certificate that is signed by a certificate authority; if you are using salt-api internally only, a self signed certificate should be acceptable.

Generate the key

The first step in creating any SSL certificate is to generate a key; the below command will generate a 4096-bit key.

# openssl genrsa -out /etc/ssl/private/key.pem 4096

Sign the key and generate a certificate

After we have generated the key we can generate the certificate with the following command.

# openssl req -new -x509 -key /etc/ssl/private/key.pem -out /etc/ssl/private/cert.pem -days 1826

The -days option allows you to specify how many days this certificate is valid; the above command should generate a certificate that is valid for 5 years.
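As a quick sanity check, both commands can also be run non-interactively and the resulting certificate inspected with openssl x509. The /tmp paths and the subject below are illustrative only; the article itself uses /etc/ssl/private/:

```shell
# Generate a 4096-bit key and a self-signed certificate valid for 1826 days,
# supplying the subject on the command line so no prompts appear.
openssl genrsa -out /tmp/key.pem 4096
openssl req -new -x509 -key /tmp/key.pem -out /tmp/cert.pem -days 1826 \
    -subj "/CN=salt-api.example.com"

# Print the expiry date to confirm it lands roughly 5 years out
openssl x509 -in /tmp/cert.pem -noout -enddate
```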

Create the rest_cherrypy configuration

Now that we have installed salt-api and generated an SSL certificate we can start configuring the salt-api service. In this article we will be placing the salt-api configuration into /etc/salt/master.d/salt-api.conf. You can also put this configuration directly in the /etc/salt/master configuration file; however, I prefer putting the configuration in master.d as I believe it allows for easier upkeep and better organization.

# vi /etc/salt/master.d/salt-api.conf

Basic salt-api configuration

Insert

rest_cherrypy:
  port: 8080
  host: 10.0.0.2
  ssl_crt: /etc/ssl/private/cert.pem
  ssl_key: /etc/ssl/private/key.pem

The above configuration enables a very basic salt-api instance, which utilizes SaltStack's external authentication system. This system requires requests to authenticate with user names and passwords or tokens; in reality not every system can or should utilize those methods of authentication.

Webhook salt-api configuration

Most systems and tools however do support utilizing a pre-shared API key for authentication. In this article we are going to use a pre-shared API key to authenticate our systems with salt-api. To do this we will add the webhook_url and webhook_disable_auth parameters.

Insert

rest_cherrypy:
  port: 8080
  host: <your hosts ip>
  ssl_crt: /etc/ssl/private/cert.pem
  ssl_key: /etc/ssl/private/key.pem
  webhook_disable_auth: True
  webhook_url: /hook

The webhook_url configuration parameter tells salt-api to listen for requests to the specified URI. The webhook_disable_auth configuration allows you to disable the external authentication requirement for the webhook URI. At this point in our configuration there is no authentication on that webhook URI, and any request sent to that webhook URI will be posted to SaltStack's event bus.

Actioning webhook requests with Salt's Reactor system

SaltStack's event system is an internal notification system for SaltStack; it allows external processes to listen for SaltStack events and potentially act on them. A few examples of events that are published to this event system are jobs, authentication requests by minions, and salt-cloud related events. A common consumer of these events is the SaltStack Reactor system, which allows you to listen for events and perform specified actions when those events are seen.

Reactor configuration

We will be utilizing the Reactor system to listen for webhook events and execute specific commands when we see them. Reactor definitions need to be included in the master configuration; for this article we will define our configuration within the master.d directory.

# vi /etc/salt/master.d/reactor.conf

Insert

reactor:
  - 'salt/netapi/hook/services/restart':
    - /salt/reactor/services/restart.sls

The above configuration will listen for events that are tagged with salt/netapi/hook/services/restart, which corresponds to any API request targeting https://example.com:8080/hook/services/restart. When those events occur it will execute the items within /salt/reactor/services/restart.sls.

Defining what happens

Within the restart.sls file, we will need to define what we want to happen when the /hook/services/restart URI is requested.

# vi /salt/reactor/services/restart.sls

Simply restarting a service

To perform our simplest use case of restarting nginx when we get a request to /hook/services/restart, you could add the following.

Insert

restart_services:
  cmd.service.restart:
    - tgt: 'somehost'
    - arg:
      - nginx

This configuration is extremely specific and doesn't leave much chance for someone to exploit it for malicious purposes. This configuration is also extremely limited.

Restarting the specified services on the specified servers

When a salt-api webhook URL is called, the POST data sent with that request is included in the event message. In the below configuration we will be using that POST data to allow requests to the webhook URL to specify both the target servers and the service to be restarted.

Insert

{% set postdata = data.get('post', {}) %}

restart_services:
  cmd.service.restart:
    - tgt: '{{ postdata.tgt }}'
    - arg:
      - {{ postdata.service }}

In the above configuration we are taking the POST data sent to the URL and assigning it to the postdata object. We can then use that object to provide the tgt and arg values. This allows the same URL and webhook call to be used to restart any service on any or all minion nodes.

Since we disabled external authentication on the webhook URL there is currently no authentication with the above configuration. This means that anyone who knows our URL could send a well formatted request and restart any service on any minion they want.

Using a secret key to authenticate

To secure our webhook URL a little better we can add a few lines to our reactor configuration requiring that requests to this reactor include a valid secret key before executing.

Insert

{% set postdata = data.get('post', {}) %}

{% if postdata.secretkey == "replacethiswithsomethingbetter" %}
restart_services:
  cmd.service.restart:
    - tgt: '{{ postdata.tgt }}'
    - arg:
      - {{ postdata.service }}
{% endif %}

The above configuration requires that the requesting service include a POST key of secretkey whose value is replacethiswithsomethingbetter. If it does not, the Reactor will simply perform no steps. The secret key in this configuration should be treated like any other API key: it should be validated before performing any action, and if you have more than one system/user making requests you should ensure that only the correct users have the ability to execute the commands specified in the reactor SLS file.
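Rather than inventing a secret by hand, one option (a sketch, assuming openssl is available) is to generate a random value and paste it into both the reactor SLS file and the requesting system:

```shell
# Generate a 32-byte random secret, hex encoded (64 characters)
openssl rand -hex 32
```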

Restarting salt-master and Starting salt-api

Before testing our new salt-api actions we will need to restart the salt-master service and start the salt-api service.

# service salt-master restart
# service salt-api start

Testing our configuration with curl

Once the two above services have finished restarting we can test our configuration with the following curl command.

# curl -H "Accept: application/json" -d tgt='*' -d service="nginx" -d secretkey="replacethiswithsomethingbetter" -k https://salt-api-hostname:8080/hook/services/restart

If the configuration is correct you should expect to see a message stating success.

Example

# curl -H "Accept: application/json" -d tgt='*' -d service="nginx" -d secretkey="replacethiswithsomethingbetter" -k https://10.0.0.2:8080/hook/services/restart
{"success": true}

At this point you can now integrate any third party tool or service that can send webhook requests with your SaltStack implementation. You can also use salt-api to have home grown tools initiate SaltStack executions.

Additional Considerations

Other netapi modules

In this article we implemented salt-api with the cherrypy netapi module, which is one of three options that SaltStack gives you. If you are more familiar with other application servers, such as WSGI-based ones, then I would suggest looking through the documentation to find the appropriate module for your environment.

Restricting salt-api

While in this article we added a secret key for authentication and SSL certificates for encrypted HTTPS traffic, there are additional options that can be used to restrict the salt-api service further. If you are deploying the salt-api service on a non-trusted network, it is a good idea to use tools such as iptables to restrict which hosts/IPs are able to reach the salt-api service.

In addition, when implementing salt-api it is a good idea to think carefully about what you allow systems to request. A simple example of this would be the cmd.run module within SaltStack. If you allowed a server to perform dynamic cmd.run requests via salt-api and a malicious user was able to bypass the implemented restrictions, that user would theoretically be able to run any arbitrary command on your systems.

It is always best to only allow specific commands through the API system and avoid allowing potentially dangerous commands such as cmd.run.


Originally Posted on BenCane.com: Go To Article
Categories: FLOSS Project Planets

Review: Penetration Testing with the Bash shell by Keith Makan – Packt Pub.

Wed, 2014-07-16 08:07

Penetration Testing with the Bash shell

I’ll have to say that, for some reason, I thought this book was going to be some kind of guide to using only bash itself to do penetration testing. It’s not that at all. It’s really more about doing penetration testing FROM the bash shell, or command line if you like.

Your first 2 chapters take you through a solid amount of background bash shell information. You cover topics like directory manipulation, grep, find, understanding some regular expressions, all the sorts of things you will appreciate knowing if you are going to be spending some time at the command line, or at least a good topical smattering. There is also some time spent on customization of your environment, like prompts and colorization and that sort of thing. I am not sure it’s really terribly relevant to the book topic, but still, as I mentioned before if you are going to be spending time at the command line, this is stuff that’s nice to know. I’ll admit that I got a little charge out of it because my foray into the command line was long ago on an amber phosphorous serial terminal. We’ve come a long way, Baby

The remainder of the book deals with some command line utilities and how to use them in penetration testing. At this point I really need to mention that you should be using Kali Linux or BackTrack Linux because some of the utilities they reference are not immediately available as packages in other distributions. If you are into this topic, then you probably already know that, but I just happened to be reviewing this book while using a Mint system while away from my test machine and could not immediately find a package for dnsmap.

The book gets topically heavier as you go through, which is a good thing IMHO, and by the time you are nearing the end you have covered standard bash arsenal commands like dig and nmap. You have spent some significant time with metasploit and you end up with the really technical subjects of disassembly (reverse engineering code) and debugging. Once you are through that you dive right into network monitoring, attacks and spoofs. I think the networking info should have come before the code hacking but I can also see their logic in this roadmap as well. Either way, the information is solid and sensical, it’s well written and the examples work. You are also given plenty of topical reference information should you care to continue your research, and this is something I think people will really appreciate.

To sum it up, I like the book. Again, it wasn’t what I thought it was going to be, but it surely will prove to be a valuable reference, especially combined with some of Packt’s other fine books like those on BackTrack. Buy your copy today!

Categories: FLOSS Project Planets

Implementing stack using the list functions in kernel -2

Tue, 2014-07-15 22:25
In the post "Implementing stack using the list functions in kernel" we saw how to create a stack using the built-in list functions of the kernel. Some of the operations that we did while creating the stack can be further automated using a few more list functions of the kernel.

We had used a variable "tail" to keep track of the last element of the stack, which was being updated in the push_stack function after every write to the stack. This is not really needed if we use the function list_last_entry.

list_last_entry(ptr, type, member)

ptr: Pointer to the head node of the list.
type: The type of the structure in which the list is embedded.
member: The name of the list_head member within that structure.

For our example, we can use list_last_entry as below.

list_last_entry(head, struct k_list, test_list);

list_last_entry automatically returns the last node of the list, so there is no need to keep track of the tail node.

Thus the push function can be modified as below.

int push_stack(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(10 * sizeof(char), GFP_KERNEL);
    temp = copy_from_user(msg, buf, count);
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    if (list_empty(head)) {
        list_add(&node->test_list, head);
    } else {
        tail = list_last_entry(head, struct k_list, test_list);
        list_add(&node->test_list, tail);
    }
    return count;
}

In the pop_stack function, the check for the stack being empty was performed by comparing the tail with the head pointer. This can be done more easily using the function list_empty, which returns true if the list is empty.

int list_empty(const struct list_head *head)

head: Pointer to the head node of the list which is being tested.

list_last_entry can also be used in the pop_stack function to read out the last element of the stack.

Thus the modified pop_stack will look as below.

int pop_stack(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (list_empty(head)) {
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if (new_node == 1) {
        node = list_last_entry(head, struct k_list, test_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {
        list_del(&node->test_list);
        new_node = 1;
    }
    return count;
}

The complete modified code for the stack is below.

stack_list.c

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>
#include <linux/slab.h>
#include <linux/proc_fs.h>
#include <linux/list.h>
#include <linux/string.h>
#include <linux/uaccess.h>

int len, temp, cnt, i = 0, ret;
char *empty = "Stack Empty \n";
int emp_len, flag = 1;

struct k_list {
    struct list_head test_list;
    char *data;
};

static struct k_list *node;
struct list_head *tail, *head;
struct list_head test_head;
int new_node = 1;
char *msg;

int pop_stack(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (list_empty(head)) {
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if (new_node == 1) {
        node = list_last_entry(head, struct k_list, test_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {
        list_del(&node->test_list);
        new_node = 1;
    }
    return count;
}

int push_stack(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(10 * sizeof(char), GFP_KERNEL);
    temp = copy_from_user(msg, buf, count);
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    if (list_empty(head)) {
        list_add(&node->test_list, head);
    } else {
        tail = list_last_entry(head, struct k_list, test_list);
        list_add(&node->test_list, tail);
    }
    return count;
}

struct file_operations proc_fops = {
    read: pop_stack,
    write: push_stack
};

void create_new_proc_entry(void)
{
    proc_create("stack", 0, NULL, &proc_fops);
}

int stack_init(void)
{
    create_new_proc_entry();
    head = kmalloc(sizeof(struct list_head), GFP_KERNEL);
    INIT_LIST_HEAD(head);
    tail = head;
    emp_len = strlen(empty);
    return 0;
}

void stack_cleanup(void)
{
    remove_proc_entry("stack", NULL);
}

MODULE_LICENSE("GPL");
module_init(stack_init);
module_exit(stack_cleanup);

To compile the module use the following Makefile.

ifneq ($(KERNELRELEASE),)
	obj-m := stack_list.o
else
	KERNELDIR ?= /lib/modules/$(shell uname -r)/build
	PWD := $(shell pwd)

default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) clean
endif

Compile and insert the module into the kernel

$ make
$ insmod stack_list.ko

To see the output

$ echo "1" > /proc/stack
$ echo "2" > /proc/stack
$ echo "3" > /proc/stack
$ echo "4" > /proc/stack

Thus we have written 1, 2, 3, 4 as data into 4 nodes, with 4 being inserted last. Now to pop data out of the stack:

$ cat /proc/stack
4
$ cat /proc/stack
3
$ cat /proc/stack
2
$ cat /proc/stack
1
$ cat /proc/stack
Stack Empty

Thus we can see that the proc entry is operating as a stack.
Related posts

Implementing stack using the list functions in kernel
Creating a read write proc entry in kernel versions above 3.10
Pointer to structure from its member pointer: container_of
Categories: FLOSS Project Planets

GoInkscape!

Tue, 2014-07-15 14:30

Hey, gang! I’m always on the lookout for new/newish Inkscape tutorial sites. I just recently came across GoInkscape! Check it out.

Categories: FLOSS Project Planets

Why The Kallithea Project Exists

Tue, 2014-07-15 11:45

[ This is a version of an essay that I originally published on Conservancy's blog ].

Eleven days ago, Conservancy announced Kallithea. Kallithea is a GPLv3'd system for hosting and managing Mercurial and Git repositories on one's own servers. As Conservancy mentioned in its announcement, Kallithea is indeed based on code released under GPLv3 by RhodeCode GmbH. Below, I describe why I was willing to participate in helping Conservancy become a non-profit home to an obvious fork (as this is the first time Conservancy ever welcomed a fork as a member project).

The primary impetus for Kallithea is that more recent versions of RhodeCode GmbH's codebase contain a very unorthodox and ambiguous license statement, which states:

(1) The Python code and integrated HTML are licensed under the GPLv3 license as is RhodeCode itself.
(2) All other parts of the RhodeCode including, but not limited to the CSS code, images, and design are licensed according to the license purchased.

Simply put, this licensing scheme is either (a) a GPL violation, (b) an unclear license permission statement under the GPL which leaves the redistributor feeling unclear about their rights, or (c) both.

When members of the Mercurial community first brought this license to my attention about ten months ago, my first focus was to form a formal opinion regarding (a). Of course, I did form such an opinion, and you can probably guess what that is. However, I realized a few weeks later that this analysis really didn't matter in this case; the situation called for a more innovative solution.

Indeed, I recalled at that time the disputes between AT&T and University of California at Berkeley over BSD. In that case, while nearly all of the BSD code was adjudicated as freely licensed, the dispute itself was painful for the BSD community. BSD's development slowed nearly to a standstill for years while the legal disagreement was resolved. Court action — even if you're in the right — isn't always the fastest nor best way to push forward an important Free Software project.

In the case of RhodeCode's releases, there was an obvious and more productive solution. Namely, the 1.7.2 release of RhodeCode's codebase, written primarily by Marcin Kuzminski, was fully released under GPLv3-only, and provided an excellent starting point to begin a GPLv3'd fork. Furthermore, some of the improved code in the 2.2.5 era of RhodeCode's codebase was explicitly licensed under GPLv3 by RhodeCode GmbH itself. Finally, many volunteers produced patches for all versions of RhodeCode's codebase and released those patches under GPLv3, too. Thus, there was already a burgeoning GPLv3-friendly community yearning to begin.

My primary contribution, therefore, was to lead the process of vetting and verifying a completely indisputable GPLv3'd version of the codebase. This was extensive and time consuming work; I personally spent over 100 hours to reach this point, and I suspect many Kallithea volunteers have already spent that much and more. Ironically, the most complex part of the work so far was verifying and organizing the licensing situation regarding third-party Javascript (released under a myriad of various licenses). You can see the details of that work by reading the revision history of Kallithea (or, you can read an overview in Kallithea's LICENSE file).

Like with any Free Software codebase fork, acrimony and disagreement led to Kallithea's creation. However, as the person who made most of the early changesets for Kallithea, I want to thank RhodeCode GmbH for explicitly releasing some of their work under GPLv3. Even as I hereby reiterate publicly my previously private request that RhodeCode GmbH correct the parts of their licensing scheme that are (at best) problematic, and (at worst) GPL-violating, I also point out this simple fact to those who have been heavily criticizing and admonishing RhodeCode GmbH: the situation could be much worse! RhodeCode could have simply never released any of their code under the GPLv3 in the first place. After all, there are many well-known code hosting sites that refuse to release any of their code (or release only a pittance of small components). By contrast, the GPLv3'd RhodeCode software was nearly a working system that helped bootstrap the Kallithea community. I'm grateful for that, and I welcome RhodeCode developers to contribute to Kallithea under GPLv3. I note, of course, that RhodeCode developers sadly can't incorporate any of our improvements in their codebase, due to their problematic license. However, I extend again my offer (also made privately last year) to work with RhodeCode GmbH to correct its licensing problems.

Categories: FLOSS Project Planets

Finding the trigonometric inverse in gcalctool in Linux

Mon, 2014-07-14 07:09
Most Linux distros come with gcalctool as the default calculator. It is not obvious at first glance how to find the inverse of trigonometric terms in gcalctool. Here is how we can do it.

This is the default look of the gcalctool.



Let us say we want to find arcsin(0.9)

Type sin or press the sin button



Now press the up arrow, which is the second button from left in the first row and then type -1.



Now press the up arrow again to come out of superscript mode.

Now enter the value 0.9 and press enter.



And we have the angle in degrees for arcsin(0.9)


Categories: FLOSS Project Planets

Implementing stack using the list functions in kernel

Mon, 2014-07-14 05:06
In the posts "Creating a linked list in linux kernel" and "Using list_del to delete a node" we saw how to use the built-in functions of the kernel to build a linked list. One of the main applications of a linked list is the implementation of a stack. In this post, let us make use of the list functions available in the kernel to implement a stack.

A stack, as we know, is a data structure that stores elements in first-in, last-out order. So whenever we write, or push, data onto the stack, the data needs to be added as a node at the end of the list, and when we read data from the stack we need to read the data from the last node of the list and then delete the corresponding node. To implement a stack using a list, we need to keep track of the last element that gets inserted into the list, because whenever we read from the stack we need to return the data stored in the last node.

To implement the stack operation using list functions, we will make use of the function

void list_add(struct list_head *new, struct list_head *head)

Where
new: The new node to be added to the list
head: The node after which the new node has to be added.
If we had to implement a linked list from scratch, we would have had to take care of all the pointer manipulation when inserting data into the list or removing data from it. But since we have the required operations as built-in functions, all we need to do is call the right function with the right arguments.

To show the operation of a stack we will make use of a proc entry. We will create a proc entry named "stack" and make it behave as a stack. We will use the following structure as the node:

struct k_list {
    struct list_head test_list;
    char *data;
};

test_list: Will be used to create the linked list using the built in functions of kernel
data: To hold the data of the list.

The first thing we have to do, to implement a stack using a proc entry, is create the linked list and the proc entry in the init function.

int stack_init(void)
{
    create_new_proc_entry();
    INIT_LIST_HEAD(&test_head);
    tail=&test_head;
    emp_len=strlen(empty);
    return 0;
}

create_new_proc_entry : Function to create a new proc entry.

void create_new_proc_entry(void)
{
    proc_create("stack",0,NULL,&proc_fops);
}

INIT_LIST_HEAD: Initialize the linked list with the head node as test_head.

tail=&test_head; : With no data in the stack the tail and the head point to the same node.

emp_len=strlen(empty): "empty" is the message to be printed when the stack is empty, and strlen returns the length of the string.

Once the initialization is done, we need to associate the functions for push (writing data to the stack) and pop (reading data from the stack).

struct file_operations proc_fops = {
    read:  pop_stack,
    write: push_stack
};

Now we need to implement the functions pop_stack and push_stack.

push_stack:

int push_stack(struct file *filp,const char *buf,size_t count,loff_t *offp)
{
    msg=kmalloc(count+1,GFP_KERNEL);
    temp=copy_from_user(msg,buf,count);
    msg[count]='\0';                    /* terminate so strlen() works in pop_stack */
    node=kmalloc(sizeof(struct k_list),GFP_KERNEL);
    node->data=msg;
    list_add(&node->test_list,tail);
    cnt++;
    tail=&(node->test_list);
    return count;
}

msg=kmalloc(count+1,GFP_KERNEL): Allocate memory for the data to be received from user space, with one extra byte for the terminating NUL.
temp=copy_from_user(msg,buf,count): Copy the data from user space into the memory allocated.
node=kmalloc(sizeof(struct k_list),GFP_KERNEL): Allocate a new node. Note that we need the size of the structure, not of a pointer to it.
node->data=msg: Assign the data received to the node.
list_add(&node->test_list,tail): Add the new node to the list. Note that we pass tail as the node after which the new node has to be added. This makes sure that the list behaves as a stack.
cnt++: Increase the count of nodes present in the list.
tail=&(node->test_list): Make tail point to the new node of the list. Note that we assign the address of test_list to tail, not the address of the node.

pop_stack:

int pop_stack(struct file *filp,char *buf,size_t count,loff_t *offp)
{
    if(tail == &test_head) {            /* stack is empty */
        msg=empty;
        if(flag==1) {
            ret=emp_len;
            flag=0;
        }
        else if(flag==0) {
            ret=0;
            flag=1;
        }
        temp=copy_to_user(buf,msg,count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if(new_node == 1) {
        node=container_of(tail,struct k_list,test_list);
        msg=node->data;
        ret=strlen(msg);
        new_node=0;
    }
    if(count>ret) {
        count=ret;
    }
    ret=ret-count;
    temp=copy_to_user(buf,msg,count);
    printk(KERN_INFO "\n data = %s \n",msg);
    if(count==0) {
        tail=((node->test_list).prev);
        list_del(&node->test_list);
        cnt--;
        new_node=1;
    }
    return count;
}

In the pop operation we pop the last element inserted into the stack and then delete the node. According to the push_stack function, the variable tail points to the last element inserted into the stack.

According to the standard practice followed in the kernel, the read operation has to return the number of bytes transferred to user space, and user-space functions continue to read as long as they do not get 0 as the return value, which indicates the end of the read operation.

To make the proc entry work as a stack, we need to make sure that every read operation returns only one node. To ensure this, the above code uses the variables new_node and flag which, after returning the data of one node or the empty-stack message respectively, set the return value to 0 so that the read is terminated.

While popping data out of the stack, we need to first check if the stack has data in it. This is done by comparing tail with test_head. If tail is pointing to test_head, it means the stack is empty and we return the string "Stack Empty". This is implemented by the following piece of code:

if(tail == &test_head) {
    msg=empty;
    if(flag==1) {
        ret=emp_len;
        flag=0;
    }
    else if(flag==0) {
        ret=0;
        flag=1;
    }
    temp=copy_to_user(buf,msg,count);
    printk(KERN_INFO "\nStack empty\n");
    return ret;
}

On the other hand, if the stack has data in it:

if(new_node == 1) {
    node=container_of(tail,struct k_list,test_list);
    msg=node->data;
    ret=strlen(msg);
    new_node=0;
}
if(count>ret) {
    count=ret;
}
ret=ret-count;
temp=copy_to_user(buf,msg,count);
printk(KERN_INFO "\n data = %s \n",msg);
if(count==0) {
    tail=((node->test_list).prev);
    list_del(&node->test_list);
    cnt--;
    new_node=1;
}
return count;

node=container_of(tail,struct k_list,test_list): Get the pointer to the node which has tail, the last node of the list, as its member.
msg=node->data : Assign the data in the node to msg
ret=strlen(msg): Get the length of the data to be transferred.
new_node=0: Set new_node to 0, to indicate one node has been read.

if(count>ret) {
    count=ret;
}
ret=ret-count;

Clamp count to the string length if count is larger than the string length, and reduce the variable ret by the count value.
ret becomes 0 whenever the count value is greater than or equal to ret.
temp=copy_to_user(buf,msg, count): Copy msg to user space.
if(count==0) {
    tail=((node->test_list).prev);
    list_del(&node->test_list);
    cnt--;
    new_node=1;
}

Once all the data from the node has been transferred, count becomes 0, and now we need to change tail to point to the node previous to the current node.

tail=((node->test_list).prev): Assign to tail the address of the test_list of the previous node.
list_del(&node->test_list): delete the node from the list.
new_node=1: Set new_node to 1 for the read of next node.

The full code for the kernel module looks as below.

stack_list.c

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/string.h>
#include <linux/list.h>
#include <linux/proc_fs.h>
#include <linux/slab.h>
#include <linux/uaccess.h>

int len,temp,cnt,i=0,ret;
char *empty="Stack Empty \n";
int emp_len,flag=1;

struct k_list {
    struct list_head test_list;
    char *data;
};

static struct k_list *node;
struct list_head *tail;
struct list_head test_head;
int new_node=1;
char *msg;

int pop_stack(struct file *filp,char *buf,size_t count,loff_t *offp)
{
    if(tail == &test_head) {            /* stack is empty */
        msg=empty;
        if(flag==1) {
            ret=emp_len;
            flag=0;
        }
        else if(flag==0) {
            ret=0;
            flag=1;
        }
        temp=copy_to_user(buf,msg,count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if(new_node == 1) {
        node=container_of(tail,struct k_list,test_list);
        msg=node->data;
        ret=strlen(msg);
        new_node=0;
    }
    if(count>ret) {
        count=ret;
    }
    ret=ret-count;
    temp=copy_to_user(buf,msg,count);
    printk(KERN_INFO "\n data = %s \n",msg);
    if(count==0) {
        tail=((node->test_list).prev);
        list_del(&node->test_list);
        cnt--;
        new_node=1;
    }
    return count;
}

int push_stack(struct file *filp,const char *buf,size_t count,loff_t *offp)
{
    msg=kmalloc(count+1,GFP_KERNEL);    /* one extra byte for the terminating NUL */
    temp=copy_from_user(msg,buf,count);
    msg[count]='\0';                    /* strlen() in pop_stack needs this */
    node=kmalloc(sizeof(struct k_list),GFP_KERNEL); /* size of the struct, not of a pointer */
    node->data=msg;
    list_add(&node->test_list,tail);
    cnt++;
    tail=&(node->test_list);
    return count;
}

struct file_operations proc_fops = {
    read:  pop_stack,
    write: push_stack
};

void create_new_proc_entry(void)
{
    proc_create("stack",0,NULL,&proc_fops);
}

int stack_init(void)
{
    create_new_proc_entry();
    INIT_LIST_HEAD(&test_head);
    tail=&test_head;
    emp_len=strlen(empty);
    return 0;
}

void stack_cleanup(void)
{
    remove_proc_entry("stack",NULL);
}

MODULE_LICENSE("GPL");
module_init(stack_init);
module_exit(stack_cleanup);

To compile the module use the make file.

ifneq ($(KERNELRELEASE),)
obj-m := stack_list.o
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) clean
endif

Compile and insert the module into the kernel

$ make
$ insmod stack_list.ko

To see the output

$ echo "1" > /proc/stack
$ echo "2" > /proc/stack
$ echo "3" > /proc/stack
$ echo "4" > /proc/stack

Thus we have written 1, 2, 3 and 4 as data into 4 nodes, with 4 inserted last. Now to pop data out of the stack:

$ cat /proc/stack
4
$ cat /proc/stack
3
$ cat /proc/stack
2
$ cat /proc/stack
1
$ cat /proc/stack
Stack Empty

Thus we can see that the proc entry is operating as a stack.

Related posts

Implementing stack using the list functions in kernel-2
Creating a read write proc entry in kernel versions above 3.10
Pointer to structure from its member pointer: container_of
Categories: FLOSS Project Planets

Porting UEFI to BeagleBoneBlack: Technical Details I

Sun, 2014-07-13 21:13

I’m adding a BeagleBoneBlack port to the Tianocore/EDK2 UEFI implementation. This post details the implementation specifics of the port so far.

About the hardware:
BeagleBoneBlack is a low-cost embedded board that boasts an ARM AM335x SoC. It supports Linux, with Android, Ubuntu and Ångström ports already available. It comes pre-loaded with the MLO and U-Boot images on its eMMC, which can be flashed with custom binaries. Bootup can also be done from a partitioned SD card or by transferring binaries over UART (ymodem) or USB (TFTP). The boot flow is presented here:

The Tianocore Project / Build System

The EDK2 framework provides an implementation of the UEFI specifications. It’s got its own customizable pythonic build system that works based on the config details provided through build meta-files. The build setup is described in this document.

(TL;DR: the build tool parses INF, DSC and DEC files for each package that describe its dependencies, exports and the back-end library implementations it shall use. This makes EDK2 highly modular to support all kinds of hardware platforms. It generates Firmware Volume images for each section in the Flash Description File, which are put into a Flash Description binary with addressing as specified in the FDF. The DSC specifies which library in code should point to which implementation, and the INF keeps a record of a module’s exports and imports. If these don’t match, the build simply fails.)
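To make the meta-file flavour concrete, here is a rough, hypothetical sketch of a module INF. The section names ([Defines], [Sources], [Packages], [LibraryClasses]) are standard EDK2 conventions, but the module name, GUID and file names below are invented for illustration:

```
[Defines]
  INF_VERSION     = 0x00010005
  BASE_NAME       = ExampleTimerLib       # hypothetical module name
  FILE_GUID       = 00000000-0000-0000-0000-000000000000  # placeholder GUID
  MODULE_TYPE     = BASE
  VERSION_STRING  = 1.0

[Sources]
  ExampleTimerLib.c                       # hypothetical source file

[Packages]
  MdePkg/MdePkg.dec                       # DEC files declare what this module may consume

[LibraryClasses]
  BaseLib                                 # resolved to a concrete instance by the platform DSC
```

If a library class listed here has no mapping in the platform DSC, or a module's imports and exports do not match, the build fails, which is the strictness described above.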

Implementation

I started out with an attempt to write a bare-metal binary that prints some letters over UART, to get the hang of how low-level development works. Here's a great guide to the basics of bare-metal on ARM. All the required hardware has to be initialized in the binary before use, and running C requires an execution environment set up to provide stacks and handle the placement of segments in memory. Since U-Boot already handles that in its SPL phase, I wrote a standalone that could be called by U-Boot instead.

The BeagleBoneBlackPkg is derived from the ArmPlatformPkg. I began by echoing the "second stage" steps mentioned here (implement the libraries available and perform platform-specific tasks) as I intended to take over boot from U-Boot/MLO. This also spared me from having to do the IRQ and memory initializations.

I’m using the PrePeiCore (and not Sec) module’s entry point to load the image. It builds the PPIs necessary to pass control over to PEI and calls PeiMain.

Running the FD:  The build generates an .Fd image that will be used to boot the device. The MLO binary I’m using is built to look for and launch a file named ‘u-boot.img’ on the MMC (there’s a CONFIG_ macro to change this somewhere in u-boot), so I just rename the FD to u-boot.img before flashing it.


Categories: FLOSS Project Planets

Download CentOS 7 ISO / DVD / x86_64 / i386 / 32-Bit / 64-Bit

Sun, 2014-07-13 02:09
Hello, and welcome to the first CentOS-7 release. CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by Red Hat. CentOS conforms fully with Red Hat’s redistribution policy and aims to have full functional compatibility with the upstream product. CentOS mainly changes packages to remove Red Hat’s branding and […]
Categories: FLOSS Project Planets

Android Version and Device Stats for LQ Native App II

Tue, 2014-07-08 16:23

Now that the native LQ android app is in the 5-10,000 download range, I thought I’d post an update to this previous post on the topic of Android version and device stats. See this post if you’re interested in browser and OS stats for the main LQ site.

Platform Version       Share
Android 4.4            29.54%
Android 4.1            20.42%
Android 4.0.3-4.0.4    13.59%
Android 4.2            12.49%
Android 2.3.3-2.3.7    11.70%
Android 4.3             9.27%
Android 2.2             1.96%

 

Device                      Share
Google Nexus 7 (grouper)    6.13%
Samsung Galaxy S3 (m0)      3.53%
Google Nexus 5              2.75%
Samsung Galaxy S2           2.28%
Samsung Galaxy S3           2.20%
Google Nexus 7 (flo)        2.12%
Samsung Galaxy S4           1.81%
Google Nexus 4              1.73%
Samsung Galaxy Tab2         1.49%

So, how has Android fragmentation changed since my original post in February of 2012? At first blush it may appear that it’s actually more fragmented from a device version perspective. Previously, the top two versions accounted for over 70% of all installs, while now that number is just 50%. That’s misleading though, as almost 90% of all installs are now on a 4.x variant. This clustering around a much more polished version of Android, along with the fact that Google has broken so much functionality out into Google Play Services, means that from a developers perspective things are significantly better than they were during the time-frame of my previous post. I will admit I’m surprised by the age of the top devices, but they may be specific to the LQ crowd (and it’s no surprise to me to see the Nexus 5 as the second most popular phone).

–jeremy


Categories: FLOSS Project Planets

XPath and namespace

Mon, 2014-07-07 10:30

When your XSLT/XPath search is not giving the desired results, always check the namespace of the element you are using.

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 19: Fedora Murder Trial

Thu, 2014-06-26 09:19

From the Bad Voltage site:

The whole team return (remarkably) to speak, weirdly, only about things beginning with the letter F. Myself, Bryan Lunduke, Jono Bacon and Stuart Langridge present Bad Voltage, with the following “F-ing” things:

  • Fire Phone: Amazon release a phone, and we decide whether it’s exciting or execrable
  • Firefox OS: the Mozilla project’s phone OS, reviewed by Stuart and discussed by everybody
  • Freshmeat: the late-90s web store of software for Linux has finally closed its doors. Reminiscence, combined with some thoughts on how and why the world has moved on
  • Fedora Project Leader: Matthew Miller, the newly-appointed leader of the Fedora Linux project, speaks about the direction that distribution is planning, working with large communities, and whether his job should be decided by a Thunderdome-style trial by combat
  • err… our Fabulous community: we catch up with what’s going on

Listen to: 1×19: Fedora Murder Trial

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets