LinuxPlanet

By Linux Geeks, For Linux Geeks.
Updated: 1 day 6 hours ago

Finding trigonometric inverses in gcalctool in Linux

Mon, 2014-07-14 07:09
Most Linux distros come with gcalctool as the default calculator. It is not obvious at first look how to find the inverse of trigonometric terms in gcalctool. Here is how we can do it.

This is the default look of gcalctool.



Let us say we want to find arcsin(0.9)

Type sin or press the sin button



Now press the up arrow, which is the second button from the left in the first row, and then type -1.



Now press the up arrow again to come out of superscript mode.

Now enter the value 0.9 and press enter.



And we have the angle, in degrees, for arcsin(0.9).


Categories: FLOSS Project Planets

Implementing stack using the list functions in kernel

Mon, 2014-07-14 05:06
In the posts "Creating a linked list in linux kernel" and "Using list_del to delete a node" we saw how to use the built-in functions of the kernel to build a linked list. One of the main applications of a linked list is the implementation of a stack. In this post, let us make use of the list functions available in the kernel to implement a stack.

A stack, as we know, is a data structure that stores elements in first-in, last-out order. So whenever we write, or push, data onto the stack, the data needs to be added as a node at the end of the list, and when we read data from the stack we need to read the data from the last node of the list and then delete the corresponding node. To implement a stack using a list, we therefore need to keep track of the last element inserted into the list, because whenever we read from the stack we need to return the data stored in that last node.

To implement the stack operation using list functions, we will make use of the function

void list_add(struct list_head *new, struct list_head *head)

Where
new: the new node to be added to the list
head: the node after which the new node is to be added.
If we had to implement a linked list from scratch, we would have to take care of all the pointer manipulation involved in inserting data into the list or removing data from it. But as we have the required help in the form of built-in functions, all we need to do is call the right function with the right arguments.

To show the operation of a stack we will make use of a proc entry. We will create a proc entry named "stack" and make it behave as a stack. We will use the following structure as the node:

struct k_list {
    struct list_head test_list;
    char *data;
};

test_list: used to create the linked list using the built-in functions of the kernel.
data: holds the data of the node.

The first thing we have to do to implement a stack using a proc entry is create a linked list and create the proc entry in the init function.

int stack_init(void)
{
    create_new_proc_entry();
    INIT_LIST_HEAD(&test_head);
    tail = &test_head;
    emp_len = strlen(empty);
    return 0;
}

create_new_proc_entry: function to create a new proc entry.

void create_new_proc_entry(void)
{
    proc_create("stack", 0, NULL, &proc_fops);
}

INIT_LIST_HEAD: Initialize the linked list with the head node as test_head.

tail=&test_head; : With no data in the stack the tail and the head point to the same node.

emp_len=strlen(empty): "empty" is the message to be printed when the stack is empty, and strlen returns the length of the string.

Once the initialization is done we need to associate the functions for push (writing data to the stack) and pop (reading data from the stack).

struct file_operations proc_fops = {
    read: pop_stack,
    write: push_stack
};

Now we need to implement the functions pop_stack and push_stack.

push_stack:

int push_stack(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(count + 1, GFP_KERNEL);   /* one extra byte for the NUL */
    temp = copy_from_user(msg, buf, count);
    msg[count] = '\0';                      /* pop_stack uses strlen(msg) */
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    list_add(&node->test_list, tail);       /* insert after the current tail */
    cnt++;
    tail = &(node->test_list);              /* the new node is now the tail */
    return count;
}

msg=kmalloc(count+1,GFP_KERNEL): allocate memory for the data to be received from user space, with one extra byte for a terminating NUL (pop_stack later calls strlen on it).
temp=copy_from_user(msg,buf,count): copy the data from user space into the allocated memory.
node=kmalloc(sizeof(struct k_list),GFP_KERNEL): allocate a new node. Note that we need the size of the structure itself, not of a pointer to it.
node->data=msg: assign the received data to the node.
list_add(&node->test_list,tail): add the new node to the list. Note that the node after which the new node gets added is the tail; this makes sure the list behaves as a stack.
cnt++: increase the count of nodes present in the list.
tail=&(node->test_list): make tail point to the new node of the list. Note that we assign the address of test_list to tail, not the address of the node itself.

pop_stack:

int pop_stack(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (tail == &test_head) {            /* stack empty */
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if (new_node == 1) {                 /* first read of this node */
        node = container_of(tail, struct k_list, test_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {                    /* node fully read: unlink it */
        tail = (node->test_list).prev;
        list_del(&node->test_list);
        cnt--;
        new_node = 1;
    }
    return count;
}

In the pop operation we pop the last element inserted into the stack and then delete its node. As per the push_stack function, the variable tail points to the last element inserted into the stack.

According to the standard practice followed in the kernel, the read operation has to return the number of bytes of data transferred to user space, and user-space functions continue to read as long as they do not get 0 as the return value, which indicates the end of the read operation.

To make the proc entry work as a stack, we need to make sure that every read operation returns only one node. To ensure this, the code above uses the variables new_node and flag, which, after returning the data of one node or the empty-stack message respectively, make the return value 0, ensuring the read is terminated.

While popping data out of the stack, we first need to check whether the stack has data in it. This is done by comparing tail with test_head. If tail is pointing to test_head, the stack is empty and we return the string "Stack Empty". This is implemented by the following piece of code:

if (tail == &test_head) {
    msg = empty;
    if (flag == 1) {
        ret = emp_len;
        flag = 0;
    } else if (flag == 0) {
        ret = 0;
        flag = 1;
    }
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\nStack empty\n");
    return ret;
}

On the other hand, if the stack has data in it:

if (new_node == 1) {
    node = container_of(tail, struct k_list, test_list);
    msg = node->data;
    ret = strlen(msg);
    new_node = 0;
}
if (count > ret)
    count = ret;
ret = ret - count;
temp = copy_to_user(buf, msg, count);
printk(KERN_INFO "\n data = %s \n", msg);
if (count == 0) {
    tail = (node->test_list).prev;
    list_del(&node->test_list);
    cnt--;
    new_node = 1;
}
return count;

node=container_of(tail,struct k_list,test_list): get the pointer to the node which has tail, the last node of the list, as its member.
msg=node->data: assign the data in the node to msg.
ret=strlen(msg): get the length of the data to be transferred.
new_node=0: set new_node to 0 to indicate that one node has been read.

if(count>ret) { count=ret; } ret=ret-count;

Set count to the string length if count is larger than the string length, and reduce ret by count. ret becomes 0 whenever the requested count is greater than or equal to ret.
temp=copy_to_user(buf,msg,count): copy msg to user space.
if (count == 0) {
    tail = (node->test_list).prev;
    list_del(&node->test_list);
    cnt--;
    new_node = 1;
}

Once all the data from the node has been transferred, count becomes 0, and now we need to change tail to point to the node previous to the current node.

tail=((node->test_list).prev): assign to tail the address of the test_list of the previous node.
list_del(&node->test_list): delete the node from the list.
new_node=1: set new_node to 1 for the read of the next node.

The full code for the kernel module is given below.

stack_list.c

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/proc_fs.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/string.h>
#include <linux/uaccess.h>

int len, temp, cnt, i = 0, ret;
char *empty = "Stack Empty \n";
int emp_len, flag = 1;

struct k_list {
    struct list_head test_list;
    char *data;
};

static struct k_list *node;
struct list_head *tail;
struct list_head test_head;
int new_node = 1;
char *msg;

int pop_stack(struct file *filp, char *buf, size_t count, loff_t *offp)
{
    if (tail == &test_head) {            /* stack empty */
        msg = empty;
        if (flag == 1) {
            ret = emp_len;
            flag = 0;
        } else if (flag == 0) {
            ret = 0;
            flag = 1;
        }
        temp = copy_to_user(buf, msg, count);
        printk(KERN_INFO "\nStack empty\n");
        return ret;
    }
    if (new_node == 1) {                 /* first read of this node */
        node = container_of(tail, struct k_list, test_list);
        msg = node->data;
        ret = strlen(msg);
        new_node = 0;
    }
    if (count > ret)
        count = ret;
    ret = ret - count;
    temp = copy_to_user(buf, msg, count);
    printk(KERN_INFO "\n data = %s \n", msg);
    if (count == 0) {                    /* node fully read: unlink it */
        tail = (node->test_list).prev;
        list_del(&node->test_list);
        cnt--;
        new_node = 1;
    }
    return count;
}

int push_stack(struct file *filp, const char *buf, size_t count, loff_t *offp)
{
    msg = kmalloc(count + 1, GFP_KERNEL);   /* one extra byte for the NUL */
    temp = copy_from_user(msg, buf, count);
    msg[count] = '\0';                      /* pop_stack uses strlen(msg) */
    node = kmalloc(sizeof(struct k_list), GFP_KERNEL);
    node->data = msg;
    list_add(&node->test_list, tail);       /* insert after the current tail */
    cnt++;
    tail = &(node->test_list);
    return count;
}

struct file_operations proc_fops = {
    read: pop_stack,
    write: push_stack
};

void create_new_proc_entry(void)
{
    proc_create("stack", 0, NULL, &proc_fops);
}

int stack_init(void)
{
    create_new_proc_entry();
    INIT_LIST_HEAD(&test_head);
    tail = &test_head;
    emp_len = strlen(empty);
    return 0;
}

void stack_cleanup(void)
{
    remove_proc_entry("stack", NULL);
}

MODULE_LICENSE("GPL");
module_init(stack_init);
module_exit(stack_cleanup);

To compile the module, use the following Makefile:

ifneq ($(KERNELRELEASE),)
obj-m := stack_list.o
else
KERNELDIR ?= /lib/modules/$(shell uname -r)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KERNELDIR) M=$(PWD) clean
endif

Compile and insert the module into the kernel

$ make
$ insmod stack_list.ko

To see the output

$ echo "1" > /proc/stack
$ echo "2" > /proc/stack
$ echo "3" > /proc/stack
$ echo "4" > /proc/stack

Thus we have written 1, 2, 3 and 4 as data into four nodes, with 4 inserted last. Now, to pop data out of the stack:

$ cat /proc/stack
4
$ cat /proc/stack
3
$ cat /proc/stack
2
$ cat /proc/stack
1
$ cat /proc/stack
Stack Empty

Thus we can see that the proc entry is operating as a stack.

Related posts

Implementing stack using the list functions in kernel-2
Creating a read write proc entry in kernel versions above 3.10
Pointer to structure from its member pointer: container_of
Categories: FLOSS Project Planets

Porting UEFI to BeagleBoneBlack: Technical Details I

Sun, 2014-07-13 21:13

I’m adding a BeagleBoneBlack port to the Tianocore/EDK2 UEFI implementation. This post details the implementation specifics of the port so far.

About the hardware:
BeagleBoneBlack is a low-cost embedded board that boasts an ARM AM335x SoC. It supports Linux, with Android, Ubuntu and Ångström ports already available. It comes pre-loaded with the MLO and U-Boot images on its eMMC, which can be flashed with custom binaries. Bootup can also be done from a partitioned SD card or by transferring binaries over UART (ymodem) or USB (TFTP). The boot flow is presented here:

The Tianocore Project / Build System

The EDK2 framework provides an implementation of the UEFI specifications. It’s got its own customizable pythonic build system that works based on the config details provided through build meta-files. The build setup is described in this document.

(TL;DR: the build tool parses INF, DSC and DEC files for each package that describe its dependencies, exports and the back-end library implementations it shall use. This makes EDK2 highly modular to support all kinds of hardware platforms. It generates Firmware Volume images for each section in the Flash Description File, which are put into a Flash Description binary with addressing as specified in the FDF. The DSC specifies which library in code should point to which implementation, and the INF keeps a record of a module’s exports and imports. If these don’t match, the build simply fails.)

Implementation

I started out with an attempt to write a bare-metal binary that prints a few letters over UART, to get the hang of how low-level development works. Here's a great guide to the basics of bare-metal on ARM. All the required hardware has to be initialized in the binary before use, and running C requires an execution environment that provides stacks and handles placement of segments in memory. Since U-Boot already handles that in its SPL phase, I wrote a standalone that could be called by U-Boot instead.

The BeagleBoneBlackPkg is derived from the ArmPlatformPkg. I began by echoing the “second stage” steps mentioned here – implement the available libraries and perform platform-specific tasks – as I intended to take over boot from U-Boot/MLO. This also saved me from having to do the IRQ and memory initializations.

I’m using the PrePeiCore (and not Sec) module’s entry point to load the image. It builds the PPIs necessary to pass control over to PEI and calls PeiMain.

Running the FD:  The build generates an .Fd image that will be used to boot the device. The MLO binary I’m using is built to look for and launch a file named ‘u-boot.img’ on the MMC (there’s a CONFIG_ macro to change this somewhere in u-boot), so I just rename the FD to u-boot.img before flashing it.


Categories: FLOSS Project Planets

Download CentOS 7 ISO / DVD / x86_64 / i386 / 32-Bit / 64-Bit

Sun, 2014-07-13 02:09
Hello, Hello and welcome to the first CentOS-7 release. CentOS is an Enterprise-class Linux Distribution derived from sources freely provided to the public by Red Hat1. CentOS conforms fully with Red Hat’s redistribution policy and aims to have full functional compatibility with the upstream product. CentOS mainly changes packages to remove Red Hat’s branding and […]
Categories: FLOSS Project Planets

Android Version and Device Stats for LQ Native App II

Tue, 2014-07-08 16:23

Now that the native LQ android app is in the 5-10,000 download range, I thought I’d post an update to this previous post on the topic of Android version and device stats. See this post if you’re interested in browser and OS stats for the main LQ site.

Platform Version:
Android 4.4: 29.54%
Android 4.1: 20.42%
Android 4.0.3-4.0.4: 13.59%
Android 4.2: 12.49%
Android 2.3.3-2.3.7: 11.70%
Android 4.3: 9.27%
Android 2.2: 1.96%

 

Device:
Google Nexus 7 (grouper): 6.13%
Samsung Galaxy S3 (m0): 3.53%
Google Nexus 5: 2.75%
Samsung Galaxy S2: 2.28%
Samsung Galaxy S3: 2.20%
Google Nexus 7 (flo): 2.12%
Samsung Galaxy S4: 1.81%
Google Nexus 4: 1.73%
Samsung Galaxy Tab2: 1.49%

So, how has Android fragmentation changed since my original post in February of 2012? At first blush it may appear that it's actually more fragmented from a version perspective. Previously, the top two versions accounted for over 70% of all installs, while now that number is just 50%. That's misleading though, as almost 90% of all installs are now on a 4.x variant. This clustering around a much more polished version of Android, along with the fact that Google has broken so much functionality out into Google Play Services, means that from a developer's perspective things are significantly better than they were during the time-frame of my previous post. I will admit I'm surprised by the age of the top devices, but that may be specific to the LQ crowd (and it's no surprise to me to see the Nexus 5 as the second most popular phone).

–jeremy


Categories: FLOSS Project Planets

XPath and namespace

Mon, 2014-07-07 10:30

When your XSLT/Xpath search is not giving the desired results always check the namespace of the element you are using.

Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 19: Fedora Murder Trial

Thu, 2014-06-26 09:19

From the Bad Voltage site:

The whole team return (remarkably) to speak, weirdly, only about things beginning with the letter F. Myself, Bryan Lunduke, Jono Bacon and Stuart Langridge present Bad Voltage, with the following “F-ing” things:

  • Fire Phone: Amazon release a phone, and we decide whether it’s exciting or execrable
  • Firefox OS: the Mozilla project’s phone OS, reviewed by Stuart and discussed by everybody
  • Freshmeat: the late-90s web store of software for Linux has finally closed its doors. Reminiscence, combined with some thoughts on how and why the world has moved on
  • Fedora Project Leader: Matthew Miller, the newly-appointed leader of the Fedora Linux project, speaks about the direction that distribution is planning, working with large communities, and whether his job should be decided by a Thunderdome-style trial by combat
  • err… our Fabulous community: we catch up with what’s going on

Listen to: 1×19: Fedora Murder Trial

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets

Host your private cloud easily using ownCloud

Thu, 2014-06-26 05:47

Would you like to have the ease of use of cloud storage and file syncing, but without the trust issues or costs that come with using public cloud services? Do you like Dropbox but hesitate to use it? What you might be looking for is ownCloud, the open source software you can run to host your own private cloud storage.


The development of ownCloud was started by Frank Karlitschek in 2010 and over the years it has evolved into the most popular open source cloud storage and file syncing software suite in the world. The code is licensed using AGPLv3 and the open source project is hosted at owncloud.org. Karlitschek also started the company OwnCloud Inc. which provides an Enterprise Edition that is useful for larger entities that run big OwnCloud installations with thousands of users.

The main component of ownCloud is the server software, written in modern PHP, which uses a database (e.g. MariaDB) and files on disk as its backend. The server software provides both a browser-based UI and a WebDAV interface for syncing clients. There are syncing clients for Linux desktops, Windows, Mac OS X, Android and iOS, so with ownCloud you can have your files synced across all your devices.

Open source and open standards

One of the really nice things about ownCloud is that the architecture is built around open standards, and you can connect to ownCloud using any WebDAV-capable software. It is likely that you can make your mobile phone sync contacts and calendar events, using the vCard and iCal standards, directly with an ownCloud server without installing any new software. If you value your privacy a lot, you could build a pretty good setup using an ownCloud server and a Jolla, a phone which does not by default sync your data to any external server.

Open standards are also prominent when viewing the OwnCloud settings, that allow so many ways to integrate with various file storage backends and user authentication (SMB/CIFS, LDAP/AD etc).

Using WebDAV also has its drawbacks, like the need to transfer complete files instead of just the changed parts as protocols like rsync do. This issue is frequently raised on the syncing client's GitHub site, and hopefully it will be addressed in some way eventually. There are also some other small annoyances, but on the other hand it has been amazing to see how much ownCloud has developed and gained popularity during the last 1-2 years. Today ownCloud released the version 7 beta, once again bringing a bunch of new features and usability refinements of the old features.

And when it comes to features, ownCloud has an incredible number of them. Besides the basic file uploading, downloading and sharing options, there are loads of additional features that are not found in other similar software, such as the ability to create groups and define fine-grained access rules and link settings for files shared by the owner with others. Users can also view old revisions of files and recover them in case a file is accidentally deleted. There is a good search feature that also looks inside the contents of the files. The search query can be written or spoken, thanks to voice detection.

In the web user interface you can also edit some files directly inline in the browser. In fact ownCloud is one of the biggest drivers in the development of  WebODF, the browser based OpenDocument editor. If OwnCloud finds pictures among the files it hosts, it will automatically also show them in the gallery app.

Example ownCloud browser interface

OwnCloud apps

Wait, there’s even more! Aside from the file syncing and storage features, ownCloud also hosts contacts and calendars. By activating the so-called ownCloud apps, users can enable even more features and applications that utilise the data stored in ownCloud and that are used via the browser. There are multiple built-in apps ready, and more apps can be installed from the app directory at apps.owncloud.com. A popular integration is, for example, using the webmail app RoundCube inside ownCloud. Some companies have developed their own ownCloud apps and built a complete intranet around ownCloud.

Benefits of private cloud: security and costs

Deploying ownCloud or any other software to build your self-hosted private cloud solution of course requires a bigger initial investment compared to the public cloud offerings, where everything is hosted for you and you only need to set up the client side. One might wonder what the benefits of an on-premises solution are.

The most prominent benefit is increased security and privacy. Since the revelations in 2013 it has been a well-known fact that even respected western governments like the US force their own companies to give them full access to the data they host. Little has been published about the criminal acts that have been stopped by spying activities, so we can only conclude that this huge investment in online spying must be justified by economic gains in traditional industrial espionage. So it might be very relevant for a company to secure its data in a private cloud instead of a public cloud. On the other hand, most companies use Windows desktops to access all their data, so they will leak their data from the client side anyway. Also, most companies will not be able to hire security experts of the level the big cloud companies employ, so their installations are probably not as well secured as bigger sites. On the other hand, bigger sites are targets for a lot of active intrusions all the time, while small private sites are less likely to be even randomly targeted. Security is hard to do right, but at least it is good that an option like ownCloud is available.

Another, less prominent, benefit is cost. Cloud storage seems to be easy and cheap, but it is actually still many times more expensive than the underlying hardware and infrastructure. As the hardware market is very competitive, anybody can buy many terabytes of cheap hard drives and set up an ownCloud server big enough to host all the company data for the price of three months of rented 1 TB storage. Many companies also pay extra if they have a lot of traffic on their internet uplink. Hosting terabyte-scale storage on-premises will be a much cheaper solution, and accessing files from a locally hosted server will surely also be much faster than over a congested uplink.

How to get started using OwnCloud?

Anybody with access to a Linux server can simply head to owncloud.org, download it and start using it straight away. From a company perspective it might be good to be in contact with one of the official OwnCloud partners or some other Linux support company, like Seravo. OwnCloud is certainly worth keeping an eye on for any CTO who cares about the ability to control your own data.

Presentation in Finnish

The slides in Finnish presented at the Seravo Salad event on 2014-05-22 are also available:


Categories: FLOSS Project Planets

UEFI over BeagleBone Black: Notes

Wed, 2014-06-25 12:53

The Tianocore/EDK2 project provides an open-source implementation of the UEFI specification. It has its own Python-scripted build system that supports configuring the build parameters on the go using build metadata files [http://tianocore.sourceforge.net/wiki/Build_Description_Files]. These files decide which library instances are required for a package, which instance implementation is to be used, what interfaces it exports, the compiler specifications for the package, and the generated image’s flash layout.

I am working to bring up UEFI support for a BeagleBone Black. Currently, I am using u-boot’s SPL to call the UEFI image (by placing generated .Fd on an MMC as “u-boot.img”), which, in turn, would provide the UEFI console and kernel loading functionality.

Since SPL does memory, stack and IRQ initialization, the SEC/PEI phases have little work. Given the BBB SoC (an AM335x), all multicore and AArch64 code can be safely removed from the package. Since the UART is similar to the 16550 module, it can be written to by implementing SerialPortLib accordingly. Console services can be made available only after the EFI_BOOT_SERVICES table has been populated, which requires DXE phase completion.


Categories: FLOSS Project Planets

LinuxQuestions.org Turns Fourteen

Wed, 2014-06-25 11:40

I’m extremely proud to announce that exactly fourteen years ago today I made my very first post at LinuxQuestions.org. As has become tradition, here’s a quick post looking back on the past year and ahead to the next. 5,169,549 posts and 532,989 members (899,500 members have actually registered, but we have a very active pruning policy for members who have never posted) does not even begin to tell the story. As I’ve said previously, the community that has not only grown but flourished at LQ is both astounding and humbling. I’d like to once again thank each and every LQ member for their participation and feedback. I’d also like to thank the mod team, whose level-headed decisions and dedication have been a cornerstone of the site’s success. As part of our birthday celebration, we’ll be giving away Contributing Member updates, and even some LQ Merchandise. Visit this thread for more details.

This year has been another year of solid growth, both for LQ and The Questions Network. While we once again delayed the code update that we had planned for LQ, both ChromeOSQuestions.org and AndroidQuestions.org are running the latest platform. We have a couple of items to work out, but LQ should be moving to the new platform some time this year. Once that happens, we have some exciting new features and functionality we think you’ll enjoy. If you think there is anything we can do to improve, don’t hesitate to let us know.

–jeremy


Categories: FLOSS Project Planets


Creating checkerboard image using gimp

Tue, 2014-06-24 00:46
Here is how we can create a checkerboard background, or an image of a checkerboard, using gimp. Launch gimp and create a new image by selecting file->new.



Select the size of the image that is required. In this post we will use the size 640x480.



Once the new blank image is created, select the menu Tools->GEGL Operation.



In the pull-down menu for operations, select the option checkerboard.



The following menu should appear.



Width: width of the boxes

Height: height of the boxes

X offset: defines where the boxes start from at the left margin

Y offset: defines where the boxes start from at the bottom

Color, Other color: used to set the colors of the boxes

Click on OK to create a checkerboard image, which we can export as an image or use as a background for other layers.



Final image:




Categories: FLOSS Project Planets

CIVILIZED LIFE

Mon, 2014-06-23 18:50
Lostnbronx comments upon a country store that has opened in the area, how it has grown, and what it means for the community.
Categories: FLOSS Project Planets

Script for Check Bulk IP Ping / Status / Health | Linux

Wed, 2014-06-18 22:30
Hello, sharing a basic script which I shared in some LUG a few months ago. Sharing it here because it will help people who want to check the ping of bulk IP addresses and check status and health by pinging each IP address. Again, this is a very basic script and many changes are required to improve it, […]
Categories: FLOSS Project Planets

USPTO Affirms Copyleft-ish Hack on Trademark

Wed, 2014-06-18 18:00

I don't often say good things about the USPTO, so I should take the opportunity: the trademark revocation hack to pressure the change of the name of the sports team called the Redskins was a legal hack in the same caliber as copyleft. Presumably Blackhorse deserves the credit for this hack, but the USPTO showed it was sound.

Update, 2014-06-19 & 2014-06-20: A few have commented that this isn't a hack in the way copyleft is. They have not made an argument for this, only pointed out that the statute prohibits racially disparaging trademarks. I thought it would be obvious why I was calling this a copyleft-ish hack, but I guess I need to explain. Copyleft uses copyright law to pursue a social good unrelated to copyright at all: it uses copyright to promote a separate social aim — the freedom of software users. Similarly, I strongly suspect Blackhorse doesn't care one whit about trademarks and why they exist, or even that they exist. Blackhorse is using the trademark statute to put financial pressure on an institution that is doing social harm — specifically, by reversing the financial incentives of the institution bent on harm. This is analogous to the way copyleft manipulates the financial incentives of software development toward software freedom using the copyright statute. I explain more in this comment.

Fontana's comments argue that the USPTO press release is designed to distance itself from the TTAB's decision. Fontana's point is accurate, but the TTAB is ultimately part of the USPTO. Even if some folks at the USPTO don't like the TTAB's ruling, the USPTO is actually arguing with itself, not a third party. Fontana further pointed out in turn that the TTAB is an Article I tribunal, so there can be Executive Branch “judges” who have some level of independence. Thanks to Fontana for pointing to that research; my earlier version of this post was incorrect, and I've removed the incorrect text. (Pam Chestek, BTW, was the first to point this out, but Fontana linked to the documentation.)

Categories: FLOSS Project Planets

Updates, scripts and other stuff

Sun, 2014-06-15 12:10

Hey Guys! This post will be a bit different from the rest; it's just an update on my recent work on GitHub, plus some interesting reading I've done. It's been some time since my last update. These days I uploaded some of the work I've done to GitHub (some scripts and Puppet modules); some of it was done a while ago, so I collected that older work and am sharing it as usual, in case it may be useful to someone else. Of course, proposals for improvements to any of the projects, and new ideas, are very welcome.

    • New repo sysadmin-scripts : In this repo I added some already existing scripts I had in other repositories (linked as submodules) and uploaded others that I didn’t upload yet. I decided to create this repo to recollect some useful scripts that I use and are very useful in my day to day, so I’ll keep updated adding new ones under my needs.
    • New repo puppet-manifests: The same idea as the previous repository, but with modules for Puppet. There I upload manifests that can be useful in very common environments. I also added some submodules pointing to the Puppet modules my manifests require. I'll keep it updated with more of my Puppet modules.
    • New script tweet-planet-reports: This is a small script I recently wrote for the Spanish sysadmin planet planetasysadmin.com. It simply counts the number of contributions made by each blog on the planet and posts the result to the planet's Twitter account. It's useful for people who manage RSS planets and want to track the activity of member blogs. As a to-do, I have thought about including in each report a performance comparison against the previous report for each blog. Maybe if I have time… :D:D
    • Improved script check_http_requests: I added the following reports, generated from the access log file produced by apache2 / nginx:
      Show top 10 of source IPs.
      Show top 10 of pages requested.
      Show percentage of success and bad responses.
      Show top 5 pages requested per source IP.
    • New Perl module Redis-Interface-Client: This small module is just an interface on top of the existing Redis one, but I added some methods to make working with data structures easier, plus methods like replace, append or add that set a key only under certain conditions; check the documentation to see what each one does. In the future I would like to extend it to work with more complex data structures, like hashes of hashes or arrays of arrays. I'll see.
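The kind of reporting check_http_requests produces can be sketched with standard awk one-liners. This is a hypothetical illustration, not code from the actual script: the field positions assume the default Apache/nginx "combined" log format, and the sample log entries are made up.

```shell
# Build a tiny sample access log (combined format) to run the reports against.
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
10.0.0.1 - - [14/Jun/2014:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 512 "-" "curl/7.30"
10.0.0.1 - - [14/Jun/2014:10:00:01 +0000] "GET /about.html HTTP/1.1" 200 256 "-" "curl/7.30"
10.0.0.2 - - [14/Jun/2014:10:00:02 +0000] "GET /index.html HTTP/1.1" 404 128 "-" "curl/7.30"
EOF

# Top source IPs: field 1 is the client address.
echo "Top source IPs:"
awk '{print $1}' "$LOG" | sort | uniq -c | sort -rn | head -10

# Top pages requested: field 7 is the request path.
echo "Top pages requested:"
awk '{print $7}' "$LOG" | sort | uniq -c | sort -rn | head -10

# Percentage of each response status code: field 9 is the HTTP status.
echo "Status code percentages:"
awk '{codes[$9]++; total++} END {for (c in codes) printf "%s: %.1f%%\n", c, 100 * codes[c] / total}' "$LOG"
```

On a real server you would point LOG at the live access log instead of generating sample data.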

These have been my latest updates on GitHub. Now I would like to share some useful reading I've done; I found some of it while investigating an issue, and the rest just came up on Twitter / RSS:

  • A good read about the TCP TIME_WAIT state, how it works, and why you should think before touching the sysctl parameter net.ipv4.tcp_tw_recycle. A good blog to follow, by the way:

http://vincent.bernat.im/en/blog/2014-tcp-time-wait-state-linux.html

  • Good optimization guide for nginx:

http://blog.zachorr.com/nginx-setup/

  • Good explanation about the differences between kmod and akmod:

https://www.foresightlinux.se/difference-akmod-kmod/

  • Good tutorials for people getting started with the Perl Catalyst framework:

http://www.catalystframework.org/calendar/

https://metacpan.org/pod/Catalyst::Manual::Tutorial
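On the TIME_WAIT topic from the first item above: before touching any net.ipv4.tcp_tw_* sysctl, it helps to see how many sockets are actually in each TCP state. A quick, hypothetical sketch (the `ss -tan`-style output below is canned sample data, not from a real host):

```shell
# Canned sample in the shape of `ss -tan` output; on a real box you would
# pipe the real `ss -tan` into the same awk one-liner instead.
ss_output='State      Recv-Q Send-Q Local Address:Port   Peer Address:Port
ESTAB      0      0      10.0.0.5:22          10.0.0.9:53724
TIME-WAIT  0      0      10.0.0.5:80          10.0.0.7:41000
TIME-WAIT  0      0      10.0.0.5:80          10.0.0.8:41001'

# Tally connections per state (skip the header line); a pile of TIME-WAIT
# entries is normal on busy servers with many short-lived connections.
echo "$ss_output" | awk 'NR > 1 {states[$1]++} END {for (s in states) print s, states[s]}'
```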

Well, I hope the info is useful for you, and see you next time with more interesting stuff!!

Categories: FLOSS Project Planets

Tips – Huge discount on Linux steam games

Sat, 2014-06-14 10:08

When summer arrives, many game sites start selling off cheap games. We have found several games that work perfectly on Linux.

All games come from Green Man Gaming and need Steam.

Left 4 Dead Bundle

This pack contains:

  • Left 4 Dead
  • Left 4 Dead 2

£22.99 £5.74

 

 Half Life Complete

 

This pack contains:

  • Team Fortress Classic
  • Half-Life: Opposing Force
  • Half-Life
  • Half-Life: Blue Shift
  • Half-Life 2
  • Half-Life: Source
  • Half-Life 2: Deathmatch
  • Half-Life 2: Lost Coast
  • Half-Life 2: Episode One
  • Half-Life 2: Episode Two

£26.99 £6.74

 

Valve Complete Pack

(This pack contains a few games that only work on Windows, though)

This pack contains:

  • Counter-Strike
  • Team Fortress Classic
  • Day of Defeat
  • Deathmatch Classic
  • Half-Life: Opposing Force
  • Ricochet
  • Half-Life
  • Counter-Strike: Condition Zero
  • Half-Life: Blue Shift
  • Half-Life 2
  • Counter-Strike: Source
  • Half-Life: Source
  • Day of Defeat: Source
  • Half-Life 2: Deathmatch
  • Half-Life 2: Lost Coast
  • Half-Life 2: Episode One
  • Half-Life Deathmatch: Source
  • Left 4 Dead
  • Half-Life 2: Episode Two
  • Team Fortress 2
  • Portal
  • Left 4 Dead 2
  • Portal 2
  • Counter-Strike: Global Offensive

 £49.99 £16.99

Voucher code

You can make these deals even more extreme by using the 15% voucher code below:

ENCORE-IFDSAL-E15OFF (code comes from the GMG blog)

 

Categories: FLOSS Project Planets

Node.js Removes Its CLA

Wed, 2014-06-11 15:15

I've had my disagreements with Joyent's management of the Node.js project. In fact, I am generally auto-skeptical of any Open Source and/or Free Software project run by a for-profit company. However, I also like to give credit where credit is due.

Specifically, I'd like to congratulate Joyent for making the right decision today to remove one of the major barriers to entry for contribution to the Node.js project: its CLA. In an announcement today (see the section labeled “Easier Contribution”), Joyent stated that it no longer requires contributors to sign the CLA and will (so it seems) accept contributions simply licensed under the permissive MIT license. In short, Node.js is, as of today, an inbound=outbound project.

While I'd prefer it if Joyent would also switch the project to the Apache License 2.0 — or even better, the Affero GPLv3 — I realize that neither of those things is likely to happen. :) Given that, dropping the CLA is the next best outcome possible, and I'm glad it has happened.

For further reading on my positions against CLAs, please see these two older blog posts:

Categories: FLOSS Project Planets

Hard to install and use bumblebee in Foresight?

Wed, 2014-06-11 11:26

Someone once told me that users thought it was hard to install and use Bumblebee for Nvidia Optimus cards in Foresight.

So we listened and took care of it.

Now you only need to run two commands in a terminal and reboot, and you are done. No need to edit files or anything similar.

Read all about how you install it here: Bumblebee

There is probably no easier way to install it anywhere else either. Sure, we could bake things in so that one command does both steps, but we believe you can handle two commands :)

Categories: FLOSS Project Planets

Cloud and Internet Security

Wed, 2014-06-11 04:55
If you've been watching this blog you may have noticed that there hasn't been a lot of activity lately. Part of this has to do with me working on other projects. One of these is a report that I call "Cloud and Internet Security", which is basically a follow-up to "Building a Cloud Computing Service" and the "Convergence Effect". If you're curious, both documents were/have been submitted to various organisations where more good can be done with them. Moreover, I consider both works to be "WORKS IN PROGRESS" and I may make extensive alterations without reader notice. The latest versions are likely to be available here:
https://sites.google.com/site/dtbnguyen/
Cloud and Internet Security
ABSTRACT
A while back I wrote two documents called 'Building a Cloud Service' and the 'Convergence Report'. They basically documented my past experiences and detailed some of the issues that a cloud company may face as it is being built and run. Based on what has transpired since, a lot of the concepts mentioned in those documents are becoming widely adopted, or the industry is trending towards them. This document is a continuation of that work and will attempt to analyse the issues that are faced as we move towards the cloud, especially with regard to security. Once again, we will draw on past experience, research, as well as current events and trends in order to write this particular report.
Personal experience indicates that keeping track of everything and updating large-scale documents is difficult and, depending on the system you use, extremely cumbersome. The other thing readers have to realise is that a lot of the time, even if the writer wants to write the most detailed book ever written, it's quite simply not possible. Several of my past works (something such as this particular document takes a few weeks to a few months to write, depending on how much spare time I have) were written in my spare time, between work and getting an education. If I had done a more complete job they would have taken years to write, and by the time I had completed the work, updates in the outside world would have meant that at least some of the content would have been out of date. Dare I say it, by the time I have completed this report some of its content may already have come to fruition, as was the case with many of the technologies in the other documents. I very much see this document as a starting point rather than a complete reference for those who are interested in technology security.
Note that the information contained in this document is not considered to be correct, nor the only way in which to do things. It's a mere guide to the way things are and how we can improve on them. Like my previous work, it should be considered a work in progress. Also, note that this document has gone through many revisions, and drafts may have gone out over time. As such, there will be concepts that may have been picked up and adopted by some organisations, while others may have simply broken cover while this document was being drafted and sent out for comment. It also has a more strategic/business slant when compared to the original document, which was more technically orientated.
No illicit activity (as far as I know and have researched) was conducted during the formulation of this particular document. All information was obtained only from publicly available resources, and any information or concepts that are likely to be troubling have been redacted. Any relevant vulnerabilities or flaws that were found were reported to the relevant entities in question (months have passed since).
Categories: FLOSS Project Planets