FLOSS Project Planets
The GNU inetutils team is proud to present version 1.9.3 of the GNU networking utilities. The GNU Networking Utilities are the common networking utilities, clients and servers of the GNU Operating System.
The following is new in this release:
A long-standing inability to use names other than the canonical host name has been corrected. This means that a machine entry in the .netrc file is now used as expected. Previously, any alias name was replaced by the corresponding canonical name before the .netrc file was read.
The internal command ‘hash’ now accepts a letter suffix on the size argument, like ‘12k’ instead of 12288. A minor change was made to the command's syntax, allowing the size to be changed independently of activating hash marking. After a transfer, the summary gives the speed as ‘Mbytes/s’, ‘kbytes/s’, or ‘bytes/s’.
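The suffix convention can be illustrated with a small sketch (a hypothetical helper, not the inetutils source; it only shows the ‘12k’ → 12288 arithmetic):

```python
def parse_size(arg):
    """Parse a size argument with an optional binary suffix,
    e.g. '12k' -> 12288.  Illustrative only."""
    suffixes = {'k': 1024, 'm': 1024 ** 2, 'g': 1024 ** 3}
    arg = arg.strip().lower()
    if arg and arg[-1] in suffixes:
        return int(arg[:-1]) * suffixes[arg[-1]]
    return int(arg)

print(parse_size('12k'))  # 12288
```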
The .netrc file can be overridden by the environment variable NETRC. Of even higher precedence is the new option ‘-N/--netrc’. Whichever method selects the file, access is now denied unless it is a regular file.
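The precedence and the regular-file check can be sketched like this (a schematic illustration in Python, not the actual inetutils logic; the function name and behaviour are assumptions made for the example):

```python
import os
import stat

def resolve_netrc(option_path=None):
    """Pick the netrc file as described above: the -N/--netrc
    option wins, then the NETRC environment variable, then
    ~/.netrc.  Access is refused unless the result is a
    regular file.  A sketch, not the inetutils source."""
    path = option_path or os.environ.get('NETRC') \
        or os.path.expanduser('~/.netrc')
    try:
        mode = os.stat(path).st_mode
    except OSError:
        return None
    if not stat.S_ISREG(mode):
        return None  # deny anything that is not a regular file
    return path
```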
Command-line parsing has been improved on BSD and Solaris systems. On all systems, only changeable flags are touched.
The ability to use the full range of numerical facilities has been restored.
- ping, ping6
The ability to specify a pattern as payload has been corrected.
A new switch ‘-T/--local-time’ makes the service ignore a time stamp passed on by the remote host, recording instead the local time at the moment the message was received. As a short form of ‘--pidfile’, the switch ‘-P’ is new.
In common with other syslogd implementations, such as rsyslogd and sysklogd, there has long existed an attack vector based on large facility numbers, made public as CVE-2014-3684. This is now mended in our code base.
The ability to autologin a client, without using authentication, is now functional in the expected manner, i.e., the prompt for a user name is suppressed in favour of an immediate password prompt.
In a setting where the client is using a UTF-8 encoding, it was common to observe strange characters in most responses. This was caused by the server daemon, due to incomplete purging of internal protocol data. The issue should now be resolved.
Improved cooperation with servers like ‘whois.arin.net’, ‘whois.eu’, and ‘whois.ripe.net’.
Now for some details about my work. The work I have done so far relates to the layers feature.
The layers feature is almost done. A list of layers is generated in the left sidebar, and toggling the visibility of layers works as well.
Check out the code here.
Here is a screenshot of the layers feature:
Before toggling a layer:
After toggling layers:
The layers feature assumes that the generator provides a pointer to a QAbstractItemModel representing the model for layers, with support for Qt::CheckStateRole. Toggling a checkbox should automatically change the visibility of the corresponding layer.
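The contract can be sketched schematically. The real code would be a C++ QAbstractItemModel subclass; the pure-Python stand-in below (all names hypothetical) only mimics the data()/setData() round trip for a check-state role, with booleans in place of Qt::Checked/Qt::Unchecked:

```python
# Pure-Python stand-in for the Qt layer model described above.
class LayerModel:
    def __init__(self, names):
        # every layer starts visible (checked)
        self._layers = [{'name': n, 'visible': True} for n in names]

    def rowCount(self):
        return len(self._layers)

    def data(self, row, role):
        if role == 'DisplayRole':
            return self._layers[row]['name']
        if role == 'CheckStateRole':
            return self._layers[row]['visible']

    def setData(self, row, value, role):
        # toggling the checkbox changes the layer's visibility
        if role == 'CheckStateRole':
            self._layers[row]['visible'] = value
            return True
        return False

model = LayerModel(['background', 'annotations'])
model.setData(1, False, 'CheckStateRole')   # un-check "annotations"
print(model.data(1, 'CheckStateRole'))
```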
A few things which are left:
- Selection of appropriate icons in the left sidebar.
- Changing checkboxes to icons.
- Making the search bar in the layers view work properly.
Mainly interesting for the dataviz and the Google-Doc-driven backend. I wish they had published the script, though.
My work project for the last year or so was shown on Google I/O! And the reaction was seemingly positive, also in the tech press.
Of course, I'm only one of many developers.
The discussion is certainly not boring.
- Socialize Uber: It’s easier than you think. Given that the workers already own all the capital in the form of their cars, why aren’t they collecting all the profits? Worker cooperatives are difficult to start when there’s massive capital needed up front, or when it’s necessary to coordinate a lot of different types of workers. But, as we’ve already shown, that’s not the case with Uber. In fact, if any set of companies deserves to have its rentiers euthanized, it’s those of the “sharing economy,” in which management relies heavily on the individual ownership of capital, providing only coordination and branding.
- How to Socialize Uber: Uber promises investors it will soon be making mega-profits, but it also claims those profits just represent a return on its technology and risk-bearing. Certainly the money doesn’t come from exploiting Uber’s workers. What workers? No, no — you see, the drivers are merely Uber’s business partners, and you can’t exploit your business partner.
- Once a sure bet, taxi medallions becoming unsellable: In an April letter to creditors, New York taxi commission officials and other stakeholders, Freidman's attorney, Brett Berman, called on industry regulators and medallion lenders to restructure and extend loans for his client and reform the industry.
- Billionaire hedge-fund manager says Uber told him it might cut driver pay ‘because we can’: "You've got happy employees, you've got happy customers, you've got happy shareholders. The holy triumvirate are all really excited about your company. Why are you going to risk that and push the employees' salary down 5%?"
Callinicos simply responded "because we can."
- Elizabeth Warren: No Need to Stop Uber-ized Workforce, but Must Invest in Education: She returned to her argument, made several times during the interview, that the government should pour more investment into education and infrastructure. “We have to invest in the two places where it works,” she said. “We have to invest in brains and people who are willing to do the long, long arc research.”
- This lawyer fought for FedEx drivers and strippers. Now she's standing up for Uber drivers: By using contractors instead of employees, companies are not responsible for things like payroll taxes, job expenses, anti-discrimination protections or overtime pay. For bootstrapped startups, it's a cost-saving measure that can mean life or death.
But Liss-Riordan isn't drinking the same venture-capital-bought Kool-Aid as the startups that have built businesses around the 1099 economy. Rather, she views it as another example of companies using contract workers as a way to skirt their obligations as employers.
"I don’t believe this industry needs to be built on a system whereby the workers don’t receive any of the protections that we as a society have decided workers need to receive," she said. "I just don’t know how Uber can argue with a straight face that, as a $40 billion company, it can’t afford to insure its drivers, pay minimum wage, pay overtime, or reimburse drivers for their expenses."
- Uber Isn’t the Problem: If drivers on the Uber platform had better options available to them, if there were jobs that offered them higher wages and better working conditions, they’d presumably have already taken them. That means that if you’re appalled by Uber, your real problem is with every other option that the drivers who use it have for earning a living—which is entirely fair. But despair over the fact that many American workers aren’t commanding the wages and working conditions we’d want for them in an ideal world doesn’t seem like a sound reason for shutting Uber down, or regulating it out of existence.
In case you missed the latest news, Jonathan Riddell has been accused by the Ubuntu Community Council (CC) of breaking the Ubuntu Code of Conduct (CoC) and has been asked to resign from his position as leader of the Kubuntu project (a title which does not actually exist and which he never claimed to hold).
I had the chance to meet Jonathan when I joined Canonical in 2009. I was a bit intimidated during my first real-life Canonical meeting, but Jonathan took me around and went out of his way to introduce me to many of my then-new colleagues.
Since then he has always been one of the friendliest people I know. We often shared rooms during Canonical, Ubuntu or KDE events, and went on to be colleagues again at Blue Systems. I believe Jonathan's kindness is one of the reasons why the Kubuntu community has grown into such a welcoming and closely-knit group of people.
Sometimes passion carries us too far and we say or do things we should not, but until now all I have found is accusations and no proof of any such behavior from Jonathan. I am certainly biased, but since breaking the Ubuntu CoC is so unlike the Jonathan I know, I stand by his side. The CC should post real pointers to the repeated CoC breakage Jonathan is accused of. "Innocent until proven guilty": that is how real justice works. Publish proof of what you claim. Until such pointers are published, it all sounds like the CC was having a hard time giving precise answers to Jonathan's questions and opted to get rid of him instead of pushing for more answers.
PS: Before you ask: yes, I read all the long email threads and IRC logs I could find. While I found some rough exchanges, I don't think they qualify as breaking the Ubuntu CoC.
You might have already read this comment by RMS in the Guardian. That comment, and a recent discussion about the relevance of GPL changes after GPLv2, made me think again about the battle RMS started to fight. While some think RMS should "retire", I still fall short of my personal goal of not depending on non-free software and services. So for me this battle is far from over, and here is my personal list of "non-free debt" I have to pay off.

General purpose systems, aka your computer
Looking at the increasing list of firmware blobs required to use a GPU, wireless chipsets, and more and more wired NICs, the situation seems to be worse than in the late 90s. Back then the primary issue was finding supported hardware, but the driver itself was free. Nowadays even open-sourced firmware often requires obscure patched compilers to build. Looking at this stuff, I think the OpenBSD project got it right with its more radical position.
Oh and then there is CPU microcode. I'm not yet sure what to think about it, but in the end it's software and it's not open source. So it's non-free software running on my system.
Maybe my memory is blurred by the fact that the separation of firmware from the Linux kernel, and proper firmware loading, were implemented only years later. I remember the discussion about the pwc driver and its removal from Linux. Maybe the situation wasn't better at that time and the firmware was just hidden inside the Linux driver code?
On my system at work I have to add the Flash plugin to the list, due to my latest test with Prezi, which I'll touch on later.
I also own a few Humble Indie Bundles. I played parts of Osmos after a recommendation by Joey Hess, later played all the way through Limbo, and got pretty far with Machinarium on a Windows system I still had at that time. I also tried a few others but never got far or soon lost interest.
Another thing I cannot really get rid of is unrar, because of stuff I need to pull from xda-developers links just to keep a cell phone running. Update: Josh Triplett pointed out that unar is available in the Debian archive, and indeed it works on the rar file I just extracted.

Android ecosystem
I will soon get rid of a stock S3 mini and try to replace it with a Moto G loaded with CyanogenMod. That leaves me with a working phone with an OS that only works because of a shitload of non-free blobs. The time and work required to get there is another story. Among other things you need a new bootloader, which requires a newer fastboot than what we have in Jessie, and later you also need the newer adb to be able to sideload the CM image. There I gave in and just downloaded the prebuilt SDK from Google. And there you have another binary I did not even try to build from source. The same goes for the CM image itself, though that's not much different from using a GNU/Linux distribution if you ignore the trust issues.
It's hard to trust the phone I've built that way, but it's the best I can get at the moment with at least some bigger chunks of free software inside. So let's move on to the applications on the phone. I do not use Google Play, so I rely on f-droid and freeware I can download directly from the vendor.
- AndFTP: best sftp client I could find so far
- Threema: a bit (a single one) more trustworthy than WhatsApp; they started around the company of Michael Kasper
- Wunderlist: well done shared shopping list, also non-free webservice
- Opera: the compression proxy is awesome, also kind of a non-free webservice
This category mixes a lot with the stuff listed above; most of the entries are not only applications. In fact Threema and Wunderlist are useless without their backend services, and Opera is reduced to just a browser (one to be replaced with Firefox) if you discount the compression proxy.
The other big addition in this category is Prezi. We tried it out at work after it came to my attention via a post by Dave Aitel. It's kind of the poster child of non-freeness: it requires a non-free, unstable, insecure and halfway-deprecated browser plugin to work; you cannot download your result in a useful format; you have to buy storage for your presentation from this one vendor; and you have to pay if you want to keep your presentation private. It's the perfect lock-in situation. But it's still very convenient, prevents a lot of common mistakes you can make when creating a presentation, and they invented a new concept of presenting.
I know about impress.js (hosted on a non-free platform, by the way, but at least you can export your work from there) and I also know about hovercraft. I'm impressed by them, but they are still not close to the ease of use of Prezi. So here you can see very prominently the cost of free versus non-free software: invest the time to write something cool with CSS3 and impress.js, or pay Prezi to just click yourself through. To add something about the instability: I had to use a Windows laptop for presenting with Prezi because the Flash plugin on Jessie crashed in presentation mode. I have not yet checked the latest Flash update, but I guess it did not make the situation worse; it already is horrible.
Thinking a bit further, a Certification Authority is not only questionable due to the whole trust issue; it also provides OCSP responders as a kind of web service. And I have already experienced what the internet looks like when the OCSP systems of GlobalSign fail.
So there is still a lot to fight for and a lot of "personal non-free debt" to pay off.
Mother Nature apparently thinks that May is turning to April, or maybe March. But that's OK, too.
- Internet Trends 2015: Consumers’ Expectation That They Can Get What They Want With Ease & Speed Will Continue to Rise...
This Changes Fundamental Underpinnings of Business & Can Create Rising Demand for Flexible Workers
- Using Computer Vision to Increase the Research Potential of Photo Archives: Collaborating with the Frick Art Reference Library, I utilized TinEye’s MatchEngine image similarity service and developed software to analyze images of anonymous Italian art in their photo archive. The result was extremely exciting: it was able to automatically find similar images which weren’t previously known and confirm existing relationships. Analysis of some of the limitations of image similarity technology was also conducted.
- A Toolkit to Measure Basic System Performance and OS Jitter: To complement the great information I got on the “Systematic Way to Find Linux Jitter”, I have created a toolkit that I now use to evaluate current and future trading platforms.
In case this is useful, I have listed these tools, along with the URLs for the source code and a description of their usage. I am learning a lot by reading the source code and the associated blog entries.
- Systematic Process to Reduce Linux OS Jitter: Based on empirical evidence (across many tens of sites thus far) and note-comparing with others, I use a list of "usual suspects" that I blame whenever they are not set to my liking and system-level hiccups are detected. Getting these settings right from the start often saves a bunch of playing around (and no, there is no "priority" to this - you should set them all right before looking for more advice...).
- New C++ experimental feature: the tadpole operators. Visual Studio 2015 RC contains a pair of experimental operators, nicknamed tadpole operators. They let you add and subtract one from an integer value without needing parentheses.
- The tadpole operators explained: The __ENABLE_EXPERIMENTAL_TADPOLE_OPERATORS is just a red herring.
- The Counselor: In March 2015, I&A commissioned authors to write a series of narratives that investigated near future concerns around intelligent systems in warfare, urban design, medicine, and labor. These stories served as the centerpiece of a two-day intensive forum bringing together participants to identify the core set of challenges that consistently arise in deploying intelligent systems regardless of arena. "The Counselor," by Robin Sloan (Mr. Penumbra's 24-Hour Bookstore) focuses on the persuasive qualities of these systems in the medical context.
- Is there a way with Git to make future merges ignore version number difference in a pom file between branches? I am trying to find a way to make Git ignore pom version differences between branches. This works well in Perforce and I'm not having any luck reproducing the behavior with Git.
- Post Traumatic Crash Disorder & the 1962 Flash Crash: The enormous impact of 1929 and the Great Depression had outsize and lasting effects that haunted investors decades later.
- Soccer Superpower: The economic might of the United States in international soccer has indeed been realized: American sponsors, broadcasters, marketers, and apparel companies have funneled billions of dollars into the game; a pro league is thriving; millions of Americans watch European matches on television; and hundreds of thousands will attend European exhibition games on U.S. soil this summer. But the soccer cronies who have fed at the revenue trough didn’t anticipate the consequences of courting all that American money: It gave America the power to bring them down.
- Everything You Need to Know About FIFA’s Corruption Scandal: The Justice Department’s announcement primarily cites deals between FIFA, sports marketing groups, and broadcast corporations for the television rights to air the World Cup and other international soccer tournaments. Dating back to 1991, the indictment alleges, those involved conspired to receive bribes from marketing firms in exchange for exclusive television contracts—to the cumulative tune of more than $150 million. As Attorney General Loretta Lynch stated, “It spans at least two generations of soccer officials who, as alleged, have abused their positions of trust to acquire millions of dollars in bribes and kickbacks.”
For the first time in the 30 years that I've lived in Northern California, the Golden State Warriors are in the NBA finals!
The series against Houston was funny: there were three very close games and two massive blowouts. For some reason it felt like the series against Memphis was more challenging for the Warriors; I think that was partly because Houston was exhausted after the effort it took to get past the Clippers, and partly because the Warriors, being a very versatile team, match up well against Houston, who can't adjust to different styles as rapidly as the Warriors can.
Now comes their hardest test, playing the Cleveland Cavaliers and LeBron James, who is as dominant and magnificent an individual player as has ever existed in the NBA, at least since Wilt Chamberlain was playing. (Pretty-well-known trivia fact: when Wilt Chamberlain scored 100 points that strange night in Hershey Pennsylvania, he was playing on the (Philadelphia) Warriors team.)
Although the media will be all about Stephen Curry vs. LeBron James, that's not really how the finals will play out.
The real issue will be LeBron James versus the Warriors defense, which means the players you need to start getting familiar with are the ones you might not have paid so much attention to: Draymond Green, Andre Iguodala, Harrison Barnes, Festus Ezeli, Klay Thompson.
There is no single player who can come close to guarding LeBron James, but the Warriors don't play defense with a single player, so I think there is hope.
We'll have to wait a week to see; the finals start Thursday, June 4!
This is an overview of a planned series of articles on the new features that are available in Fediz 1.2.0, which is a new major release of the project. Subsequent articles will go into more detail on the new features, which are as follows:
- Dependency update to use CXF 3.0.x (3.0.4)
- A new container-independent CXF-based plugin is available
- Logout support has been added to the plugins and IdP
- A new REST API is available for configuring the IdP
- Support for authenticating to the IdP using Kerberos has been added
- Support for authenticating to the IdP using a client certificate has been added
- It is now possible to use the IdP as an identity broker with a SAML SSO IdP
- Metadata support has been added for the plugins and IdP
The Packt Free Learning Forever event is on; check it out. It's a great way to get some really good books to expand your library and learn some useful new skills.
On Tuesday, Ben Darnell released Tornado 4.2, with two new modules: tornado.locks and tornado.queues. These new modules help you coordinate Tornado's asynchronous coroutines with patterns familiar from multi-threaded programming.
I originally developed these features in my Toro package, which I began almost three years ago, and I'm honored that Ben has adopted my code into Tornado's core. It's a bit sad, though, because this is the end of the line for Toro, one of the best ideas of my career. Skip to the bottom for my thoughts on Toro's retirement.
The classes Condition and Queue are representative of Tornado's new features. Here's how one coroutine signals another, using a Condition:

    condition = locks.Condition()

    @gen.coroutine
    def waiter():
        print("I'll wait right here")
        yield condition.wait()  # Yield a Future.
        print("I'm done waiting")

    @gen.coroutine
    def notifier():
        print("About to notify")
        condition.notify()
        print("Done notifying")

    @gen.coroutine
    def runner():
        # Yield two Futures; wait for waiter() and notifier() to finish.
        yield [waiter(), notifier()]

    io_loop.run_sync(runner)
This script prints:

    I'll wait right here
    About to notify
    Done notifying
    I'm done waiting
As you can see, the Condition interface is close to the Python standard library's Condition. But instead of coordinating threads, Tornado's Condition coordinates asynchronous coroutines.
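For comparison, here is a sketch of the same signalling pattern with the standard library's threading.Condition. Note the differences: the thread version must hold the lock around wait() and notify(), and an Event is used here (an addition of mine, not part of the Tornado example) to avoid notifying before the waiter is waiting:

```python
import threading

condition = threading.Condition()
started = threading.Event()
log = []

def waiter():
    with condition:          # the thread version must hold the lock
        log.append("I'll wait right here")
        started.set()        # tell the notifier we are about to wait
        condition.wait()     # releases the lock while waiting
        log.append("I'm done waiting")

def notifier():
    started.wait()           # don't notify before the waiter waits
    with condition:
        log.append("About to notify")
        condition.notify()
        log.append("Done notifying")

t = threading.Thread(target=waiter)
t.start()
notifier()
t.join()
print(log)
```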
Tornado's Queue is similarly analogous to the standard Queue:

    q = queues.Queue(maxsize=2)

    @gen.coroutine
    def consumer():
        while True:
            item = yield q.get()
            try:
                print('Doing work on %s' % item)
                yield gen.sleep(0.01)
            finally:
                q.task_done()

    @gen.coroutine
    def producer():
        for item in range(5):
            yield q.put(item)
            print('Put %s' % item)

    @gen.coroutine
    def main():
        consumer()        # Start consumer.
        yield producer()  # Wait for producer to put all tasks.
        yield q.join()    # Wait for consumer to finish all tasks.
        print('Done')

    io_loop.run_sync(main)
This will print:

    Put 0
    Put 1
    Put 2
    Doing work on 0
    Doing work on 1
    Put 3
    Doing work on 2
    Put 4
    Doing work on 3
    Doing work on 4
    Done
Tornado's new locks and queues implement the same familiar patterns we've used for decades to coordinate threads. There's no need to invent these techniques anew for coroutines.
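The thread-based analogue of the queue example looks almost identical, which is the point. A sketch with the standard library's queue.Queue (output is collected in a list rather than printed, since thread scheduling makes the interleaving nondeterministic):

```python
import queue
import threading

q = queue.Queue(maxsize=2)
output = []

def consumer():
    while True:
        item = q.get()
        try:
            output.append('Doing work on %s' % item)
        finally:
            q.task_done()

def producer():
    for item in range(5):
        q.put(item)  # blocks while the queue is full
        output.append('Put %s' % item)

threading.Thread(target=consumer, daemon=True).start()
producer()
q.join()  # wait until every item has been processed
output.append('Done')
print(output)
```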
I was inspired to write these classes in 2012, when I was deep in the initial implementation of Motor, my MongoDB driver for Tornado. The time I spent learning about coroutines for Motor's sake provoked me to wonder, how far could I push them? How much of the threading API was applicable to coroutines? The outcome was Toro—not necessarily evidence of my genius, but a very good idea that led me far. Toro's scope was straightforward, and I had to make very few decisions. The initial implementation took a week or two. I commissioned the cute bull character from Musho Rodney Alan Greenblat. The cuteness of Musho's art matched the simplicity of Toro's purpose.
When I heard about Guido van Rossum's Tulip project at his PyCon talk in 2013, I thought he could use Toro's locks and queues. It would be an excuse for me to work with Guido. I found that Tulip already had locks, implemented by Nikolay Kim if I remember right, but it didn't have queues yet so I jumped in and contributed mine. It was a chance to be code-reviewed by Guido and other Python core developers. In the long run, when Tulip became the asyncio standard library module, my queues became my first big contribution to the Python standard library.
Toro has led me to collaborate with Guido van Rossum and Ben Darnell, two of the coders I admire most. And now Toro's life is over. Its code is split up and merged into much larger and better-known projects. The name "Toro" and the character are relics. When I find the time I'll post the deprecation notice and direct people to use the locks and queues in Tornado core. Toro was the most productive idea of my career. Now I'm waiting for the next one.
I am working on background material to support tutorial sessions. If there's one hard thing about giving a tutorial, it's getting everyone on the same page without anyone being left behind. All of a sudden, you need to think about *everyone's* environment. Not just yours, not even just most people's, but everyone's.
There are a few technologies for setting up environments, plus some entirely different approaches. My goal is to present people with multiple paths to success, without having to think of everything.
I'll be looking at:
-- Virtualenv and Python packages
-- Virtual machine images
-- Vagrant automatic VM provisioning
-- Alternative Python distributions
-- Using web-based environments rather than your own installation
Why is this all so complicated? Well, without pointing any fingers, Python packaging alone won't get the job done for scientific packages. There's no getting around the fact that you will need to install some packages into the base operating system, and there is no good, well-supported path to make that easy, particularly if you would prefer to do it without modifying the base system. Then there's the layer of helping a room full of people all get the job done in about twenty minutes. Even with a room full of developers, it's going to be a challenge.
Let's take a tour of the options.
One -- Virtualenv and Python Packages
This option is the most 'pythonic' but also, by far, the least likely to get the job done. The reason is the dependency on complex scientific libraries, which the user would then have to hand-install by following a list of instructions. It's doable, but I won't know up front what the differences between, say, yum and apt are going to be, let alone what the differences between operating system versions might be. Then there will be some users on OSX (hopefully using either macports or brew) and potentially some on Windows. In my experience there are differences in package names across those systems, and at times there may be critical gaps or versioning differences. Furthermore, the relevant Python 3 vs 2.7 packages may differ. It is basically just too hard to use Python's inbuilt packaging mechanism to handle a whole room full of individual development platform differences.

Two -- Virtual Machine Images
This approach is fairly reliable, but feels kind of clumsy to me, and isn't necessarily very repeatable. While not everyone is going to have Virtualbox (or any other major virtualiser) installed, most people will be able to use this technology on their systems. There may be a few who will need to install Virtualbox, but from there it really should 'just work'.
VM files can be shared with USB keys or over the network. So long as you bring along a good number of keys it should be mostly okay. A good tip, though: bring along keys of a couple of different brands. I have had firsthand experience of specific brands of USB key and computer just not getting along.
The downside is that while this will work in a tutorial setting, virtual machines can be slow, and don't necessarily set up the attendees with the technology they should be using going forward. They may find themselves left short of being able to work effectively in their own environments later.

Three -- Vagrant Automatic VM Provisioning
The level up from supplying a base VM is using Vagrant (www.vagrantup.com). It allows you to specify the configuration of the base machine and its packages through a configuration file, so the only thing you need to share with people is a simple file. Rather than having to distribute large virtual machine files, which are also hard to version, you share a lightweight configuration file that can be easily versioned and sent around. The only downside is that each attendee will need to download the base VM image through the Vagrant system, which will hit the local network. Running a tutorial is an exercise in digital survivalism; it's best not to rely on any aspect of supporting technology.
I have also had a lot of trouble trying to install Vagrant boxes. I'm not really sure what the source of the issues was, and I'm not really sure why it started working either. I just know I'm not going to trust it in a live tutorial environment. Crossing it off for now.
Four -- Alternative Python Distributions
This could be a really good option. The two main distributions that I'm aware of are Python(x,y) and Anaconda. Both seem solid, but Anaconda probably has more mindshare, particularly for scientific packages. For the purposes of machine learning and data science, that is going to be very relevant. Many people support using the Anaconda distribution by default, but that's not my first option.
I would recommend Anaconda in corporate environments, where it's useful to have a singular, multi-capable installation which is going to be installable onto enterprise linux distributions but still have all the relevant scientific libraries. I would recommend against it on your own machine, because despite its general excellence, I have found occasional issues when trying to install niche packages.
Anaconda would probably also be good in the data centre or in a cloud environment, where things can also get a little wild when installing software. It's probably a good choice for system administrators as well. Firstly, installed packages won't interfere with the system Python. Secondly, it allows users to create user-space deployments with their own libraries that are isolated from the central libraries. This helps with managing robustness. Standard Python will do this with the 'virtualenv' package, so there are multiple ways to achieve this goal.
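That user-space isolation is easy to demonstrate with the standard library's venv module (a minimal sketch using a throwaway directory; with_pip=False just keeps the demo fast):

```python
import os
import tempfile
import venv

# Create an isolated environment in a throwaway directory.
env_dir = tempfile.mkdtemp(prefix='demo-env-')
venv.create(env_dir, with_pip=False)

# The environment carries its own interpreter configuration,
# separate from the system Python.
print(os.path.exists(os.path.join(env_dir, 'pyvenv.cfg')))
```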
Five -- Using Web-Based Environments Rather Than Your Own Installation
This is really about side-stepping the issue rather than fixing it as such. It's not free from pitfalls, because there are still browser incompatibilities to consider. However, if your audience can't manage to have an up-to-date version of either Firefox or Chrome installed, then things are likely to be tricky anyway. Up-to-date versions of Internet Explorer are also likely to work, though I haven't tested them to any degree. You'll also need to understand the local networking environment to make sure you can run a server and that attendees will be able to access it. You could host an ad-hoc network on your own hardware, but I'm a bit nervous about that approach.
Perhaps, if I have some spare hardware, I'll expose something through a web server. Another alternative is to demonstrate (for example) the Kaggle scripting environment.
Conclusions
I think I have talked myself around to providing a virtual machine image via USB keys. I can build the environment on my own machine, verify exactly how it is set up, then provide something controlled to participants.
In addition, I'll supply the list of packages that are in use, so that people can install them directly onto their own system if desired. This will be particularly relevant to those looking to exploit their GPUs to the maximum.
Finally, I'll include a demo of the Kaggle scripting environment for those who don't really have an appropriate platform themselves.
I'd appreciate comments from anyone who has run or attended tutorials and has an opinion about how best to get everyone up and running...
The MetadataService class is available on a "metadata" path and provides a single @GET method that returns the service metadata in XML format. It has the following properties which should be configured:
- String serviceAddress - The URL of the service
- String assertionConsumerServiceAddress - The URL of the RACS. If it is co-located with the service, then it can be the same URL as for the serviceAddress.
- String logoutServiceAddress - The URL of the logout service (if available).
- boolean addEndpointAddressToContext - Whether to add the full endpoint address to the values configured above. The default is false.
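The class described above maps naturally onto a JAX-RS resource. The following is a minimal, hypothetical sketch only: the annotation declarations are local stand-ins for the real javax.ws.rs ones (so the snippet compiles without the JAX-RS jar), and the XML it emits is illustrative, not the service's actual metadata format.

```java
// Stand-ins for the real JAX-RS annotations (javax.ws.rs / jakarta.ws.rs),
// declared locally so this sketch is self-contained.
@interface Path { String value(); }
@interface GET {}
@interface Produces { String value(); }

@Path("metadata")
class MetadataService {

    // Properties as documented above; addEndpointAddressToContext defaults to false.
    private String serviceAddress;
    private String assertionConsumerServiceAddress;
    private String logoutServiceAddress;
    private boolean addEndpointAddressToContext = false;

    // Setters used by the container/IoC framework to configure the service.
    public void setServiceAddress(String v) { serviceAddress = v; }
    public void setAssertionConsumerServiceAddress(String v) { assertionConsumerServiceAddress = v; }
    public void setLogoutServiceAddress(String v) { logoutServiceAddress = v; }
    public void setAddEndpointAddressToContext(boolean b) { addEndpointAddressToContext = b; }

    @GET
    @Produces("application/xml")
    public String getMetadata() {
        // Illustrative only: a real implementation would serialize the
        // configured endpoints into the service's metadata XML schema,
        // and, if addEndpointAddressToContext were true, append the full
        // endpoint path to each configured address.
        StringBuilder sb = new StringBuilder("<metadata>");
        sb.append("<service>").append(serviceAddress).append("</service>");
        sb.append("<racs>").append(assertionConsumerServiceAddress).append("</racs>");
        if (logoutServiceAddress != null) {
            sb.append("<logout>").append(logoutServiceAddress).append("</logout>");
        }
        return sb.append("</metadata>").toString();
    }
}
```

Note that, as described above, the RACS address may simply be set to the same URL as the service address when the two are co-located.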
Barcelona is one of the most cosmopolitan cities in Europe, and we want to bring that same spirit to the Coding and Development track. This is an exciting moment in the web development industry and we plan to take advantage of it. The DrupalCon Barcelona Coding and Development track is focused on preparing developers for the future of the web development universe.
Any morning that starts with a DrupalTour journey is a great one ;) Especially when the destination is only 70 km away. This time we visited Rivne. Welcome on board!
We decided to change the concept of the event slightly. It's much more convenient to communicate with visitors in the calm atmosphere of conference halls, where we can all concentrate purely on the talks and questions. Read more
In the past three posts we've looked at how the Charlottesville Expense Budget might be made more transparent through maps and visualizations. The final stage in our process is getting the data we've collected added to a larger data set - in this case we're going to use openspending.org as a repository for our work. This means that even if this site fails (gasp) the work we do will not be lost, and may contribute to a larger and greater good (perhaps a stretch right now, whatev).
Using views data export we can create a CSV file that has all the needed fields for openspending.org - they're a pretty flexible group: Amount and Date are the only required fields, and we've added Merchant (thanks to the SCC data for that) as well as categories and locations (thanks to John Pfaltz for that).
Jono Bacon, Stuart Langridge and I present Bad Voltage, in which Bryan is sadly and unavoidably absent, we discuss relationships between the Ubuntu and Kubuntu community councils, we ask you to tell us which bits you like, there are once again accusations that eating yoghurt is a bad personality trait, and:
- 00:01:57 Bad Voltage Fixes the F$*%ing World: we pick a technology or company or thing that we think isn’t doing what it should be, and discuss what it should be doing instead. In this first iteration, we talk about Mozilla
- 00:28:40 Meditation is reputedly a good way to relieve stress and stay centred, and we look at HeadSpace.com who offer a purchasable digital set of meditation tapes and guidebooks, as well as some brief diversions into the nature of relaxation and the voice of Jeff Bridges
- 00:44:45 Rick Spencer, Canonical’s VP of Ubuntu engineering and services, talks about Canonical’s focus, the recent announcements around phones and “internet of things” devices, and how community feelings about Ubuntu’s direction dovetail with Canonical’s goals
- 01:06:12 We’ve talked about 3d printers in the past, in the context of you owning one, but there are online services which allow you to upload a 3d design and then will print it in a variety of materials and send it back to you in the post. Could this be the way that 3d printing really reaches the mainstream?
Listen to 1×43: Got The Om On
As mentioned here, Bad Voltage is a project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. So head over to the Bad Voltage website, take a listen, and let us know what you think.
A problem we hear about quite often is a huge cache_form table. Drupal by default caches all form data (typically for 6 hours). During peak traffic times, that can be a lot of form data being cached. When the form cache is cleared during Drupal's cron, the following DELETE query is used:
DELETE FROM cache_form WHERE (expire <> 0) AND (expire < UNIX_TIMESTAMP(NOW()));
However, InnoDB does not release disk space back to the file system on DELETE queries, and that can easily lead to a very large .ibd file for that table.
The three most common "fixes" we see being discussed are:
- mysqldump - Many people recommend exporting the table via mysqldump and then importing it again. While this does reclaim the disk space, your cache_form table will be locked during the dump and the import, and any users who filled out forms during the gap between the two will have their form data lost.
- OPTIMIZE TABLE (or ALTER TABLE ... ENGINE=InnoDB) - Another recommendation often seen is to run OPTIMIZE TABLE on the cache_form table. However, this will also negatively impact your users by locking the cache_form table during the process. The ALTER TABLE is also mentioned here because it essentially does the same thing on InnoDB.
- TRUNCATE - Probably the most common recommendation we see is to simply TRUNCATE the cache_form table. That will instantly free up all of the disk space that table is using, but it will also delete ALL cached form data.
Fortunately, Percona Toolkit contains a great tool for doing online schema changes, and it even works if you are running a cluster. We have a client whose cache_form table legitimately grows to ~20GB during peak traffic times, but whose site has almost no form usage outside of normal business hours. Running the following as a maintenance task just before the start of the day, when the least amount of data legitimately resides in cache_form, typically shrinks the table down to less than 50MB:
pt-online-schema-change --host 127.0.0.1 --port 3306 --user bender --alter "ENGINE=InnoDB" D=database,t=cache_form --execute
This runs an ALTER TABLE, but with a few extra things going on under the hood. Instead of simply copying the table, it also creates triggers on the original table so that any new data written to the original table is also written to the new table. When the process is done, it atomically renames the tables and drops the original.
Oh, man. After coming back from DrupalCon Los Angeles with my team at ThinkShout, my brain still hurts... but in a good way. You know, in that same way you feel after binge watching Dr. Who on Netflix leading up to the season finale where The Doctor and Clara… oops. No spoilers. Point being: it’s a feeling of intense brain explosion followed by inspiration and motivation to get up and kick some ass.
Besides attending some fantastic design and UX sessions, I also presented my own session about designing on a budget and led a BoF about design and prototyping in Drupal. I made a lot of awesome new friends and reconnected with some old ones as well.
All in all, it was a great conference and LA was a fantastic host city. My team from ThinkShout and I covered a lot of ground - we found some great bars and restaurants, including my favorite stop, Grand Central Market (where I had an amazing octopus tostada from La Tostaderia).
But anyway, you’re not here to read about bars and restaurants in LA. Let’s talk Design at DrupalCon.
One thing that’s really impressed me about DrupalCon is just how much the breadth of UX and Design content has expanded over the last few years. This was my fourth DrupalCon: my first was 2011 in Chicago, followed by ‘13 in Portland and ‘14 in Austin. I was excited to see there were several sessions in the UX and Design track this year that were less technical, and more about design thinking and problem solving. Personally, I enjoy these kind of talks because I tend to walk away with some new insights into my own process. Design is not the same thing as Front End Development, and DrupalCon is finally realizing and embracing this.
Common Design Themes from DrupalCon LA
Attending DrupalCon is a great way to get your finger on the pulse of what’s happening in the Drupal community. I’ve noticed a lot of changing trends in the years since my first DrupalCon, starting with the prevalence of responsive design. In the first DrupalCon session I gave in Portland in 2013, only about half the room raised their hands when I asked how many had worked on a responsive Drupal site. This year, nearly everyone raised their hand when I asked the same question. At this point, responsive design is implicit when designing a new Drupal site.
Here are some other design and UX trends from this year’s DrupalCon:
People are applying design thinking to much more than how a website looks. Megan Erin Miller, a designer at Stanford University, gave a fantastic talk about using Service Design to design end-to-end experiences. According to Erin, "designing a website is not just about designing good user experience. It’s about designing new processes, new identities, and new partnerships." I couldn’t agree more. Erin compared the process of building a website to designing a theme park. When you go to a theme park, your experience isn’t just about what happens when you ride the roller coaster. A good theme park experience starts when you see an ad on TV or get an email offer and book your trip online or over the phone. When you get to the theme park, your whole experience is planned and designed, from the moment you walk in the gate, to when you queue up in line for the roller coaster, and on to dining and buying souvenirs. As designers, we should be thinking of our websites as products, considering how our users interact with those products across all possible channels, and not just what browser they’re using on what device. I’m always looking for new perspectives for my personal design process, and I hope to use some service design techniques in my client work.
Components, components, and more components! Just as the topic of "responsive design" dominated DrupalCon’s design sessions in years past, this year’s hot topic was component-based design. As websites and web apps get more complex and responsive, design needs to be streamlined and simplified. One way we can do that is to design modularly. Gone are the days of creating unique layouts for every page on a website (phew!). Instead, we need to be creating design systems that can be applied efficiently across an entire responsive website. Two great component-based design sessions from DrupalCon LA were “The New Design Workflow” and “To The Pattern Lab!” In addition to these sessions, I also attended a very informative BoF about CSS Style Guides led by the one and only John Albin Wilkins. At ThinkShout, we already take a component-based approach to design, but I certainly learned some great new ideas and techniques!
Longform Storytelling is the new black. As Drupal shifts its focus towards publishing and news, content creators are embracing rich longform articles and stories. According to Kristina Bjoran and Courtney Clark of Forum One, "People generally learn more and remember more when more of their senses are engaged by a story. Stories that include images get about twice the engagement as text-only stories. Stories told with visual elements are instantly captivating. The more senses that are engaged, the more emotions will be engaged and the more memorable the experience will be." We are including longform content features in many of our new clients’ websites at ThinkShout, and it was really great to hear how other industry leaders do it successfully as well.
Each year, the Drupal Association puts more thought into diversifying the session lineup, and it shows. There was a very conscious effort to get more design and UX content, as well as speakers from diverse experiences and backgrounds. To that, I say "Huzzah!" As someone who’s been designing Drupal sites for many years now, it’s great to see the design process being treated as more than just “making it look pretty.” Design and UX is now a core component of DrupalCon, and I’m proud to have helped along the way.
After a week of learning and sharing new ideas, meeting amazing people, and eating some darn good Mexican food, my brain is full and my heart is heavy. I can’t wait to see you all next year!