FLOSS Project Planets

Vasudev Ram: PDF in a Net, with Netius, a pure Python network library

Planet Python - 3 hours 45 min ago

By Vasudev Ram


I came across Netius, a pure Python network library, recently.

Excerpt from the Netius home page:

[ Netius is a Python network library that can be used for the rapid creation of asynchronous non-blocking servers and clients. It has no dependencies, it's cross-platform, and brings some sample netius-powered servers out of the box, namely a production-ready WSGI server. ]

Note: They mention some limitations of the async feature. Check the Netius home page for more on that.

To try out netius a little (not the async features, yet), I modified their example WSGI server program to serve a PDF of some hard-coded text, generated by xtopdf, my PDF creation library / toolkit.

The server, netius_pdf_server.py, running on port 8080, generates a PDF of some text, writes it to disk, then reads that PDF back from disk and serves it to the client.

The client, netius_pdf_client.py, uses the requests Python HTTP library to make a request to that server, gets the PDF file in the response, and writes it to disk.

Note: this is proof-of-concept code, without much error handling or refinement. But I did run it and it worked.

Here is the code for the server:
# netius_pdf_server.py

import time
from PDFWriter import PDFWriter
import netius.servers

def get_pdf():
    # Generate the PDF with xtopdf's PDFWriter, then read it back from disk.
    pw = PDFWriter('hello-from-netius.pdf')
    pw.setFont('Courier', 10)
    pw.setHeader('PDF generated by xtopdf, a PDF library for Python')
    pw.setFooter('Using netius Python network library, at {}'.format(time.ctime()))
    pw.writeLine('Hello world! This is a test PDF served by Netius, ')
    pw.writeLine('a Python networking library; PDF created with the help ')
    pw.writeLine('of xtopdf, a Python library for PDF creation.')
    pw.close()
    with open('hello-from-netius.pdf', 'rb') as pdf_fil:
        pdf_str = pdf_fil.read()
    return len(pdf_str), pdf_str

def app(environ, start_response):
    status = "200 OK"
    content_len, contents = get_pdf()
    # WSGI header values must be strings, so stringify the content length.
    headers = [
        ("Content-Length", str(content_len)),
        ("Content-Type", "application/pdf"),
        ("Connection", "keep-alive"),
    ]
    start_response(status, headers)
    yield contents

server = netius.servers.WSGIServer(app=app)
server.serve(port=8080)
In my next post, I'll show the code for the client, and the output.

You may also like to see my earlier posts on similar lines, about generating and serving PDF content using other Python web frameworks:

PDF in a Bottle, PDF in a Flask and PDF in a CherryPy.

The image at the top of this post is of Chinese fishing nets, a tourist attraction found in Kochi (formerly called Cochin), Kerala.

- Enjoy.

- Vasudev Ram - Dancing Bison Enterprises

Signup for email about new products from me.

Contact Page

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-10-30

Planet Apache - Thu, 2014-10-30 18:58
  • IT Change Management

    Stephanie Dean on Amazon’s approach to CMs. This is solid gold advice for any company planning to institute a sensible technical change management process.

    (tags: ops tech process changes change-management bureaucracy amazon stephanie-dean infrastructure)

  • Stephanie Dean on event management and incident response

    I asked around my ex-Amazon mates on twitter about good docs on incident response practices outside the “iron curtain”, and they pointed me at this blog (which I didn’t realise existed). Stephanie Dean was the front-line ops manager for Amazon for many years, over the time where they basically *fixed* their availability problems. She since moved on to Facebook, Demonware, and Twitter. She really knows her stuff and this blog is FULL of great details of how they ran (and still run) front-line ops teams in Amazon.

    (tags: ops incident-response outages event-management amazon stephanie-dean techops tos sev1)

  • RICON 2014: CRDTs

    Carlos Baquero presents several operation, state-based CRDTs for use in AP systems like Voldemort and Riak

    (tags: ap cap-theorem crdts ricon carlos-baquero data-structures distcomp)

  • Brownout: building more robust cloud applications

    Applications can saturate – i.e. become unable to serve users in a timely manner. Some users may experience high latencies, while others may not receive any service at all. The authors argue that it is better to downgrade the user experience and continue serving a larger number of clients with reasonable latency. “We define a cloud application as brownout compliant if it can gradually downgrade user experience to avoid saturation.” This is actually very reminiscent of circuit breakers, as described in Nygard’s ‘Release It!’ and popularized by Netflix. If you’re already designing with circuit breakers, you’ve probably got all the pieces you need to add brownout support to your application relatively easily. “Our work borrows from the concept of brownout in electrical grids. Brownouts are an intentional voltage drop often used to prevent blackouts through load reduction in case of emergency. In such a situation, incandescent light bulbs dim, hence originating the term.” “To lower the maintenance effort, brownouts should be automatically triggered. This enables cloud applications to rapidly and robustly avoid saturation due to unexpected environmental changes, lowering the burden on human operators.” This is really similar to the Circuit Breaker pattern — in fact it feels to me like a variation on that, driven by measured latencies of operations/requests. See also http://blog.acolyer.org/2014/10/27/improving-cloud-service-resilience-using-brownout-aware-load-balancing/ .

    (tags: circuit-breaker patterns brownout robustness reliability load latencies degradation)
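The brownout control loop described above can be sketched as a tiny latency-driven controller. This is an illustrative toy, not the paper's actual controller; the class and parameter names are invented:

```python
# Toy illustration of brownout: a controller that "dims" the user experience
# (the fraction of requests served the full, expensive response) based on
# measured latency. A simplification, not the paper's controller.
class BrownoutController:
    def __init__(self, target_latency, step=0.1):
        self.target = target_latency  # latency we are willing to tolerate, seconds
        self.step = step              # how aggressively to adjust
        self.dimmer = 1.0             # 1.0 = full experience for everyone

    def record(self, observed_latency):
        """Feed in a measured latency; returns the updated dimmer value."""
        if observed_latency > self.target:
            # Saturating: degrade optional content to shed load.
            self.dimmer = max(0.0, self.dimmer - self.step)
        else:
            # Healthy: gradually restore the full experience.
            self.dimmer = min(1.0, self.dimmer + self.step)
        return self.dimmer

# A request handler would then serve optional extras (recommendations,
# comments, personalization) only with probability `dimmer`: a circuit
# breaker that degrades gradually instead of tripping fully open.
```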

  • Photographs of Sellafield nuclear plant prompt fears over radioactive risk

    “Slow-motion Chernobyl”, as Greenpeace are calling it. You thought legacy code was a problem? Try legacy Magnox fuel rods.

    Previously unseen pictures of two storage ponds containing hundreds of highly radioactive fuel rods at the Sellafield nuclear plant show cracked concrete, seagulls bathing in the water and weeds growing around derelict machinery. But a spokesman for owners Sellafield Ltd said the 60-year-old ponds will not be cleaned up for decades, despite concern that they are in a dangerous state and could cause a large release of radioactive material if they are allowed to deteriorate further. “The concrete is in dreadful condition, degraded and fractured, and if the ponds drain, the Magnox fuel will ignite and that would lead to a massive release of radioactive material,” nuclear safety expert John Large told the Ecologist magazine. “I am very disturbed at the run-down condition of the structures and support services. In my opinion there is a significant risk that the system could fail.”

    (tags: energy environment nuclear uk sellafield magnox seagulls time long-now)

  • The man who made a game to change the world

    An interview with Richard Bartle, the creator of MUD, back in 1978.

    Perceiving the different ways in which players approached the game led Bartle to consider whether MMO players could be classified according to type. “A group of admins was having an argument about what people wanted out of a MUD in about 1990,” he recalls. “This began a 200-long email chain over a period of six months. Eventually I went through everybody’s answers and categorised them. I discovered there were four types of MMO player. I published some short versions of them then, when the journal of MUD research came out I wrote it up as a paper.” The so-called Bartle test, which classifies MMO players as Achievers, Explorers, Socialisers or Killers (or a mixture thereof) according to their play-style remains in widespread use today. Bartle believes that you need a healthy mix of all dominant types in order to maintain a successful MMO ecosystem. “If you have a game full of Achievers (players for whom advancement through a game is the primary goal) the people who arrive at the bottom level won’t continue to play because everyone is better than them,” he explains. “This removes the bottom tier and, over time, all of the bottom tiers leave through irritation. But if you have Socialisers in the mix they don’t care about levelling up and all of that. So the lowest Achievers can look down on the Socialisers and the Socialisers don’t care. If you’re just making the game for Achievers it will corrode from the bottom. All MMOs have this insulating layer, even if the developers don’t understand why it’s there.”

    (tags: mmo mud gaming history internet richard-bartle)

  • Testing fork time on AWS/Xen infrastructure

    Redis uses forking to perform persistence flushes, which means that once every 30 minutes it performs like crap (and kills the 99th percentile latency). Given this, various Redis people have been benchmarking fork() times on various Xen platforms, since Xen has a crappy fork() implementation

    (tags: fork xen redis bugs performance latency p99)
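The fork() cost being benchmarked can be measured in miniature like this. A toy sketch only: serious benchmarks (like the Redis ones) fork a process with a large resident set, since copying page tables dominates the cost:

```python
# Minimal measurement of fork() latency, in the spirit of the benchmarks
# above. POSIX-only; real benchmarks do this with a large heap allocated.
import os
import time

def time_fork():
    """Return wall-clock seconds one fork()+wait cycle takes in the parent."""
    t0 = time.perf_counter()
    pid = os.fork()
    if pid == 0:        # child: exit immediately
        os._exit(0)
    os.waitpid(pid, 0)  # parent: reap the child
    return time.perf_counter() - t0

# e.g. print the worst of 20 samples to approximate a tail latency:
# print(max(time_fork() for _ in range(20)))
```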

  • A Teenager Gets Grilled By Her Dad About Why She’s Not That Into Coding

    Jay Rosen interviews his 17-year-old daughter. It’s pretty eye-opening. Got to start them early!

    (tags: culture tech coding girls women feminism teenagers school jay-rosen stem)

Categories: FLOSS Project Planets


FSF Blogs: Friday Free Software Directory IRC meetup: October 31

GNU Planet! - Thu, 2014-10-30 16:42

Join the FSF and friends on Friday, October 31, from 2pm to 5pm EDT (18:00 to 21:00 UTC) to help improve the Free Software Directory by adding new entries and updating existing ones. We will be on IRC in the #fsf channel on freenode.


Tens of thousands of people visit directory.fsf.org each month to discover free software. Each entry in the Directory contains a wealth of useful information, from basic category and descriptions, to providing detailed info about version control, IRC channels, documentation, and licensing info that has been carefully checked by FSF staff and trained volunteers.


While the Free Software Directory has been and continues to be a great resource to the world over the past decade, it has the potential to be a resource of even greater value. But it needs your help!


If you are eager to help and you can't wait or are simply unable to make it onto IRC on Friday, our participation guide will provide you with all the information you need to get started on helping the Directory today!

Categories: FLOSS Project Planets


FSF Blogs: Interview with Jessica Tallon of PyPump

GNU Planet! - Thu, 2014-10-30 15:58

In this edition, we conducted an email-based interview with Jessica Tallon, the lead developer of PyPump, a simple but powerful and pythonic way of interfacing with the pump.io API. PyPump is licensed under the terms of the GNU General Public License version 3 or, at your option, any later version (GPLv3+).

What inspired you to create PyPump?

I began working on PyPump when Evan Prodromou launched the first pump.io servers. Although Status.net had existed before pump.io, I wasn't a user and the only social networks I used were centralized, proprietary ones which really clashed with my views on software freedom and the federated web. I wanted to be able to interact with pump without having to use a browser. The API was easy to understand, so I tried to see if I could put together a basic library.

How are people using it?

There are several interesting projects out there which use PyPump. With my day job as a GNU MediaGoblin developer, we're going to be using it as a way of communicating between servers as a part of our federation effort. A great use I've seen is PumpMigrate, which will migrate one pump.io account to another. Another little utility that I wrote over the course of a weekend is p, which was made to be an easy way of making a quick post, bulk uploading photos, or anything you can script with the shell.

What features do you think really sets PyPump apart from similar software?

One thing PyPump does particularly well is being pythonic. We've written PyPump to be as natural for the python developer to work with as possible. Hopefully, that will lower the bar of entry for developers, as they won't have to read through the pump.io API documentation or be intimately familiar with Activity Streams in order to write great applications that can interact with pump.io.

Why did you choose the GNU GPLv3+ as PyPump's license?

This is actually something I get asked a lot and something which I have spent a lot of time thinking about. PyPump is a library and most people expected me to release it under the GNU LGPLv3. The reason I went with the GNU GPLv3 is that I believe that all software, regardless of size, should be free - so that we can all learn, build, fix, and use the software in whatever way we see fit. GPLv3 gives everyone the protection against someone coming along and using all that great work and writing proprietary code against it.

How can users (technical or otherwise) help contribute to PyPump?

There is so much people can do to help. We would love to get help both on the library, as we're currently working towards our 0.6 release, as well as on documentation. With PyPump being a library, we want to make sure that we have accessible, good quality documentation so that the people who want to use PyPump can. Writing software that uses it is also a great way of contributing!

What's the next big thing for PyPump?

The next big thing is our 0.6 release, in which we're aiming to provide much better documentation, better storage interaction, and a much more stable API to write against. I think some of the most exciting things won't be what we add to PyPump, but, rather, what other developers will create with it. I'm really looking forward to it being used in more ways.

Enjoyed this interview? Check out our previous entry in this series featuring Alan Reiner of Bitcoin Armory.

Categories: FLOSS Project Planets


Mediacurrent: Why You Should Speak at Tech Conferences

Planet Drupal - Thu, 2014-10-30 15:50

The first time I spoke at a tech conference was about five years ago at the University of Southern California (USC), in Los Angeles. It was at an annual conference called Code Camp whose audience is mostly Microsoft developers. I didn’t know what to expect in that kind of setting. I selected a topic I was fairly comfortable with, Designing with CSS3. Not only was the topic well received but it quickly became the most popular session in the conference with over 140 attendees interested in it. Now I was really freaking out.

Categories: FLOSS Project Planets

Chromatic: Easily Upgrade Your Image Fields for Retina!

Planet Drupal - Thu, 2014-10-30 14:28

Drupal makes it so easy to add image fields to your content types. Fields in core for the win! With image styles in core, it’s as easy as ever to create multiple image sizes for display in various contexts (thumbnails, full, etc.). But what about providing hi-resolution versions of your rasterized images for retina displays? Out of the box, you don’t really have a lot of good options. You could simply upload high resolution versions and force your users, regardless of display type, to download massive file versions, but that’s not exactly the best for performance. You could use some custom field theming and roll your own implementation of the <picture> element, but browser support is basically null at this point. That won’t do. You could do what Apple does and force the browser to download 1x versions of your images, then use JavaScript to detect high resolution displays and force the browser to download all of the high resolution versions… I think you see my point.

What if you could create hi-resolution versions of these images without a ton of added filesize overhead? What if you could do this all within Drupal? No special coding, no uploading of multiple versions, no special field templates or unnecessary javascript. Just a basic Drupal image field with a special image style defined.

Here’s how you do it:
  1. Create your image field. (In most cases, you’ve probably already got this.)
  2. Download and install the HiRes Images module. This module allows you to create an image style at 2x the desired pixel dimensions. If your desired maximum image width is 720 CSS pixels, your output image would be saved at 1440px.
  3. Download and install the Image Style Quality module. This nifty module allows you to define specific image qualities on a per-image-style basis instead of using Drupal’s global image quality setting.
  4. Add a new image style (or alter an existing one).
  5. Add your normal image style presets, like scale, crop, etc. If you’re scaling, set your scale to be 2x your desired maximum output in pixels. So if you want an output of 720px, set your scale to 1440px.
  6. Add the “Hi-Res (x2)” effect. This will output your image element at half the scale amount above. So we get a max of 720px.
  7. Add the “Quality” effect and set it to something like 60%. This may take some experimenting to find a balance between image quality and file size. In my example, I went with 60% compression. This yielded an image that was still really sharp and a reasonable file size.
  8. Set your display mode to use this new (or altered) image style.
  9. Enjoy your beautiful, high-resolution, performant image fields!

Hard to believe this works, right? You’d think your retina version would look really crappy with that much compression, but it doesn’t. In fact, in some cases it will look just as sharp and be smaller than a 1x counterpart. See my screenshots below for proof:

Side-by-side comparison:

Network panel output:

So we end up with a high resolution version of our uploaded image that is actually smaller than the original version at 720px! Looks great on retina devices and doesn’t badly penalize users of standard definition displays. WIN!

For a detailed explanation of this technique in broader terms, see Retina Revolution by Daan Jobsis.

Categories: FLOSS Project Planets

PyCharm: JetBrains Debuts PyCharm Educational Edition

Planet Python - Thu, 2014-10-30 13:34

If you’ve ever wanted to learn Python programming, get ready to be blown away.

Today we’re launching the free and open source PyCharm Educational Edition for students and teachers. This easy-to-use yet powerful Python IDE includes special interactive educational functionality to help novice programmers learn the craft and turn professional quicker than ever before! It combines the easy learning curve of interactive coding platforms with the power of a real-world professional tool.

Why PyCharm Educational Edition?

We all know that computer programming studies are one of today’s major global trends, driven by open-access, non-credit education. Python has long been used for educational purposes and is now the most popular language used to teach programming for beginners. We decided to create this new educational IDE, because we at JetBrains PyCharm, being a part of the Python community, are committed to providing quality, professional, seamless solutions for learning programming with Python, keeping the needs of both novice programmers and educators in mind.

What is so special about PyCharm Educational Edition?

In designing this tool we have been inspired by Guido van Rossum, the creator of the Python programming language, who said:

“We believe that there should be no clear-cut distinction between tools used by professionals and tools used for education—just as professional writers use the same language and alphabet as their readers!” https://www.python.org/doc/essays/cp4e/

PyCharm is a professional tool recognized among professionals all around the globe. At some point it occurred to us that, with some work, its power could also be harnessed to serve educational purposes.

We analyzed educational trends and tools on the market carefully. To understand what should be improved in PyCharm and how to make it the best educational IDE possible, we polled hundreds of instructors from different universities all around the world.


We found out that there are two opposite approaches to learning programming. One is based on using interactive online educational platforms and editors, which are extremely easy to start with. Despite an easy learning curve, these are not real development tools, and once you get used to them it may be difficult to switch to a real development environment and develop something real. The other approach is centered around real code editors and IDEs. While advanced, these are often too complex for beginners. Instead of learning programming, you must invest considerable effort and time just in understanding how the tool works, before actually learning the essentials of programming.

PyCharm Educational Edition aims to combine these two worlds. We’ve made it easy to get started with and not intimidating, yet powerful enough to guide you all the way through to becoming a professional developer.

All the learning you need, for FREE

PyCharm Educational Edition is absolutely free and open-source. Novice programmers can download and use it for educational or any other purposes—for free. Instructors and course authors can use it to create, modify and share their own courses.

Included are learning essentials like an integrated Python console, Debugger and VCS, along with unique educational features such as “fill in the missing code” exercises, intelligent hints, checks, smart suggestions, code auto-completion, and much more.

So, what’s inside, besides the PyCharm Community Edition?

  • Special new Educational project type. From a student’s point of view, an Educational project is like an interactive course that includes tasks and files for editing, and a Check button that gives instant feedback and scores your assignment. With this type of project, teachers can create courses or assignments with lessons and tasks, create exercise code, define expected results, write tests that will work in the background. In particular, they can employ the “fill in the missing code” educational technique where you ask a student to insert the correct code in an already existing code sample.
  • A greatly simplified interface to make the learning curve as easy as possible. The advanced tools are hidden by default and may be activated as you progress.
  • On Windows, Python is installed together with PyCharm, with no additional installation required. Linux and Mac OS installers automatically detect a system interpreter. All you need to start learning is just to install PyCharm.

Possible Applications

PyCharm Educational Edition can be used in MOOCs, self-studying courses or traditional programming courses. In addition to going through interactive courses, you can also use normal Python projects and the integrated Python console, as well as the debugger, VCS, and everything else that PyCharm already offers.

What to do next?

Don’t wait any longer — download PyCharm Educational Edition for your platform and start learning Python programming today!

For more details and learning materials, visit the PyCharm Educational Edition website and check the Quick Start guide to get rolling. Or, for a quick visual overview, watch this introductory video:

Then, get involved:

Read our blog to stay tuned for news, updates and everything that goes on with PyCharm Educational Edition. And do give us feedback on how we’re doing.

Did you know?
JetBrains recently launched the Free Student License program. With this program any student or educator can use any JetBrains product for free!

Develop with pleasure!
JetBrains PyCharm Team

Categories: FLOSS Project Planets

Midwestern Mac, LLC: How to set complex string variables with Drush vset

Planet Drupal - Thu, 2014-10-30 13:26

I recently ran into an issue where drush vset was not setting a string variable (in this case, a time period that would be used in strtotime()) correctly:

# Didn't work:
$ drush vset custom_past_time '-1 day'
Unknown options: --0, --w, --e, --k.  See `drush help variable-set`      [error]
for available options. To suppress this error, add the option
--strict=0.

Using the --strict=0 option resulted in the variable being set to a value of "1".

After scratching my head a bit, trying different ways of escaping the string value, using single and double quotes, etc., I finally realized I could just use variable_set() with drush's php-eval command (shortcut ev):
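A command along those lines (a sketch, not the author's exact snippet; quoting may vary by shell) would be:

```shell
# Evaluate PHP directly, sidestepping vset's parsing of the leading
# dash in '-1 day' as command-line options:
drush php-eval "variable_set('custom_past_time', '-1 day');"

# Confirm the stored value:
drush vget custom_past_time
```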

Categories: FLOSS Project Planets

Chris Lamb: Are you building an internet fridge?

Planet Debian - Thu, 2014-10-30 13:00

Mikkel Rasmussen:

If you look at the idea of "The Kitchen of Tomorrow" as IKEA thought about it, the core idea is that cooking is slavery.

It's the idea that technology can free us from making food. It can do it for us. It can recognise who we are, we don't have to be tied to the kitchen all day, we don't have to think about it.

Now if you're an anthropologist, they would tell you that cooking is perhaps one of the most complicated things you can think about when it comes to the human condition. If you think about your own cooking habits they probably come from your childhood, the nation you're from, the region you're from. It takes a lot of skill to cook. It's not so easy.

And actually, it's quite fun to cook. There's also a lot of improvisation. I don't know if you ever tried to come home to a fridge and you just look into the fridge: oh, there's a carrot and some milk and some white wine and you figure it out. That's what cooking is like – it's a very human thing to do.

The physical version of your smart recipe site?


Therefore, if you think about it, having anything that automates this for you or decides for you or improvises for you is actually not doing anything to help you with what you want to do, which is that it's nice to cook.

More generally, if you make technology—for example—that has at its core the idea that cooking is slavery and that idea is wrong, then your technology will fail. Not because of the technology, but because it simply gets people wrong.

This happens all the time. You cannot swing a cat these days without hitting one of those refrigerator companies that make smart fridges. I don't know if you've ever seen them, like an "intelligent fridge". There's so many of them that there is actually a website called "Fuck your internet fridge" by a guy who tracks failed prototypes of intelligent fridges.

Why? Because the idea is wrong. Not the technology, but the idea about who we are - that we do not want the kitchen to be automated for us.

We want to cook. We want Japanese knives. We want complicated cooking. And so what we are saying here is not that technology is wrong as such. It's just you need to base it—especially when you are innovating really big ideas—on something that's a true human insight. And cooking as slavery is not a true human insight and therefore the prototypes will fail.

(I hereby nominate "internet fridge" as the term to describe products or ideas that—whilst technologically sound—are based on fundamentally flawed anthropology.)

Hearing "I hate X" and thinking that simply removing X will provide real value to your users is short-sighted, especially when you don't really understand why humans are doing X in the first place.

Categories: FLOSS Project Planets

Jonathan Brown: Update on Drupal / Bitcoin Payment Protocol (BIP 70) integration

Planet Drupal - Thu, 2014-10-30 12:03

BIP 70 describes a high-level payment system for Bitcoin. It uses Protocol Buffers and X.509 certificates for the following major improvements:

  • Human-readable payment destinations instead of Bitcoin addresses
  • Resistance to man-in-the-middle attacks
  • Payment received messages sent back to the wallet
  • Refund addresses

I compiled the BIP 70 Protocol Buffers definition file into PHP using ProtobufPHP.
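That compilation step would look roughly like this (a sketch, assuming ProtobufPHP's protoc-gen-php plugin is on the PATH; the output directory is mine and the exact flags may differ between versions):

```shell
# protoc discovers plugins named protoc-gen-<name> and exposes a --<name>_out flag.
# paymentrequest.proto is the definition file published with BIP 70.
protoc --php_out=./generated paymentrequest.proto
```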

I have implemented most of BIP 70 in the Coin Tools Drupal project. It contains a new Bitcoin payment entity class that contains all the specified fields in its base table. Bundles can be created to add additional fields to payments.

Payments can currently be created through an admin interface, although this would typically happen in an automated process on a real website.

When viewing an unfulfilled payment in the admin interface the QR code for the payment will be present. It decodes to a backward-compatible payment protocol URI.



Currently the module is unable to detect Bitcoin payments not sent using the payment protocol, i.e. the payment is sent to the address but the website is not notified. This will be quite easy to implement though.

For payments made using the new protocol, Coin Tools is able to complete the transaction and has been tested with both the original QT client and Andreas Schildbach's Android Wallet. Interestingly Andreas's wallet does not display the status message returned by the merchant.

The specification does not seem to have any method for the merchant to inform the app that the payment was not satisfactory, other than setting the human-readable status message (the wallet would not know there was a problem), or returning an HTTP error code (resulting in an unpleasant error message for the customer).

Coin Tools will check that the transactions provided by the wallet send enough bitcoins to the payment address. It then broadcasts the transactions via bitcoind. Currently Coin Tools is relying on bitcoind rejecting transactions that have not been signed correctly. This assumption needs to be verified.

When the payment protocol QR code is displayed, Coin Tools enables a small Javascript program to poll the website to determine if the payment has been made, reloading the page once this has happened. Ideally this would be implemented as a long-running AJAX request.

The X.509 certificate part of the payment protocol specification has not yet been implemented in Coin Tools. This is a critical component.

The implementation of the payment protocol in Coin Tools only permits a single Bitcoin address per payment. The specification does support having more than one, and in theory this could be used to increase payment anonymity, with each address only being spent into by a single output in a single transaction. In practice this is not so effective, as all the transactions would be broadcast simultaneously.

Coin Tools will also store a single refund address provided by the wallet making the payment. The wallet actually provides payment scripts, but Coin Tools will determine if the script is a standard payment and extract the address. Multiple refund addresses are also supported by the standard, but Coin Tools will only store one.

According to the specification the wallet can allow the customer to provide a note to the vendor. Coin Tools will store this note, however I do not know of any wallets that support this feature.

The HTTP responses for PaymentRequest and Payment need to be implemented as Symfony response handlers. Currently they are implemented in a simplistic manner, setting their own HTTP headers and calling exit().

It is currently only possible to make payments from the admin interface. A template needs to be provided so the payments can be made from elsewhere on a website, e.g. integration with Drupal Commerce.

For a standard ecommerce website that wants to accept bitcoins it may make more sense to use a provider such as BitPay or Coinbase. Accepting payments natively on a website means that a hacker could steal funds. One solution to this problem would be to use Hierarchical Deterministic Wallets so that private keys are only stored on backend systems.

For a project that is doing something more interesting than just accepting Bitcoin as a payment method and is already running bitcoind, it may be advantageous to have a native implementation of BIP 70 on the website rather than relying on a third-party provider.

No tests have yet been written for Coin Tools. It is essential that Payment and PaymentRequest routes are fully tested including the edge cases defined in the specification.

A few limitations of Drupal 8 have been encountered during the creation of this functionality. In Drupal 8 it is now possible to have fields in entity base tables. This is really great, but unfortunately when these fields are present in a view it is not possible to use their formatters. I discussed this with Daniel Wehner at Amsterdam and he didn't seem very optimistic about this being able to be fixed so some sort of workaround will need to be found as this functionality is critical to the module.

Date field is now in D8 core, but unfortunately it stores the date as a varchar in the database. This means that it is not possible to sort or filter on date - a major limitation. If core is not changed to use database-native date storage Coin Tools will have to use another date field.

The Payment Protocol functionality needs to be backported to Drupal 7 Coin Tools and integrated with Payment / Commerce.

Categories: FLOSS Project Planets

S1L: Selling Organic Groups with Drupal Commerce License OG

Planet Drupal - Thu, 2014-10-30 11:58

Selling Organic Groups with Drupal Commerce just got way more powerful. Actually it did so a while ago when Commerce License and Commerce License OG were created.

About 18 months ago I wrote about how you could sell access to Organic Groups with Drupal Commerce with a configuration of fields and Rules.

With Commerce License and Commerce License OG, selling access to Organic Groups is as easy to set up as the 'old' field+Rules way (if not easier), and you'll get great new functionality for revoking membership access.

Step by Step instructions

You can find the step-by-step instructions on how to sell your Organic Groups with Drupal Commerce based on Commerce Licenses at https://www.drupal.org/node/2366023. Just follow the 8 easy steps and you'll have it set up in no time.

How does it work?

Basically you'll be selling licenses to your Organic Group (content). These licenses can expire or last forever. You can configure them the way you see fit. The license determines whether a user has access to the Organic Group.

Commerce License is a framework for selling access to local or remote resources.

Read more about Commerce Licenses at https://www.drupal.org/node/2039687 under Basic Concepts -> License.

Show me

If you follow the 8 steps in the instruction at https://www.drupal.org/node/2366023 you'll see that you can easily configure the products like this:

and users on the site will be given licenses like this

 

From Dev to Stable

The commerce_license_og module is currently in a dev state. It works fine for the most common use case: users buying access to your site. However, make sure it works the way you want before you decide to go 'all in' implementing it on a production site.

Currently there seems to be an issue with granting anonymous users access to Organic Groups (https://www.drupal.org/node/2366155). 

Please add your input to https://www.drupal.org/project/issues/commerce_license_og to help developing this module to a stable release.

Category: Drupal Planet Drupal Commerce Drupal Organic Groups
Categories: FLOSS Project Planets

Drupal Watchdog: RESTfulness and Web Services

Planet Drupal - Thu, 2014-10-30 11:34
Feature

One of the most anticipated features in Drupal 8 is the integration of RESTful Web Services in Drupal core. Drupal devs are looking forward to being able to do things with core which they couldn't before, such as:

  • Offering their site’s data up in an API for others to use and contribute to;
  • Building better user interactions by adding and updating content in place instead of a full page submission;
  • Developing iPhone and Android apps that can serve and manage content hosted in a Drupal site.

But what are RESTful Web Services? In this article, I will walk you through the different conceptions of what is RESTful and explain how the new modules in Drupal core address these different concepts.

A Quick History of REST

Many developers have become aware of REST due to the rising popularity of APIs. These APIs enable developers to build on top of services such as Twitter and Netflix, and many of these APIs call themselves RESTful. Yet these APIs often work in extremely different ways. This is because there are many definitions of what it means to be RESTful, some more orthodox and others more popular.

The term REST was coined by Roy Fielding, one of the people working on one of the earliest Web standards, HTTP. He coined the term as a description of the general architecture of the Web and systems like it. Since the time he laid out the constraints of a RESTful system in his thesis, some parts have caught hold in developer communities, while others have only found small – but vocal – communities of advocates.

For a good explanation of the different levels of RESTful-ness, see Martin Fowler’s explanation of the Richardson Maturity Model.

What is RESTful?

So what are the requirements for RESTfulness?

Categories: FLOSS Project Planets

Machinalis: IEPY 0.9 was released

Planet Python - Thu, 2014-10-30 11:28

We're happy to announce that IEPY 0.9 was released!! It's an open source tool for Information Extraction focused on Relation Extraction.
It’s aimed at:
  • users needing to perform Information Extraction on a large dataset.
  • scientists wanting to experiment with new IE algorithms.

To give an example of Relation Extraction, if we are trying to find a birth date in:

“John von Neumann (December 28, 1903 – February 8, 1957) was a Hungarian and American pure and applied mathematician, physicist, inventor and polymath.”

Then IEPY’s task is to identify “John von Neumann” and “December 28, 1903” as the subject and object entities of the “was born in” relation.

Features
Documentation: http://iepy.readthedocs.org/en/0.9.0/
Github: https://github.com/machinalis/iepy
PyPi: https://pypi.python.org/pypi/iepy
Twitter: @machinalis
Categories: FLOSS Project Planets

LightSky: Drupal Press Shouldn't be Bad

Planet Drupal - Thu, 2014-10-30 11:09

This has been an interesting couple of weeks for Drupal, and the platform as a whole has received a lot of press.  With the release of Drupal 7.32, a major (I use this term lightly) security vulnerability was corrected.  Drupal then announced this week that, despite there being no significant evidence of a large number of sites being attacked, any site that wasn't patched within 7 hours of the patch release should consider itself compromised.  Hosts were reporting automated attacks beginning only hours after the patch announcement.  The vulnerability was unprecedented for the Drupal community, but really it shows why Drupal is great, and isn't a black mark on Drupal in our eyes.

First let's look at the announcement by the Drupal Security Team this week, where they say that sites were beginning to be attacked within hours of the patch announcement.  The biggest thing to take from this announcement is the words Drupal Security Team.  Yep, Drupal has one.  I did a search this morning using the following criteria "<popular CMS> security team", and I found the results quite interesting.  When I added Drupal as the "popular CMS" I got a page full of Drupal Security Team information, policies and procedures.  For every other CMS I tried, I got nothing about a team of security people, but a lot of information stating that they are secure and if you find a problem here is how to report it.  Drupal focuses on security, and the Security Team at Drupal is a prime example of how important this really is to the Drupal community.

The second thing to take away from this is that the patch really notified the world that there was a vulnerability, and there is no way to stop this from happening.  We didn't have any mass attacks on Drupal sites prior to this release, and the damage here after the release seems to be primarily related to those who chose not to apply the updates as they were instructed to.  This really emphasizes the importance of applying available updates.  Sites where the update was applied quickly likely did not experience any negative effects of the vulnerability, and if they did it was very limited.  Updates to Drupal are certainly optional, but they are necessary to avoid headaches down the road, and this is proof of exactly why.  

So don't be discouraged by all of the bad-looking press related to this.  I still stand by the idea that Drupal is the most secure platform available, but it is only as secure as you allow it to be.  If you aren't applying the updates as they become available, you are likely putting yourself at risk of having your site compromised.  The big difference I see between Drupal and the other CMS options is that Drupal works diligently to fix module and core vulnerabilities as a habit.  Many others aren't as diligent.

For more tips like these, follow us on social media or subscribe for free to our RSS feed and newsletter. You can also contact us directly or request a consultation.
Categories: FLOSS Project Planets

Bad Voltage Season 1 Episode 28: Everything is Orange

LinuxPlanet - Thu, 2014-10-30 10:27

Bryan Lunduke, Jono Bacon, Stuart Langridge and myself present Bad Voltage, in which we celebrate our completed first year of the show by not actually doing anything celebratory. We also discuss:

  • Debian agreed to ship systemd as default and now people are talking about forking the whole distribution. The question is: at what point is it right to fork a distro? (2.45)
  • Bryan reviews ChromeOS on the Chromebook Pixel and explains how someone who doesn’t like requiring an internet connection deals with a laptop which does (16.27)
  • Wrong in 60 Seconds: the first of a new regular feature where one of us steps onto the soapbox for one minute. For this inaugural Wrong in 60 Seconds, Stuart talks about choice (32.58)
  • We speak to Guy Martin, senior open source strategist in Samsung’s open source group, about what open source means to Samsung and what it’s like influencing things inside such a huge organisation (34.32)
  • Technology is increasingly being used to help connect people after recent natural disasters, or to alert you to upcoming disasters and extreme weather conditions. We look at the existing approaches and suggest some new ones. (50.59)

Listen to 1×28: Everything is Orange

As mentioned here, Bad Voltage is a new project I’m proud to be a part of. From the Bad Voltage site: Every two weeks Bad Voltage delivers an amusing take on technology, Open Source, politics, music, and anything else we think is interesting, as well as interviews and reviews. Do note that Bad Voltage is in no way related to LinuxQuestions.org, and unlike LQ it will be decidedly NSFW. That said, head over to the Bad Voltage website, take a listen and let us know what you think.

–jeremy


Categories: FLOSS Project Planets

Matthew Garrett: Hacker News metrics (first rough approach)

Planet Debian - Thu, 2014-10-30 10:19
I'm not a huge fan of Hacker News[1]. My impression continues to be that it ends up promoting stories that align with the Silicon Valley narrative of meritocracy, technology will fix everything, regulation is the cancer killing agile startups, and discouraging stories that suggest that the world of technology is, broadly speaking, awful and we should all be ashamed of ourselves.

But as a good data-driven person[2], wouldn't it be nice to have numbers rather than just handwaving? In the absence of a good public dataset, I scraped Hacker Slide to get just over two months of data in the form of hourly snapshots of stories, their age, their score and their position. I then applied a trivial test:
  1. If the story is younger than any other story
  2. and the story has a higher score than that other story
  3. and the story has a worse ranking than that other story
  4. and at least one of these two stories is on the front page
then the story is considered to have been penalised.

(note: "penalised" can have several meanings. It may be due to explicit flagging, or it may be due to an automated system deciding that the story is controversial or appears to be supported by a voting ring. There may be other reasons. I haven't attempted to separate them, because for my purposes it doesn't matter. The algorithm is discussed here.)
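As a toy illustration (the column layout and sample rows are mine, not the scraped dataset), the pairwise test above can be run over a single snapshot with awk:

```shell
# Columns: id, age_hours, score, rank, on_front_page (1/0).
# Flag a story when some other story is older, has a lower score and a
# better rank, and at least one of the pair is on the front page.
cat > snapshot.txt <<'EOF'
a 2 150 5 1
b 6 90 3 1
c 1 40 40 0
EOF

awk '
{ id[NR]=$1; age[NR]=$2; score[NR]=$3; rank[NR]=$4; front[NR]=$5 }
END {
  for (i = 1; i <= NR; i++)
    for (j = 1; j <= NR; j++)
      if (i != j && age[i] < age[j] && score[i] > score[j] &&
          rank[i] > rank[j] && (front[i] || front[j])) {
        print id[i], "penalised (vs", id[j] ")"
        break
      }
}' snapshot.txt > penalised.txt

cat penalised.txt
```

Here story "a" is younger and higher-scored than "b" yet ranked below it, so it is the only one flagged.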

Now, ideally I'd classify my dataset based on manual analysis and classification of stories, but I'm lazy (see [2]) and so just tried some keyword analysis:
Keyword    Penalised  Unpenalised
Women      13         4
Harass     2          0
Female     5          1
Intel      2          3
x86        3          4
ARM        3          4
Airplane   1          2
Startup    46         26
A few things to note:
  1. Lots of stories are penalised. Of the front page stories in my dataset, I count 3240 stories that have some kind of penalty applied, against 2848 that don't. The default seems to be that some kind of detection will kick in.
  2. Stories containing keywords that suggest they refer to issues around social justice appear more likely to be penalised than stories that refer to technical matters
  3. There are other topics that are also disproportionately likely to be penalised. That's interesting, but not really relevant - I'm not necessarily arguing that social issues are penalised out of an active desire to make them go away, merely that the existing ranking system tends to result in it happening anyway.

This clearly isn't an especially rigorous analysis, and in future I hope to do a better job. But for now the evidence appears consistent with my innate prejudice - the Hacker News ranking algorithm tends to penalise stories that address social issues. An interesting next step would be to attempt to infer whether the reasons for the penalties are similar between different categories of penalised stories[3], but I'm not sure how practical that is with the publicly available data.

(Raw data is here, penalised stories are here, unpenalised stories are here)


[1] Moving to San Francisco has resulted in it making more sense, but really that just makes me even more depressed.
[2] Ha ha like fuck my PhD's in biology
[3] Perhaps stories about startups tend to get penalised because of voter ring detection from people trying to promote their startup, while stories about social issues tend to get penalised because of controversy detection?

comments
Categories: FLOSS Project Planets

Matt Raible: Getting Started with JHipster on OS X

Planet Apache - Thu, 2014-10-30 09:30

Last week I was tasked with developing a quick prototype that used AngularJS for its client and Spring MVC for its server. A colleague developed the same application using Backbone.js and Spring MVC. At first, I considered using my boot-ionic project as a starting point. Then I realized I didn't need to develop a native mobile app, but rather a responsive web app.

My colleague mentioned he was going to use RESThub as his starting point, so I figured I'd use JHipster as mine. We allocated a day to get our environments setup with the tools we needed, then timeboxed our first feature spike to four hours.

My first experience with JHipster failed the 10-minute test. I spent a lot of time flailing about with various "npm" and "yo" commands, getting permission issues along the way. After getting things to work with some sudo action, I figured I'd try its Docker development environment. This experience was no better.

JHipster seems like a nice project, so I figured I'd try to find the causes of my issues. This article is designed to save you the pain I had. If you'd rather just see the steps to get up and running quickly, skip to the summary.

The "npm" and "yo" issues I had seemed to be caused by a bad node/npm installation. To fix this, I removed node and installed nvm. Here are the commands I needed to remove node and npm:

sudo rm -rf /usr/local/lib/node_modules
sudo rm -rf /usr/local/include/node
sudo rm /usr/local/bin/node
sudo rm -rf /usr/local/bin/npm
sudo rm /usr/local/share/man/man1/node.1
sudo rm -rf /usr/local/lib/dtrace/node.d
sudo rm -rf ~/.npm

Next, I ran "brew doctor" to make sure Homebrew was still happy. It told me some things were broken:

$ brew doctor
Warning: Broken symlinks were found. Remove them with `brew prune`:
/usr/local/bin/yo
/usr/local/bin/ionic
/usr/local/bin/grunt
/usr/local/bin/bower

I ran brew update && brew prune, followed by brew install nvm. Next, I added the following to my ~/.profile:

source $(brew --prefix nvm)/nvm.sh

To install the latest version of node, I ran the commands below and set the latest version as the default:

nvm ls-remote
nvm install v0.11.13
nvm alias default v0.11.13

Once I had a fresh version of Node.js, I was able to run JHipster's local installation instructions.

npm install -g yo
npm install -g generator-jhipster

Then I created my project:

yo jhipster

I was disappointed to find this created all the project files in my current directory, rather than in a subdirectory. I'd recommend you do the following instead:

mkdir ~/projectname && cd ~/projectname && yo jhipster

Before creating your project, JHipster asks you a number of questions. To see what they are, see its documentation on creating an application. Two things to be aware of:

In other words, I'd recommend using Java 7 + (cookie-based authentication with websockets) or (oauth2 authentication w/o websockets).

After creating my project, I was able to run it using "mvn spring-boot:run" and view it at http://localhost:8080. To get hot-reloading for the client, I ran "grunt server" and opened my browser to http://localhost:9000.

JHipster + Docker on OS X

I had no luck getting the Docker instructions to work initially. I spent a couple hours on it, then gave up. A couple of days ago, I decided to give it another good ol' college-try. To make sure I figured out everything from scratch, I started by removing Docker.

I re-installed Docker and pulled the JHipster image using the following:

sudo docker pull jdubois/jhipster-docker

The error I got from this was the following:

2014/09/05 19:43:38 Post http:///var/run/docker.sock/images/create?fromImage=jdubois%2Fjhipster-docker&tag=: dial unix /var/run/docker.sock: no such file or directory

After doing some research, I learned I needed to run boot2docker init first. Next I ran boot2docker up to start the Docker daemon. Then I copied/pasted "export DOCKER_HOST=tcp://192.168.59.103:2375" into my console and tried to run docker pull again.

It failed with the same error. The solution was simpler than you might think: don't use sudo.

$ docker pull jdubois/jhipster-docker
Pulling repository jdubois/jhipster-docker
01bdc74025db: Pulling dependent layers
511136ea3c5a: Download complete
...

The next command that JHipster's documentation recommends is to run the Docker image, forward ports and share folders. When you run it, the terminal seems to hang and trying to ssh into it doesn't work. Others have recently reported a similar issue. I discovered the hanging is caused by a missing "-d" parameter and ssh doesn't work because you need to add a portmap to the VM to expose the port to your host. You can fix this by running the following:

boot2docker down
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containerssh,tcp,,4022,,4022"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containertomcat,tcp,,8080,,8080"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containergruntserver,tcp,,9000,,9000"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "containergruntreload,tcp,,35729,,35729"
boot2docker start

After making these changes, I was able to start the image and ssh into it.

docker run -d -v ~/jhipster:/jhipster -p 8080:8080 -p 9000:9000 -p 35729:35729 -p 4022:22 -t jdubois/jhipster-docker
ssh -p 4022 jhipster@localhost

I tried creating a new project within the VM (cd /jhipster && yo jhipster), but it failed with the following error:

/usr/lib/node_modules/generator-jhipster/node_modules/yeoman-generator/node_modules/mkdirp/index.js:89
    throw err0;
    ^
Error: EACCES, permission denied '/jhipster/src'

The fix was giving the "jhipster" user ownership of the directory.

sudo chown jhipster /jhipster

After doing this, I was able to generate an app and run it using "mvn spring-boot:run" and access it from my Mac at http://localhost:8080. I was also able to run "grunt server" and see it at http://localhost:9000.

However, I was puzzled to see that there was nothing in my ~/jhipster directory. After doing some searching, I found that the docker run -v /host/path:/container/path doesn't work on OS X.

David Gageot's A Better Boot2Docker on OSX led me to svendowideit/samba, which solved this problem. The specifics are documented in boot2docker's folder sharing section.

I shutdown my docker container by running "docker ps", grabbing the first two characters of the id and then running:

docker stop [2chars]

I started the JHipster container without the -v parameter, used "docker ps" to find its name (backstabbing_galileo in this case), then used that to add samba support.

docker run -d -p 8080:8080 -p 9000:9000 -p 35729:35729 -p 4022:22 -t jdubois/jhipster-docker
docker run --rm -v /usr/local/bin/docker:/docker -v /var/run/docker.sock:/docker.sock svendowideit/samba backstabbing_galileo

Then I was able to connect using Finder > Go > Connect to Server, using the following for the server address:

cifs://192.168.59.103/jhipster

To make this volume appear in my regular development area, I created a symlink:

ln -s /Volumes/jhipster ~/dev/jhipster

After doing this, all the files were marked as read-only. To fix, I ran "chmod -R 777 ." in the directory on the server. I noticed that this also worked if I ran it from my Mac's terminal, but it took quite a while to traverse all the files. I noticed a similar delay when loading the project into IntelliJ.

Summary

Phew! That's a lot of information that can be condensed down into four JHipster + Docker on OS X tips.

  1. Make sure your npm installation doesn't require sudo rights. If it does, reinstall using nvm.
  2. Add portmaps to your VM to expose ports 4022, 8080, 9000 and 35729 to your host.
  3. Change ownership on the /jhipster in the Docker image: sudo chown jhipster /jhipster.
  4. Use svendowideit/samba to share your VM's directories with OS X.
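Condensed into one sequence, the four tips look roughly like this (a sketch only, stitched together from the commands above; the VM name "boot2docker-vm", the port-map names and the boot2docker IP may differ on your machine):

```shell
# Hedged sketch of the JHipster + Docker setup on OS X.
# Assumes Homebrew, VirtualBox, boot2docker and a sudo-free npm/nvm install.
boot2docker init && boot2docker down

# Tip 2: expose ssh, Tomcat and grunt ports from the VM to the host.
for port in 4022 8080 9000 35729; do
  VBoxManage modifyvm "boot2docker-vm" \
    --natpf1 "container${port},tcp,,${port},,${port}"
done
boot2docker up

# Tip 1 corollary: no sudo when talking to the Docker daemon.
docker pull jdubois/jhipster-docker
docker run -d -p 8080:8080 -p 9000:9000 -p 35729:35729 -p 4022:22 \
  -t jdubois/jhipster-docker

# Tip 3: give the jhipster user ownership of /jhipster inside the container.
ssh -p 4022 jhipster@localhost 'sudo chown jhipster /jhipster'

# Tip 4: share the VM's directories with OS X via the samba helper image
# (pass the running container's name, as described above).
```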
Categories: FLOSS Project Planets
Syndicate content