Planet Apache

Updated: 1 hour 56 min ago

Edward J. Yoon: App Store keeps trying to update iMovie

Mon, 2014-08-18 23:12

If Spotlight is disabled, the App Store fails to register updates and keeps attempting to update the same apps continuously. To fix this problem, enable Spotlight and re-index the Applications folder [1].

% sudo launchctl load -w /System/Library/LaunchDaemons/com.apple.metadata.mds.plist
1. http://support.apple.com/kb/ht2409
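To force a re-index of the Applications folder once Spotlight is back on, one approach (a sketch using the standard OS X Spotlight command-line tools, not part of the original note) is:

% sudo mdutil -i on /
% mdimport /Applications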
Categories: FLOSS Project Planets

Chris Hostetter: Important Security Notice: Parsing Untrusted Microsoft Office Files With “Solr Cell”

Mon, 2014-08-18 19:17

The Lucene PMC has issued an important security announcement regarding how some Solr users may be vulnerable to recently fixed exploits (CVE-2014-3529 and CVE-2014-3574) in Apache POI, an open source library for parsing Microsoft file formats.

These exploits may impact any Solr users who enable the ExtractingRequestHandler (aka: “Solr Cell“) to parse files from untrusted sources. A maliciously crafted OpenXML file could consume excessive computing resources resulting in a DoS attack, or expose sensitive details in files accessible to the effective runtime user of the Solr server.

“Hot Fix” instructions to upgrade the affected Apache POI Jar files in Solr 4.8.0, 4.8.1, and 4.9.0 have been posted on the Solr website. A new version of Solr will be released in the next few weeks including the fixed jars as well.

Full details from the Lucene PMC announcement email

Date: Tue, 19 Aug 2014 01:33:55 +0200
Subject: [ANNOUNCE] [SECURITY] Recommendation to update Apache POI in Apache Solr 4.8.0, 4.8.1, and 4.9.0 installations

Hallo Apache Solr Users,

the Apache Lucene PMC wants to make the users of Solr aware of the following issue: Apache Solr versions 4.8.0, 4.8.1, 4.9.0 bundle Apache POI 3.10-beta2 with its binary release tarball. This version (and all previous ones) of Apache POI are vulnerable to the following issues:

= CVE-2014-3529: XML External Entity (XXE) problem in Apache POI's OpenXML parser =
Type: Information disclosure
Description: Apache POI uses Java's XML components to parse OpenXML files produced by Microsoft Office products (DOCX, XLSX, PPTX, ...). Applications that accept such files from end-users are vulnerable to XML External Entity (XXE) attacks, which allows remote attackers to bypass security restrictions and read arbitrary files via a crafted OpenXML document that provides an XML external entity declaration in conjunction with an entity reference.

= CVE-2014-3574: XML Entity Expansion (XEE) problem in Apache POI's OpenXML parser =
Type: Denial of service
Description: Apache POI uses Java's XML components and Apache Xmlbeans to parse OpenXML files produced by Microsoft Office products (DOCX, XLSX, PPTX, ...). Applications that accept such files from end-users are vulnerable to XML Entity Expansion (XEE) attacks ("XML bombs"), which allows remote hackers to consume large amounts of CPU resources.

The Apache POI PMC released a bugfix version (3.10.1) today.

Solr users are affected by these issues, if they enable the "Apache Solr Content Extraction Library (Solr Cell)" contrib module from the folder "contrib/extraction" of the release tarball. Users of Apache Solr are strongly advised to keep the module disabled if they don't use it. Alternatively, users of Apache Solr 4.8.0, 4.8.1, or 4.9.0 can update the affected libraries by replacing the vulnerable JAR files in the distribution folder. Users of previous versions have to update their Solr release first, patching older versions is impossible.

To replace the vulnerable JAR files follow these steps:

- Download the Apache POI 3.10.1 binary release: http://poi.apache.org/download.html#POI-3.10.1
- Unzip the archive
- Delete the following files in your "solr-4.X.X/contrib/extraction/lib" folder:
  # poi-3.10-beta2.jar
  # poi-ooxml-3.10-beta2.jar
  # poi-ooxml-schemas-3.10-beta2.jar
  # poi-scratchpad-3.10-beta2.jar
  # xmlbeans-2.3.0.jar
- Copy the following files from the base folder of the Apache POI distribution to the "solr-4.X.X/contrib/extraction/lib" folder:
  # poi-3.10.1-20140818.jar
  # poi-ooxml-3.10.1-20140818.jar
  # poi-ooxml-schemas-3.10.1-20140818.jar
  # poi-scratchpad-3.10.1-20140818.jar
- Copy "xmlbeans-2.6.0.jar" from POI's "ooxml-lib/" folder to the "solr-4.X.X/contrib/extraction/lib" folder.
- Verify that the "solr-4.X.X/contrib/extraction/lib" no longer contains any files with version number "3.10-beta2".
- Verify that the folder contains one xmlbeans JAR file with version 2.6.0.

If you just want to disable extraction of Microsoft Office documents, delete the files above and don't replace them. "Solr Cell" will automatically detect this and disable Microsoft Office document extraction.

Coming versions of Apache Solr will have the updated libraries bundled.

Happy Searching and Extracting,
The Apache Lucene Developers

PS: Thanks to Stefan Kopf, Mike Boufford, and Christian Schneider for reporting these issues!
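Condensed into shell form, the hot fix above amounts to roughly the following sketch (the POI unpack location is a placeholder; the JAR names are the ones listed in the announcement):

$ cd solr-4.X.X/contrib/extraction/lib
$ rm poi-3.10-beta2.jar poi-ooxml-3.10-beta2.jar poi-ooxml-schemas-3.10-beta2.jar \
     poi-scratchpad-3.10-beta2.jar xmlbeans-2.3.0.jar
$ cp /path/to/poi-3.10.1/poi-3.10.1-20140818.jar \
     /path/to/poi-3.10.1/poi-ooxml-3.10.1-20140818.jar \
     /path/to/poi-3.10.1/poi-ooxml-schemas-3.10.1-20140818.jar \
     /path/to/poi-3.10.1/poi-scratchpad-3.10.1-20140818.jar .
$ cp /path/to/poi-3.10.1/ooxml-lib/xmlbeans-2.6.0.jar .
$ ls | grep 3.10-beta2   # should print nothing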
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-08-18

Mon, 2014-08-18 18:58
  • Microservices – Not a free lunch! – High Scalability

    Some good reasons not to adopt microservices blindly. Testability and distributed-systems complexity are my biggest fears

    (tags: microservices soa devops architecture testing distcomp)

  • Richard Clayton – Failing at Microservices

    Solid warts-and-all confessional blogpost about a team failing to implement a microservices architecture. I’d put most of the blame on insufficient infrastructure to support them (at a code level), inter-personal team problems, and inexperience with large-scale complex multi-service production deployment and the work it was going to require

    (tags: microservices devops collaboration architecture fail team deployment soa)

  • Box Tech Blog » A Tale of Postmortems

    How Box introduced COE-style dev/ops outage postmortems, and got them working. This PIE metric sounds really useful to head off the dreaded “it’ll all have to come out missus” action item:

    The picture was getting clearer, and we decided to look into individual postmortems and action items and see what was missing. As it was, action items were wasting away with no owners. Digging deeper, we noticed that many action items entailed massive refactorings or vague requirements like “make system X better” (i.e. tasks that realistically were unlikely to be addressed). At a higher level, postmortem discussions often devolved into theoretical debates without a clear outcome. We needed a way to lower and focus the postmortem bar and a better way to categorize our action items and our technical debt. Out of this need, PIE (“Probability of recurrence * Impact of recurrence * Ease of addressing”) was born. By ranking each factor from 1 (“low”) to 5 (“high”), PIE provided us with two critical improvements:

    1. A way to police our postmortems discussions. I.e. a low probability, low impact, hard to implement solution was unlikely to get prioritized and was better suited to a discussion outside the context of the postmortem. Using this ranking helped deflect almost all theoretical discussions.

    2. A straightforward way to prioritize our action items.

    What’s better is that once we embraced PIE, we also applied it to existing tech debt work. This was critical because we could now prioritize postmortem action items alongside existing work. Postmortem action items became part of normal operations just like any other high-priority work.

    (tags: postmortems action-items outages ops devops pie metrics ranking refactoring prioritisation tech-debt)

  • NTP’s days are numbered for consumer devices

    An accurate clock is required to negotiate SSL/TLS, so clock sync is important for internet-of-things usage. But:

    Unfortunately for us, the traditional and most widespread method for clock synchronisation (NTP) has been caught up in a DDoS issue which has recently caused some ISPs to start blocking all NTP communication. [....] Because the DDoS attacks are so widespread, and the lack of obvious commercial pressure to fix the issue, it’s possible that the days of using NTP as a mechanism for setting clocks may well be numbered. Luckily for us there is a small but growing project that replaces it. tlsdate was started by Jacob Appelbaum of the Tor project in 2012, making use of the SSL handshake in order to extract time from a remote server, and its usage is on the rise. [....] Since we started encountering these problems, we’ve incorporated tlsdate into an over-the-air update, and have successfully started using this in situations where NTP is blocked.

    (tags: tlsdate ntp clocks time sync iot via:gwire ddos isps internet protocols security)

  • Cloudwash – Creating the Technical Prototype

    This is a lovely demo of integrating modern IoT connectivity functionality (remote app control, etc.) with a washing machine using Bergcloud’s hardware and backend, and a little logic-analyzer reverse engineering.

    (tags: arduino diy washing-machines iot bergcloud hacking reversing logic-analyzers hardware)

  • Systemd: Harbinger of the Linux apocalypse

    While there are many defensible aspects of Systemd, other aspects boggle the mind. Not the least of these was that, as of a few months ago, trying to debug the kernel from the boot line would cause the system to crash. This was because of Systemd’s voracious logging and the fact that Systemd responds to the “debug” flag on the kernel boot line — a flag meant for the kernel, not anything else. That, straight up, is a bug. However, the Systemd developers didn’t see it that way and actively fought with those experiencing the problem. Add the fact that one of the Systemd developers was banned by Linus Torvalds for poor attitude and bad design and another was responsible for causing significant issues with Linux audio support, but blamed the problem on everything else but his software, and you have a bad situation on your hands. There’s no shortage of egos in the open source development world. There’s no shortage of new ideas and veteran developers and administrators pooh-poohing something new simply because it’s new. But there are also 45 years of history behind Unix and extremely good reasons it’s still flourishing. Tools designed like Systemd do not fit the Linux mold, to their own detriment. Systemd’s design has more in common with Windows than with Unix — down to the binary logging. The link re systemd consuming the “debug” kernel boot arg is a canonical example of inflexible coders refusing to fix their own bugs. (via Jason Dixon)

    (tags: systemd linux red-hat egos linus-torvalds unix init booting debugging logging design software via:obfuscurity)

  • Inside a Chinese Bitcoin Mine

    The mining operation resides on an old, repurposed factory floor, and contains 2500 machines hashing away at 230 Gh/s, each. (That’s 230 billion calculations per second, per unit). [...] The operators told me that the power bill of this specific operation is in excess of ¥400,000 per month [..] about $60,000 USD.

    (tags: currency china economics bitcoin power environment green mining datacenters)

  • Moving Big Data into the Cloud with Tsunami UDP – AWS Big Data Blog

    Pretty serious speedup. 81 MB/sec with Tsunami UDP, compared to 9 MB/sec with plain old scp. Probably kills internet performance for everyone else though!

    (tags: tsunami-udp udp scp copying transfers internet long-distance performance speed)

  • The “sidecar” pattern

    Ha, great name. We use this (in the form of Smartstack).

    For what it is worth, we faced a similar challenge in earlier services (mostly due to existing C/C++ applications) and we created what was called a “sidecar”.  By sidecar, what I mean is a second process on each node/instance that did Cloud Service Fabric operations on behalf of the main process (the side-managed process).  Unfortunately those sidecars all went off and created one-offs for their particular service.  In this post, I’ll describe a more general sidecar that doesn’t force users to have these one-offs. Sidenote:  For those not familiar with sidecars, think of the motorcycle sidecar below.  Snoopy would be the main process with Woodstock being the sidecar process.  The main work on the instance would be the motorcycle (say serving your users’ REST requests).  The operational control is the sidecar (say serving health checks and management plane requests of the operational platform).

    (tags: netflix sidecars architecture patterns smartstack netflixoss microservices soa)

Categories: FLOSS Project Planets

James Duncan: A brief look at texting and the internet in film

Mon, 2014-08-18 17:00

Well into the digital age, film and television are still fairly bad at sorting out how to depict the world we now live in. Sherlock and House of Cards do better than most. Tony Zhou dives in and dissects text bubbles, oversize phone fonts, and more.

via @ia via permalink
Categories: FLOSS Project Planets

Carlos Sanchez: Building Docker images with Puppet

Mon, 2014-08-18 10:15

Everybody should be building Docker images! But what if you don’t want to write all those shell scripts? A Dockerfile is basically a bunch of shell commands in RUN declarations. Or what if you are already using some Puppet modules to build VMs?

It is easy enough to build a new Docker image from Puppet manifests. For instance, I have built this Jenkins slave Docker image, so here are the steps.

The Devops Israel team has built a number of Docker images on CentOS with Puppet preinstalled, so that is a good start.

FROM devopsil/puppet:3.5.1

Otherwise you can just install Puppet in any bare image using the normal installation instructions. Something to take into account is that Docker base images are quite minimal and may not have some needed packages installed. In this case the centos6 image didn’t have tar installed and some things failed to run. In some CentOS images the centosplus repo needs to be enabled for the installation to succeed.

FROM centos:centos6
RUN rpm --import https://yum.puppetlabs.com/RPM-GPG-KEY-puppetlabs && \
    rpm -ivh http://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm
# Need to enable centosplus for the image libselinux issue
RUN yum install -y yum-utils
RUN yum-config-manager --enable centosplus
RUN yum install -y puppet tar

Once Puppet is installed we can apply any manifest to the image; we just need to put the right files in the right places. If we need extra modules we can copy them from the host, maybe using librarian-puppet to manage them (a sketch follows below). Note that I avoid running librarian, or any other tool, in the image, as that would require installing extra packages that may not be needed at runtime.

ADD modules/ /etc/puppet/modules/
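If you do manage modules with librarian-puppet on the host, a hypothetical Puppetfile might look like the following (the forge URL and module name are only examples, not the patched module used later in this post); running librarian-puppet install --path modules on the host then produces the modules/ directory that the ADD instruction above copies into the image:

forge 'https://forgeapi.puppetlabs.com'

mod 'rtyler/jenkins'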

The main manifest can go anywhere, but the default place is /etc/puppet/manifests/site.pp. The default Hiera data configuration goes in /var/lib/hiera/common.yaml.

ADD site.pp /etc/puppet/manifests/
ADD common.yaml /var/lib/hiera/common.yaml

Then we can just run puppet apply and check that no errors occurred:

RUN puppet apply /etc/puppet/manifests/site.pp --verbose --detailed-exitcodes || [ $? -eq 2 ]

After that it’s the usual Docker CMD configuration. In this case we call the Jenkins slave jar from a shell script that handles some environment variables with information about the Jenkins master, so it can be overridden at runtime with docker run -e.

ADD cmd.sh /cmd.sh
#ENV JENKINS_USERNAME jenkins
#ENV JENKINS_PASSWORD jenkins
#ENV JENKINS_MASTER http://jenkins:8080
CMD su jenkins-slave -c '/bin/sh /cmd.sh'
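As a rough usage sketch (the image tag and master URL are placeholders, not taken from the post), the master location can then be overridden when starting the container:

docker run -e JENKINS_MASTER=http://jenkins.example.com:8080 \
           -e JENKINS_USERNAME=jenkins -e JENKINS_PASSWORD=jenkins \
           my/jenkins-slave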

The Puppet configuration is simple enough:

node 'default' {
  package { 'wget':
    ensure => present,
  } ->
  class { '::jenkins::slave': }
}

and Hiera customizations, using a patched Jenkins module for this to work.

# Jenkins slave
jenkins::slave::ensure: stopped
jenkins::slave::enable: false

And that’s all; you can see the full source code on GitHub. If you are into Docker, check out this IBM research paper comparing virtual machine (KVM) and Linux container (Docker) performance.


Categories: FLOSS Project Planets

Openmeetings Team: Commercial Openmeetings Support FAQ

Mon, 2014-08-18 06:06

This is our attempt to keep support knowledge structured and save you time when asking questions.

Contents

Pricing

Openmeetings is free software under the Apache License and, what is even better, you can get everything for free. You can install Openmeetings yourself for free, or get free support at openmeetings-user@incubator.apache.org.

Still, you may want to save time by asking us (support-om@dataved.ru) for assistance. The price is calculated by multiplying the required hours by the hourly rate (€50/hour).

Service                                       | Hours
----------------------------------------------|------
Initial server & network check                | 1
Stress server & network check                 | 2
Installation or upgrade of a supported system | 10
Configuration of the integrated room hosting  | 2
Integrated room hosting (per month)           | 1
Moodle installation                           | 3
Moodle plug-in installation                   | 2
Upgrade of security certificates              | 2
Site migration                                | 12
Simple re-branding                            | 2
Admin access to the demo server               | 1
Customizations                                | ?

Supported systems include Openmeetings, a number of CMS and integration plug-ins.

Installation requires you to answer a number of questions (see the Openmeetings installation questions and the CMS plug-in installation questions), so that we can set up the system the way you like.

Why does admin access to the demo server cost something?

We need to verify the users who get access to sensitive data, and payment is the simplest verification. Please note that you get a limited time frame in which to use your admin access.

Do you offer hosting packages?

Yes, we do.

Customization

Is it possible to customize Openmeetings' look & feel?

Yes, that's possible. That's the most visible advantage of Openmeetings' open source nature.

Is it possible to change several things at once?

Unless you are a trusted long-term customer, we start with small agile projects containing minor customizations. That saves you money, because we address issues in the order that is critical for your business.

Why don't you provide estimates for the whole project?

Again, unless you are a trusted long-term customer, we cannot just produce estimates from your text descriptions without understanding your business and expectations. The same installation on the Russian market costs 5 times more because we work here for large enterprises who require extra security, reliability and training. Small consulting companies get affordable prices because they usually don't want us to configure their internal VPNs and routers (which they don't actually have) as a part of the installation services.

What happens with code which is developed during commercial support?

General changes which are useful for the project (e.g. bug fixes or generally useful new features) are developed under the Apache license and committed to the open source trunk. This helps customers update to a newer version smoothly.

There may be some exceptions. For example, for specific customizations we maintain a private source control system for your project, and this costs extra.

Integration

Which CMS integration modules do you provide?

We provide Drupal, Wordpress, Joomla, Alfresco, Typo3 CMS modules (exact supported version numbers can be clarified in each particular case). We also provide integration to SugarCRM, Zimbra and some other popular systems.

Could you please send us a module for CMS (content management system) integration?

We don't provide these modules without installation services. Succeeding in the integration process requires collecting integration requirements first, and we cannot afford a failure.

Could you please send me a demo link of Openmeetings integration example?

Here is a list of available demos.

We do not provide our clients with the admin account on our demo servers. If you want to try how it works with administrative credentials, please write us and we will send you login information. Please check the Pricing section for more details.

We can install demos for you.

Is it possible to integrate Open Meetings with Microsoft .Net (JBoss, etc)?

Openmeetings integrates with other applications by means of the language-independent SOAP protocol. We can integrate Openmeetings with any Internet application.

Does Openmeetings work on mobile devices?

This feature is considered experimental. We can integrate your favorite SIP phone, like Linphone, with Openmeetings via a gateway, or re-compile the existing Flash client for your device. Flash support varies across devices, hence for the latter option we cannot always guarantee the project's success.

Technology

Which tools & technology are behind Openmeetings?

The server side is written in Java, the client side uses OpenLaszlo, Flash and Java.

What can I do about echo?

Use headphones and manually mute microphones, or try speakerphones.

I have got a browser crash or my client hangs. What can I do?

Please find more information about resolving browser crashes here.

Process

Why does it take so many hours?

Don't hesitate to ask if the task estimate is more than you expect. We strive to make our process transparent.

The commercial development cycle contains the following stages: understand what should be done > create a tracker issue and transfer the task to a programmer > fix > compile > test > commit to source control > verify the change with a second pair of eyes (ask another person to compile and deploy the new source to the test server) > deploy to the production server. Then comes the most important part: you get a week to verify the changes yourself.

Is it possible to talk with somebody from commercial support team personally?

If you need a demo account or would like to talk to us personally, we can set up a meeting in a room on our demo server. Just discuss the details and what time suits you by e-mail.

We insist on using OpenMeetings for such meetings. This gives you some experience of using OpenMeetings and helps you understand whether OpenMeetings meets your expectations or not. That is why phone and Skype calls are undesirable for us.

Installation Questionnaire

Please answer the following questions to ensure a proper installation.

  • Have you tried a demo?
  • How many conference rooms do you need? Do you need rooms for webinars, or for face-to-face talks, or both? Which resolution would you like to have? How would you name the rooms?
  • What kind of server do you have? You need a dedicated server with a minimum of 2-4 GB of RAM and 2-3 dual-core or quad-core CPUs. Recommended operating systems are Ubuntu and Debian. Please send us administrator credentials for your server for remote access.
  • Do the users and the server have enough bandwidth, and is the network quality sufficient? You generally need 1 Mbit/s for 4 users at 200x200 px resolution.
  • Which hardware do users plan to use?
  • Are there any users who will use the system on a regular basis and would benefit from warm-up training?
  • Are the server or the users protected by a firewall? Does your server have ports 80, 5080, 1935 and 8088 open? Does the firewall limit RTMP or HTTP traffic?
  • How many users do you plan to have? For high-load solutions, do you have a production-grade database installed? Please send us administrator credentials, or credentials of a user who can create and manage the Openmeetings database.
  • Do you want to have email notifications? If yes, please send us the SMTP server host and port, the Openmeetings server email address for outgoing correspondence, and credentials of an SMTP user that can send emails from this address.
  • Which timezone, language and country are the default for most of your users?
  • Do you want to close open registration on your Openmeetings site?
  • May we add the link to your site as an example of successful integration to the end of this page?
  • What other requirements do you have in mind?
Installation Check

A working Openmeetings installation is an important prerequisite for rebranding and integration services.

  • Check that you can hear and see other participants.
  • Ensure recordings work for your installation.
  • Ensure you can successfully put Word documents on the whiteboard.
Rebranding Checklist

This checklist ensures you've completed the steps required for product rebranding.

  • Check that Openmeetings is installed correctly.
  • Provide Openmeetings server and remote server access credentials.
  • Provide a logo (40 pixel height).
  • Provide your company name, the string for the conference URL (the so-called context), and the browser window title.
  • Specify company style colors (light background, window border color).
Integration Questionnaire

Please answer the following questions to ensure proper integration.

  • Do you have Openmeetings installed correctly?
  • Which system do you want to integrate with? Which version?
  • As for the systems being integrated, which servers are they located on? Please provide us with administrator credentials for both systems for remote access.
  • Where on the website shall the links to the conference rooms be displayed?
  • Which rooms should be visible on the site?
  • What happens with the recordings users make in the conference room? Should a user be able to place a link to a recording they made?
  • May we add the link to your site as an example of successful integration to the end of this page?
Business Edition

We do offer a Business Edition of Openmeetings. It is compiled from the same sources; the difference is in the configuration service, which includes SIP integration with selected VoIP providers. If you opt for SIP integration, your users can use Openmeetings from mobile devices via a SIP gateway.

There is no fixed price for this edition. The required effort is billed at the standard hourly rate and depends on the complexity of the client's network infrastructure and the number of SIP providers.

Guarantees and commitments

Please take into account that you are not buying a software product, as OpenMeetings itself is free; you are hiring our developers for some time. In particular, this means that we don’t commit to fixing bugs in OpenMeetings unless you pay for the work at our usual rate.

The good things here are that:

  • Usually we install a release version for clients; it is well tested and stable enough.
  • Sometimes we fix critical problems and make security updates for free, but all such cases are considered individually; this is not a general rule.

To be sure that OpenMeetings is what you really need, you should try our demo server before we start a project. Our installations provide exactly the same functionality as the demo servers do. So if you cannot get the desired quality on the demo, most probably you will not get it on your own installation either. This is especially true regarding the quality of sound, video, recordings and screen sharing.

We are not responsible for client-side problems. If your users don’t have enough bandwidth or RAM on their workstations, we cannot resolve such problems. Again, try the demo server with your equipment first before making a decision.

Unless separately agreed, you have one week to verify that the installed system meets your expectations, and during this period we will help resolve any issues you face. In the case of the hosting service, this week is included in the first hosting period.

We cannot offer a refund for hours that have already been spent on your project, or for payments for the hosting services.

Categories: FLOSS Project Planets

James Duncan: Stefan Sagmeister says you’re not a storyteller

Sun, 2014-08-17 17:00

Designer Stefan Sagmeister takes a critical stance towards the storytelling meme that’s so very popular right now. I think he’s overreaching, and possibly doing so to make a point, but it does seem like the label has been adopted by the asshats who eventually show up and ruin every good metaphor. In any case, it’s a provocative short piece.

via A Photo Editor via permalink
Categories: FLOSS Project Planets

James Duncan: The Internet’s original sin

Sun, 2014-08-17 17:00

Twenty years into the ad-supported web, Ethan Zuckerman argues that it isn’t too late to stop the seemingly inexorable move towards a centralized and heavily surveilled Internet. I hope he’s right.

via permalink
Categories: FLOSS Project Planets

Jean-Baptiste Onofré: Apache Syncope backend with Apache Karaf

Sun, 2014-08-17 00:24

Apache Syncope is an identity manager (IdM). It comes with a web console where you can manage users, attributes, roles, etc.
It also comes with a REST API that allows integration with other applications.

By default, Syncope has its own database, but it can also “façade” another backend (LDAP, ActiveDirectory, JDBC) by using ConnId.

In the next releases (4.0.0, 3.0.2, 2.4.0, and 2.3.7), Karaf provides (by default) a SyncopeLoginModule that allows you to use Syncope as the backend for users and roles.

This blog introduces this new feature and explains how to configure and use it.

Installing Apache Syncope

The easiest way to start with Syncope is to use the Syncope standalone distribution. It comes with an Apache Tomcat instance with the different Syncope modules already installed.

You can download the Syncope standalone distribution archive from http://www.apache.org/dyn/closer.cgi/syncope/1.1.8/syncope-standalone-1.1.8-distribution.zip.

Uncompress the distribution in the directory of your choice:

$ unzip syncope-standalone-1.1.8-distribution.zip

You can find the ready-to-use Tomcat instance in the extracted directory. We can now start Tomcat:

$ cd syncope-standalone-1.1.8
$ cd apache-tomcat-7.0.54
$ bin/startup.sh

The Tomcat instance runs on port 9080.

You can access the Syncope console by pointing your browser on http://localhost:9080/syncope-console.

The default admin username is “admin”, and password is “password”.

The Syncope REST API is available on http://localhost:9080/syncope/cxf.

The purpose is to use Syncope as the backend for Karaf users and roles (in the default “karaf” security realm).
So, first, we create the “admin” role in Syncope:

Now, we can create a user of our choice, let's say “myuser” with “myuser01” as the password.

As we want “myuser” to be a Karaf administrator, we assign the “admin” role to “myuser”.

“myuser” has been created.

Syncope is now ready to be used by Karaf (including users and roles).

Karaf SyncopeLoginModule

Karaf provides a complete security framework allowing you to use JAAS in an OSGi-compliant way.

Karaf itself uses a realm named “karaf”: it’s the one used by SSH, JMX, WebConsole by default.

By default, Karaf uses two login modules for the “karaf” realm:

  • the PropertiesLoginModule uses the etc/users.properties as storage for users and roles (with user password)
  • the PublickeyLoginModule uses the etc/keys.properties as storage for users and roles (with user public key)

In the coming Karaf versions (3.0.2, 2.4.0, 2.3.7), a new login module is available: the SyncopeLoginModule.

To enable the SyncopeLoginModule, we just create a blueprint descriptor that we drop into the deploy folder. The configuration of the Syncope login module is pretty simple: it just requires the address of the Syncope REST API:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

  <jaas:config name="karaf" rank="1">
    <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                 flags="required">
      address=http://localhost:9080/syncope/cxf
    </jaas:module>
  </jaas:config>

</blueprint>

You can see that the login module is enabled for the “karaf” realm using the jaas:realm-list command:

karaf@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule

We can now log in over SSH using “myuser”, which is configured in Syncope:

~$ ssh -p 8101 myuser@localhost
The authenticity of host '[localhost]:8101 ([127.0.0.1]:8101)' can't be established.
DSA key fingerprint is b3:4a:57:0e:b8:2c:7e:e6:1c:f1:e2:88:dc:bf:f9:8c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '[localhost]:8101' (DSA) to the list of known hosts.
Password authentication
Password:
        __ __                  ____
       / //_/____ __________ _/ __/
      / ,<  / __ `/ ___/ __ `/ /_
     / /| |/ /_/ / / / /_/ / __/
    /_/ |_|\__,_/_/  \__,_/_/

  Apache Karaf (4.0.0-SNAPSHOT)

Hit '<tab>' for a list of available commands and '[cmd] --help' for help on a specific command.
Hit 'system:shutdown' to shutdown Karaf.
Hit '<ctrl-d>' or type 'logout' to disconnect shell from current session.

myuser@root()>

Our Karaf instance now uses Syncope for users and roles.

Karaf SyncopeBackendEngine

In addition to the login module, Karaf also ships a SyncopeBackendEngine. The purpose of the Syncope backend engine is to manipulate users and roles directly from Karaf. Thanks to the backend engine, you can list users, add a new user, and so on, directly from Karaf.

However, for security and consistency reasons, the SyncopeBackendEngine only supports listing the users and roles defined in Syncope: the creation or deletion of a user or role directly from Karaf is disabled, as those operations should be performed from the Syncope console.

To enable the Syncope backend engine, you have to register the backend engine as an OSGi service. Moreover, the SyncopeBackendEngine requires two additional options on the login module: admin.user and admin.password, corresponding to a Syncope admin user.

We have to update the blueprint descriptor like this:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:jaas="http://karaf.apache.org/xmlns/jaas/v1.1.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.0.0">

  <jaas:config name="karaf" rank="5">
    <jaas:module className="org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule"
                 flags="required">
      address=http://localhost:9080/syncope/cxf
      admin.user=admin
      admin.password=password
    </jaas:module>
  </jaas:config>

  <service interface="org.apache.karaf.jaas.modules.BackingEngineFactory">
    <bean class="org.apache.karaf.jaas.modules.syncope.SyncopeBackingEngineFactory"/>
  </service>

</blueprint>

With the SyncopeBackendEngineFactory registered as an OSGi service, we can, for instance, list the users (and their roles) defined in Syncope.

To do it, we can use the jaas:user-list command:

myuser@root()> jaas:realm-list
Index | Realm Name | Login Module Class Name
-----------------------------------------------------------------------------
1     | karaf      | org.apache.karaf.jaas.modules.syncope.SyncopeLoginModule
myuser@root()> jaas:realm-manage --index 1
myuser@root()> jaas:user-list
User Name | Group | Role
------------------------------------
rossini   |       | root
rossini   |       | otherchild
verdi     |       | root
verdi     |       | child
verdi     |       | citizen
vivaldi   |       |
bellini   |       | managingDirector
puccini   |       | artDirector
myuser    |       | admin

We can see all the users and roles defined in Syncope, including our “myuser” with our “admin” role.

Using Karaf JAAS realms

In Karaf, you can create as many JAAS realms as you want.
This means that existing applications or your own applications can directly use a realm to delegate authentication and authorization.

For instance, Apache CXF provides a JAASLoginInterceptor allowing you to add authentication by configuration. The following Spring or Blueprint snippet shows how to use the “karaf” JAAS realm:

<jaxws:endpoint address="/service">
  <jaxws:inInterceptors>
    <ref bean="authenticationInterceptor"/>
  </jaxws:inInterceptors>
</jaxws:endpoint>

<bean id="authenticationInterceptor" class="org.apache.cxf.interceptor.security.JAASLoginInterceptor">
  <property name="contextName" value="karaf"/>
</bean>

The same configuration can be applied to a jaxrs endpoint instead of a jaxws endpoint, as sketched below.
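A rough sketch of the jaxrs equivalent (assuming the CXF jaxrs namespace, xmlns:jaxrs="http://cxf.apache.org/jaxrs", is declared and reusing the same authenticationInterceptor bean from above):

<jaxrs:server address="/service">
  <jaxrs:inInterceptors>
    <ref bean="authenticationInterceptor"/>
  </jaxrs:inInterceptors>
</jaxrs:server>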

As Pax Web leverages Jetty, you can also define your Jetty security configuration in your web application.
For instance, in the META-INF/spring/jetty-security.xml of your application, you can define the security constraints:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

  <bean id="loginService" class="org.eclipse.jetty.plus.jaas.JAASLoginService">
    <property name="name" value="karaf" />
    <property name="loginModuleName" value="karaf" />
  </bean>

  <bean id="constraint" class="org.eclipse.jetty.util.security.Constraint">
    <property name="name" value="BASIC"/>
    <property name="roles" value="user"/>
    <property name="authenticate" value="true"/>
  </bean>

  <bean id="constraintMapping" class="org.eclipse.jetty.security.ConstraintMapping">
    <property name="constraint" ref="constraint"/>
    <property name="pathSpec" value="/*"/>
  </bean>

  <bean id="securityHandler" class="org.eclipse.jetty.security.ConstraintSecurityHandler">
    <property name="authenticator">
      <bean class="org.eclipse.jetty.security.authentication.BasicAuthenticator"/>
    </property>
    <property name="constraintMappings">
      <list>
        <ref bean="constraintMapping"/>
      </list>
    </property>
    <property name="loginService" ref="loginService" />
    <property name="strict" value="false" />
  </bean>

</beans>

We can link the security constraint in the web.xml:

<?xml version="1.0" encoding="UTF-8"?>
<web-app version="2.5" xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd">
  <display-name>example_application</display-name>
  <welcome-file-list>
    <welcome-file>index.jsp</welcome-file>
  </welcome-file-list>
  <security-constraint>
    <display-name>authenticated</display-name>
    <web-resource-collection>
      <web-resource-name>All files</web-resource-name>
      <description/>
      <url-pattern>/*</url-pattern>
    </web-resource-collection>
    <auth-constraint>
      <description/>
      <role-name>user</role-name>
    </auth-constraint>
  </security-constraint>
  <login-config>
    <auth-method>BASIC</auth-method>
    <realm-name>karaf</realm-name>
  </login-config>
  <security-role>
    <description/>
    <role-name>user</role-name>
  </security-role>
</web-app>

Thanks to that, your web application will use the “karaf” JAAS realm, which can delegate the storage of users and roles to Syncope.

Thanks to the Syncope login module, Karaf becomes even more flexible for the authentication and authorization of users, as the users/roles backend doesn’t have to be embedded in Karaf itself (as with the PropertiesLoginModule): Karaf can delegate to Syncope, which is able to façade many different actual backends.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-08-15

Fri, 2014-08-15 18:58
Categories: FLOSS Project Planets

James Duncan: A case against time zones

Fri, 2014-08-15 17:00

Vox’s Matt Yglesias makes a case for eliminating time zones entirely based on the fact that you could more easily coordinate things between locations on the planet. I’d settle for just getting rid of daylight savings time.

Categories: FLOSS Project Planets

James Duncan: On living elsewhere right now

Fri, 2014-08-15 17:00

There is no shortage of things to be upset about right now in America. The recent events in Ferguson are just the tip of the iceberg when it comes to the issues of race and the militarization of police forces. But if those problems don’t bother you so much, you could always choose the abuse of state surveillance powers, the obvious buy-off of politicians by moneyed interests, the plethora of firearms-related problems and incidents, the use of military force in questionable ways around the globe, the unflinching support of some governments that abuse human rights while denouncing others for far less, and so many more. Take your pick.

While I didn’t become an expat for any of those reasons—I simply took an interesting job to work with people I like in an interesting city where I could live with my girlfriend who didn’t want to leave Europe—the fact that all of those things and more are happening makes it really easy not to miss living at home right now.

Of course, it’s not a utopia here. There are ugly things going on in Europe right now. Xenophobia and fascism are seeing a rise. There are troubles not far away in Ukraine. And the foundations of the European financial system still need quite a bit of work. But, it’s probably impossible to find the perfect place right now that fits every item on your personal wish list. For me, right now, Berlin is working out pretty well. As noted in a conversation on twitter between Joe Stump, Alex Payne, and Rabble, it’s a nice progressive international bubble.

On the other hand, sometimes I’m surprised that I don’t miss living in America. At least I haven’t yet missed it—other than a longing for a few favorite restaurants, seeing my family, and hanging out with people who are dear to me. But weeks like this, I’m not very surprised at all.

Frankly, that kind of makes me a little bit sad. After all, I agree with Anil Dash when he says that it would be better if people who care would stick around.

Categories: FLOSS Project Planets

Tim Bish: Coming in ActiveMQ v5.11 In-Memory Scheduler Store

Fri, 2014-08-15 12:21
Up to this point your only option for doing scheduled message delivery in ActiveMQ required that you start a broker with persistence enabled. Well, that's not entirely true: if you wanted to apply some configuration magic and start a broker in non-persistent mode, you could add an instantiated version of the KahaDB-based JobSchedulerStore, but that's not really ideal when you don't need or want full persistence, such as during unit tests or in some embedded cases.

So from here on out, if you want to use scheduled message dispatch but don't require the overhead of offline storage for your broker, just create a Broker instance that has persistence disabled and scheduler support enabled.

Here's an example of an embedded Broker that will use an in-memory scheduler:

        BrokerService answer = new BrokerService();
        answer.setPersistent(false);
        answer.setSchedulerSupport(true);
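
For completeness, here is a rough sketch (not from the original post) of a producer that asks the scheduler to delay delivery using the standard ScheduledMessage.AMQ_SCHEDULED_DELAY property; it assumes the broker above has been started and that the usual javax.jms and org.apache.activemq imports are in place:

        // connect to the embedded broker over the VM transport
        Connection connection = new ActiveMQConnectionFactory("vm://localhost").createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(session.createQueue("example.queue"));

        TextMessage message = session.createTextMessage("delayed hello");
        // the in-memory scheduler store holds the message for 60 seconds before delivery
        message.setLongProperty(ScheduledMessage.AMQ_SCHEDULED_DELAY, 60000L);
        producer.send(message);
        connection.close();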

There are a lot more changes under the hood in the scheduled message bits coming in ActiveMQ 5.11.0, but I'll cover those in another post.

Categories: FLOSS Project Planets

Sergey Beryozkin: Learn JOSE and become a better Web Service Developer

Fri, 2014-08-15 05:43
The work around OAuth2 and JOSE in particular has inspired me.

So much so that I've ordered several books from Amazon.co.uk - and it's been quite a while since the idea of buying a book last occurred to me; and several books, in the age of Google? - see, it did inspire me.

Sometimes we developers think that we know it all, and if not all, then we think we won't need that extra piece of knowledge, being the experts we are. Software engineering is not easy. We have deadlines and our regular work to take good care of. No time for reading books: the busier and older we become, the less time we have.

This is why I like OAuth2 and JOSE. JOSE, specifically, is a very fine effort: it represents a set of nicely aligned specifications tackling the various issues related to signing and encrypting arbitrary payloads, using simple and effective JSON metadata to describe the signature and encryption operations. It's led by people who understand what they do. JOSE deals only with the best, most trusted and most understood signature and encryption algorithms. It's a set of 'books' about the latest in cryptography.

It is already starting to affect the way we do secure HTTP services, and it will keep doing so. I already claimed this in the earlier post about OAuth2 and repeat it again here.

Learn JOSE, understand it, start using it, become a better engineer !
Categories: FLOSS Project Planets

Sergey Beryozkin: JAX-RS is not only about REST

Fri, 2014-08-15 05:24
I've been planning to post this 'philosophical' piece for a while.

The JAX-RS specification (Java API for RESTful Services) really got off the ground a long time ago. JAX-RS 2.0, with its brilliant new features and three JAX-RS 2.0 frameworks around (there will possibly be more, we never know), is contributing and will further contribute to the popularity of JAX-RS.

JAX-RS 2.1 work will go ahead soon enough, and it will be another great specification; I've no doubt the spec leads will take care of making that happen, the same way they did for 2.0 :-).

The central point of this post, though, is that JAX-RS is actually not only about REST. It may come as a shock to some people, but the beauty of this specification is that it has completely re-opened the HTTP web service development space and will continue doing so for quite a few more years to come.

It's an important fact: developers always want to do something new, even though existing web service technologies have proven able to deliver: many, many people have written SOAP endpoints that work, and many, many people have designed endpoints according to the REST style, where the Web rules. But REST is not the end of the web services road; it is only a set of proven rules.

We all know many JAX-RS endpoints are not necessarily that RESTful; in a nutshell, they are simple HTTP endpoints, often with two HTTP verbs at most - and that is absolutely OK: JAX-RS does and will help no matter how far one would like to go in their HTTP endpoint design.






Categories: FLOSS Project Planets

Matthias Wessendorf: UnifiedPush Server: Docker, WildFly and another Beta release!

Fri, 2014-08-15 05:07

Today we are announcing the second beta release of our 1.0.0 version. This release contains several improvements:

  • WildFly 8.x support
  • PostgreSQL fix
  • Scheduler component for deleting analytics older than 30 days
  • Improvements on the AdminUI
  • Documentation

The complete list of included items is available on our JIRA instance.

With the release of the server we also released new versions of the senders for Java and Node.js!

Docker

The team is extremely excited about the work that Docktor Bruno Oliveira did on our new Docker images:

Check them out!

Documentation

As mentioned above, the documentation for the UnifiedPush Server has been reorganized, including an all new guide on how to use the UnifiedPush Server.

Demos

To get easily started using the UnifiedPush Server we have a bunch of demos, supporting various client platforms:

  • Android
  • Apache Cordova (with jQuery and Angular/Ionic)
  • iOS

The simple HelloWorld examples are located here. Some more advanced examples, including a Picketlink secured JAX-RS application, as well as a Fabric8 based Proxy, are available here.

Docker

Bruno Oliveira did Docker images for the Quickstart as well:

Feedback

We hope you enjoy the bits and we do appreciate your feedback! Swing by on our mailing list! We are looking forward to hearing from you!

NOTE: the Openshift online offering will be updated w/in the next day or two

Enjoy!


Categories: FLOSS Project Planets

Justin Mason: Links for 2014-08-14

Thu, 2014-08-14 18:58
Categories: FLOSS Project Planets

James Duncan: Police in Ferguson break silence

Thu, 2014-08-14 17:00

The day after his force was relieved of duty and control of the situation handed over to the Missouri Highway Patrol, the police chief of Ferguson announced that nothing specific went into the decision to release the name of the officer who killed Michael Brown.

Categories: FLOSS Project Planets

Lars Heinemann: Improved server adapters for JBoss Fuse Tooling

Thu, 2014-08-14 12:43
Today I'd like to share some information regarding the upcoming changes to the server adapters in JBoss Fuse Tooling.

In the versions up to the one included in JBoss Tools Integration Stack 4.1.5, the adapters could only start and stop the server and connect to the server's Karaf shell via SSH. All the publishing / deploy logic was spread over several views and perspectives, which isn't a good user experience at all - so we decided to give that part some more love.

Deployment / Publishing

In the upcoming version we abandoned all the existing deployment bits and concentrated the deployment logic in the view where it belongs... the Servers view. We are now using the standard publishing functions of the Eclipse Servers view to deploy projects to servers. That can be done via the Add/Remove context menu item on each server node. You will see a dialog like this:



In that dialog you can deploy / undeploy projects you have in your workspace. If the server is running and you checked the automatic publish option in the server's properties, the deployment / removal will be done right after you finish that dialog. Also, if you make changes to the project in your workspace, it will be republished automatically if that is enabled in the settings. Once the project is deployed you can see it below the server's node in the Servers view like this:



The context menu for the module node will provide actions for starting / stopping and removing the module from the server.

That's easy deployment now, isn't it?


What else?

That was not the only improvement!

Improved New Server Wizard

The wizard for creating a new server has been improved. You can now download all the supported servers, and you will be provided with a progress bar for that download too. Your passwords for the servers are now stored encrypted inside the Eclipse secure storage. And finally, you can now download the JBoss Fuse servers too, which was not possible in the past because of some restrictions of the JBoss Portal.

Updated Server Adapters

We updated the server adapters for Apache Karaf, Apache ServiceMix and JBoss Fuse to contain the latest versions available. We also added a new adapter for Fabric8 Karaf servers.

Server Debug Support

Yes, finally you will be able to add breakpoints to your Java classes and debug your projects on the server... it's coming soon!

JMX support

You can now enable JMX content for the Servers view, which gives you access to the MBeans and additional nodes available via JMX. No more need to switch to the JMX view for that!

Want to try out the new stuff? Check out our nightly update site for Eclipse Luna, but remember it's NOT a stable release on that site!

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-08-13

Wed, 2014-08-13 18:58
Categories: FLOSS Project Planets