Planet Apache


Lars Heinemann: Where can I get JBoss Fuse Tooling and how to install it?

Mon, 2014-07-21 08:16
These questions have been raised over and over again, so it's time to cover the topic in a bit more detail. In the following posts I'd like to show you how to install our tooling for Apache Camel into Eclipse. We will cover two versions of the tooling: the Kepler-based GA version, which already ships with the JBoss Integration Stack, and the upcoming tooling for the Luna version of Eclipse.
So these are your options...

Option 1: The Eclipse Kepler-based version, which is available as the GA release shipped with the JBoss Integration Stack
- Click here -


Option 2: The Eclipse Luna-based version, which will be released for the Luna-based JBDS 8 and the Integration Stack. That version is currently being worked on and there is no release yet. If you want to go for that option you will have to use a development nightly build, but it will contain the latest set of new features.
- Click here -

Categories: FLOSS Project Planets

Lars Heinemann: How to install JBoss Fuse Tooling into Eclipse Kepler

Mon, 2014-07-21 08:15
Installation of JBoss Fuse Tooling for Eclipse Kepler:
For this guide we will use a vanilla Eclipse Kepler from the Eclipse Download page and install JBoss Fuse Tooling directly from the update site instead of installing JBDS or the Integration Stack. If you were looking for those installation methods, there is a good blog post from Paul Leacu available HERE.

Let's choose the download for Eclipse Standard 4.3.2. Once the download has finished, unpack the archive onto your hard drive and start Eclipse by executing the Eclipse launcher. Choose your workspace and you should find yourself on the Eclipse Welcome screen (only if you start it for the first time).
Welcome to your new Eclipse Kepler installation...
Now let's open the Help menu from the top menu bar and select the entry "Install New Software...".

Let's define a new update site location for JBoss Fuse Tooling. Click the "Add" button next to the combo box for the update sites.


Now enter the following...

Name: JBoss Fuse Tooling (Kepler)
Location: http://download.jboss.org/jbosstools/updates/integration/kepler/integration-stack/fuse-tooling/7.2.0/all/repo/

Click the OK button to add the new update site to your Eclipse installation.
A new dialog will open up and ask you what features to install from the new update site:


There are three features available from that update site:

JBoss Fuse Camel Editor Feature:
This feature gives you the route editor for Apache Camel routes, a new project wizard to set up new integration modules and the option to launch your new routes locally for testing purposes (a short sketch of such a route follows the feature descriptions below).

JBoss Fuse Runtimes Feature:
This feature allows you to monitor your routes, trace messages and edit remote routes, and it also contains the Fabric bits needed for working with Fuse Fabric.

JBoss Fuse Server Extension Feature:
When installing this feature you will get server adapters for Apache ServiceMix, Apache Karaf and JBoss Fuse runtimes. It allows you to start / stop those servers and to connect to their shell. Deployment options are also available.
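
If you haven't worked with Camel before, the following sketch gives a rough idea of what such a route looks like. It uses Camel's Java DSL purely for illustration – the endpoint URIs and folder names are made up, and the route editor typically works on the XML form of a route rather than on Java code:

import org.apache.camel.builder.RouteBuilder;

public class InboxToOutboxRoute extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Poll an example input folder, log each file name,
        // and copy the file to an example output folder.
        from("file:data/inbox?noop=true")
            .log("Processing ${file:name}")
            .to("file:data/outbox");
    }
}

The route editor gives you a graphical view of exactly this kind of from/to pipeline.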

Once you are done selecting your features, click the Next button. The following screen shows you what will be installed into your Kepler installation.


You can review your selection and make changes to it by clicking the Back button if needed. If all is fine you can click the Next button instead. This will lead you to the license agreement screen. You need to agree to the licenses in order to install the software.

Once that is done you can click the Next button again and the installation will start by downloading all required plugins and features, which are then installed into your Kepler folder. Before that happens, Eclipse will warn you that you are about to install unsigned content. This happens because our plugins are not signed, but that's nothing to worry about. Just click OK to proceed with the installation.


After everything is installed Eclipse will ask you for a restart.


Click the Yes button to restart Eclipse. When the restart is done you will be able to select the Fuse Integration perspective from the perspectives list.




Well done! You've installed JBoss Fuse Tooling for Eclipse Kepler! 
Categories: FLOSS Project Planets

Christian Grobmeier: Apache Log4j 2.0 is stable.

Sun, 2014-07-20 17:00

We did it.

Categories: FLOSS Project Planets

Marco Di Sabatino Di Diodoro: Managing Hippo CMS users with Apache Syncope

Sun, 2014-07-20 17:00
This post describes how to manage Hippo CMS users through Apache Syncope.
Categories: FLOSS Project Planets

Adrian Sutton: From Java to Swift

Sun, 2014-07-20 06:47

Ever since the public beta of OS X I’ve been meaning to get around to learning Objective-C but for one reason or another never found a real reason to. I’ve picked up bits and pieces of it and even written a couple of working utilities but those were pretty much entirely copy/paste from various sources. Essentially they were small enough and short-lived enough that I only needed the barest grasp of Objective-C syntax and no understanding of the core philosophies and idioms that really make a language what it is. This is probably best exemplified by the approach to memory management those utilities took: it won’t run for long, so just let it leak.

I do however have a ton of experience in Java and JavaScript plus knowledge and experience in a bunch of other languages to a wide range of extents. In other words, I’m not a complete moron, I’m just a complete moron with Objective-C.

Anyway, obviously when Swift was announced it was complete justification for my ignoring Objective-C all these years and interesting enough for me to actually get around to learning it.

So I’ve been building a small utility in Swift so I can pull information out of OS X’s system calendar from the command line and push it around to the various places where I happen to want it and can’t otherwise get it. The code is up on GitHub if you’re interested – code reviews and patches most welcome. It’s been a great little project for getting used to Swift the language without spending too much time trying to learn all the OS X APIs.

Language Features

Swift has some really nice language features that make dealing with common scenarios simple and clear. Unlike many languages it doesn’t seem to go too far with that though – it doesn’t seem likely that people will abuse its features and create overly complex or overly succinct code.

My favourite feature is the built-in optional support. A variable of type String is guaranteed not to be null; a variable of type String? might be. You can’t call any String methods on a String? variable directly – you have to unwrap it first, confirming it isn’t null. That would be painful if it weren’t for the ‘if let’ construct:

let events: String? = ""
if let events = events {
    events.utf16count()
}

I’ve dropped into a habit here which might be a bit overly clever – naming the unwrapped variable the same as the wrapped one. The main reason for this is that I can never think of a better name. I figure it’s much like having an if events != nil check.

APIs

Calling Swift a new language is correct but it would almost be more accurate to call it a new syntax instead. Swift does have its own core API which is unique to it, but that’s very limited. For the most part you’re actually dealing with the OS X (or iOS) APIs which are shared with Objective-C. Thus, people with experience developing in Objective-C quite obviously have a huge head start with Swift.

The other impact of sharing so many APIs with Objective-C is that some of the benefits of Swift get lost – especially around strict type checking and null reference safety. For example retrieving a list of calendar events is done via the EventKit method:

func eventsMatchingPredicate(predicate: NSPredicate!) -> [AnyObject]!

which is helpfully displayed inside Xcode using Swift syntax despite it being a pre-existing Objective-C API and almost certainly still implemented in Objective-C. However, if you look at the return type you see the downside of inheriting the Objective-C APIs: the method documentation says it returns [EKEvent] but the actual declaration is [AnyObject]! So we’ve lost both type safety and null reference safety, because Objective-C doesn’t have non-nullable references or generic arrays. It’s not a massive loss because those APIs are well tested and quite stable, so we’re extremely unlikely to be surprised by a null reference or an unexpected type in the array, but it does require some casting in our code and requires humans to read the documentation and check whether something can be null rather than having the compiler do it for us.

If Swift were intended to be a language that competes with Java, Python or Ruby, the legacy of the Objective-C APIs would be a real problem. However, Swift is designed specifically to work with those APIs, to be a relatively small but powerful step for OS X and iOS developers. In that context the legacy APIs are really just a small bump in the road that will smooth out over time.

Xcode

The other really big thing a Java developer notices switching to Swift is what a Java developer notices when switching to any other language – the tools suck. In particular, the Java IDEs are superb these days and make writing, navigating and refactoring code so much easier. Xcode doesn’t even come close. The available refactorings are quite primitive even for C and Objective-C and they aren’t supported at all for Swift.

The various project settings and preferences in Xcode are also a complete mystery – and judging from the various questions and explanations on the internet it doesn’t seem to be all that much clearer even to people with lots of experience. In reality I doubt it’s really much different to Java, which also has a ridiculous amount of complexity in IDE settings. The big difference is that in the Java world you (hopefully) start out by learning the basics using the standard command line tools directly. Doing so gives you a good understanding of the build and runtime setup and makes it much clearer what is controlling how your software is built and what is just setting up IDE preferences. Xcode does provide a full suite of command line developer tools, so hopefully I can learn more about them and get that basic understanding.

Finally, Xcode 6 beta 3 is horribly buggy. It’s a beta so I can forgive that but I’m surprised at just how bad it is even for a beta.

CocoaPods

This was a delight to stumble across.  Adding a dependency to a project was a daunting prospect in Xcode (jar files are surprisingly brilliant). I really don’t know what it did but it worked and I’m grateful. Libraries that aren’t available as pods are pretty much dead to me now. There does seem to be a pretty impressive array of libraries available for a wide range of tasks. Currently all of them are Objective-C libraries so you have to be able to understand Objective-C headers and examples and convert them to Swift but it’s not terribly difficult (and trivial for anyone with an Objective-C background).

Overall

Swift has a good feel about it – lots of neat features that keep code succinct. Also, it’s very hard not to like strict type checking with good type inference. With Apple pushing Swift as a replacement for Objective-C, over time the libraries and APIs will become more and more “Swift-like”. Xcode should improve to at least offer the basic refactorings it has for other languages, and stabilise, which will make it a workable IDE – exceeding the capabilities of what’s available for a lot of languages.

 

Most importantly, the vast majority of existing Objective-C developers seem to quite like it – plenty of issues raised as well, but overall generally positive.

I think the future for Swift looks bright.

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-07-18

Fri, 2014-07-18 18:58
Categories: FLOSS Project Planets

Bryan Pendleton: Walter Munk's wave experiment

Fri, 2014-07-18 17:34

The NPR website is carrying a nifty article about Walter Munk: The Most Astonishing Wave-Tracking Experiment Ever.

Yes, I'm asking a wave to tell me where it was born. Can you do that? Crazily enough, you can. Waves do have birthplaces. Once upon a time, one of the world's greatest oceanographers asked this very question.

Munk's experiment was not easy to carry out:

From a beach you can't see an old set of swells go by. They aren't that noticeable. Walter and his team had highly sensitive measuring devices that could spot swells that were very subtle, rising for a mile or two, then subsiding, with the peak being only a tenth of a millimeter high.

But what a fascinating result:

The swells they were tracking, when they reached Yakutat, Alaska, had indeed traveled halfway around the world. Working the data backward, Walter figured that the storm that had generated those swells had taken place two weeks earlier, in a remote patch of ocean near a bunch of snowy volcanic islands — Heard Island and the McDonald Islands, about 2,500 miles southwest of Perth, Australia.

Neat article, and neat to learn about Professor Munk, who I hadn't known of previously.

I wonder if he'd enjoy a visit to see the University of Edinburgh's FloWave simulator?

Categories: FLOSS Project Planets

Tim Bish: ActiveMQ-CPP v3.8.3 Released

Fri, 2014-07-18 14:30
It's been a while since we last released a new version of ActiveMQ-CPP, so I am pleased to announce that version 3.8.3 is now out. This is a bug-fix release that addresses some threading issues in the ConnectionAudit class, as well as some enhancements to the SSL code to allow finding the broker's domain name in a certificate that has multiple CN values.

The release page has all the details so head on over and download the new release.
Categories: FLOSS Project Planets

Joe Brockmeier: Happy 21st Birthday, Slackware – and Thanks, Patrick!

Thu, 2014-07-17 20:04

21 years ago today, Patrick J. Volkerding announced the 1.00 release of Slackware Linux to the comp.os.linux newsgroup. As Patrick wrote at the time, “This is a complete installation system designed for systems with a 3.5″ boot floppy. It has been tested extensively with a 386/IDE system.” Times, and technology, have changed quite a bit — but Slackware continues to stay true to Patrick’s original vision and provide users with “the most ‘UNIX-like’ Linux distribution out there” with simplicity and stability “while retaining a sense of tradition.”

Slackware had just turned five when I first discovered it and, by extension, Linux. It was the first Linux distribution that I’d ever used and it was a wonderful platform to learn on. Made even better by the fact that Patrick was quick to respond to emails asking for support, and provided gentle guidance to updating XFree86 so that I could actually use X on my blazing fast Pentium 133MHz machine with eight whopping megabytes of RAM.

Slackware wasn’t quite the first Linux distribution, but it outlived its predecessors as well as many Linux distributions that came after. Slackware has not only continued to provide new releases at steady intervals year after year, but it’s done so with a fairly small (but mighty!) core team of developers led by Patrick.

If you’re in the Linux or open source community, you should take a minute today to raise a glass to toast the Slackware distribution. I’ll be hoisting a beer (though a better one than PBR…) to Slackware, and its team. Thanks for introducing me to Linux, for staying true to your vision, and for providing so many users with so much goodness over the years. Here’s to Slackware, Patrick, and all the other folks who’ve made Slackware great over the years – and to many more releases and birthdays to come!

Categories: FLOSS Project Planets

Justin Mason: Links for 2014-07-17

Thu, 2014-07-17 18:58
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-07-16

Wed, 2014-07-16 18:58
Categories: FLOSS Project Planets

Michael McCandless: Build your own finite state transducer

Wed, 2014-07-16 16:23
Have you always wanted your very own Lucene finite state transducer (FST) but you couldn't figure out how to use Lucene's crazy APIs?

Then today is your lucky day! I just built a simple web application that creates an FST from the input/output strings that you enter.

If you just want a finite state automaton (no outputs) then enter only inputs, such as this example:



If all of your outputs are non-negative integers then the FST will use numeric outputs, where you sum up the outputs as you traverse a path to get the final output:

Finally, if the outputs are non-numeric then they are treated as strings, in which case you concatenate as you traverse the path:

The red arcs are the ones with the NEXT optimization: these arcs do not store a pointer to a node because their to-node is the very next node in the FST. This is a good optimization: it generally results in a large reduction of the FST size. The bolded arcs tell you the next node is final; this is most interesting when a prefix of another input is accepted, such as this example:



Here the "r" arc is bolded, telling you that "star" is accepted. Furthermore, that node following the "r" arc has a final output, telling you the overall output for "star" is "abc".

The web app is a simple Python WSGI app; source code is here. It invokes a simple Java tool as a subprocess; source code (including generics violations!) is here.
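
If you'd rather build the same kind of FST directly in Java instead of going through the web app, the underlying Lucene calls look roughly like this. It's a minimal sketch of the numeric-outputs case against the Lucene 4.x org.apache.lucene.util.fst API – class names and exact signatures have moved around between releases, so treat it as a starting point rather than copy-paste:

import org.apache.lucene.util.BytesRef;
import org.apache.lucene.util.IntsRef;
import org.apache.lucene.util.fst.Builder;
import org.apache.lucene.util.fst.FST;
import org.apache.lucene.util.fst.PositiveIntOutputs;
import org.apache.lucene.util.fst.Util;

public class TinyFST {
    public static void main(String[] args) throws Exception {
        // Numeric outputs: the output for an accepted input is the sum of
        // the outputs along its path through the FST.
        PositiveIntOutputs outputs = PositiveIntOutputs.getSingleton();
        Builder<Long> builder = new Builder<>(FST.INPUT_TYPE.BYTE1, outputs);

        // Inputs must be added in sorted order.
        IntsRef scratch = new IntsRef();
        builder.add(Util.toIntsRef(new BytesRef("cat"), scratch), 5L);
        builder.add(Util.toIntsRef(new BytesRef("dog"), scratch), 7L);
        builder.add(Util.toIntsRef(new BytesRef("dogs"), scratch), 12L);

        FST<Long> fst = builder.finish();

        // Look up an input; returns null if the FST does not accept it.
        Long value = Util.get(fst, new BytesRef("dogs"));
        System.out.println("dogs -> " + value); // prints 12
    }
}

For non-numeric outputs you would swap PositiveIntOutputs for a sequence-outputs implementation such as ByteSequenceOutputs, and the value returned for an accepted input is then the concatenation of the outputs along its path, as in the "star" → "abc" example above.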
Categories: FLOSS Project Planets

Bryan Pendleton: What an innocuous headline...

Wed, 2014-07-16 09:15

7 safe securities that yield more than 4%

Eveans and his team analyze more than 7,000 securities worldwide but only buy names that offer payouts no less than double the yield of the overall stock market — as well as reasonable valuation and competitive advantages that will keep earnings growing over time.

Sounds like a pleasant article to read, no?

Well, it turns out that the companies that they are recommending you invest in are:

  • Cigarette companies (Philip Morris)
  • Oil companies (Vanguard Resources, Williams Partners)
  • Leveraged buyout specialists (KKR)

I guess the good news is that they didn't include any arms dealers or pesticide manufacturers.

Categories: FLOSS Project Planets

Edward J. Yoon: Sincerity doesn't get through.

Wed, 2014-07-16 08:12
There are quite a few cases where the people dirtying the market are the opinion leaders. That goes for politics, and for IT too..
Sincerity rarely gets through.

Because the way the world works is not pure intentions, but give and take.
Categories: FLOSS Project Planets

Edward J. Yoon: A useless box

Wed, 2014-07-16 04:52


It's been a while since I've come across something this surprising +_+;
Categories: FLOSS Project Planets

Justin Mason: Links for 2014-07-15

Tue, 2014-07-15 18:58
Categories: FLOSS Project Planets

Edward J. Yoon: Kaggle ...

Tue, 2014-07-15 08:49
I think of it as a futuristic business model. Openness, collective intelligence, revenue sharing – these are completely on trend. It's easy to copy, but only a startup can do it. Even Google can't. It's a model that only works for a small, third-party organization off on its own island. As an acquisition it would only be worth considering for a consulting firm, so M&A is difficult too, and in the end it will have to survive on its own.

Dead or Alive.
Categories: FLOSS Project Planets

Edward J. Yoon: MapR raises $110M from Google

Tue, 2014-07-15 08:37
Wow. Granted, the founder is ex-Google. Come to think of it, most M&A and investment these days is in IT – and narrowing it further, big data is where the momentum is.

Korea, on the other hand, is still..

I think the reason is simple. It's probably similar to why Korea created the Korean Wave of TV dramas while Japan is strong in animation.. In the end I suspect it comes down to the logic of supply and demand.
Categories: FLOSS Project Planets

Carlos Sanchez: Anatomy of a DevOps Orchestration Engine: (I) Workflow

Tue, 2014-07-15 08:30

At MaestroDev we have been building what may be called, for lack of a better name, a DevOps Orchestration Engine, and it is long overdue to talk about what we have been doing there and, most importantly, how.

The basic purpose of the application is to tie together the different systems involved in a Continuous Delivery cycle – Continuous Integration server, SCM, build tools, packaging tools, cloud resources, notification systems,… – and streamline the process through these different tools. So it hooks into a bunch of popular tools to orchestrate interactions between them; an example:

This workflow – or composition, as we call it – will:

  1. download a war file from a Maven repository (previously built by Jenkins)
  2. start an Amazon EC2 instance with Tomcat preinstalled
  3. deploy the war
  4. checkout the acceptance tests from Git
  5. run some tests with Maven (Selenium tests using SauceLabs) against that instance
  6. wait for a user to confirm before moving to the next step (to record the human approval or to do some extra manual tests if needed)
  7. destroy the Amazon EC2 instance

Maestro provides a nice web UI that gives visibility into the composition execution and collects, in a single place, an aggregated log from all the tools that run during the composition.

 

But the power comes from combining compositions, as there are tasks for typical flows, such as forking and joining compositions, calling another composition in case of a failure, or waiting for a composition to finish.

Here we have a more complex setup with five compositions tied together.

  • * – A composition that calls compositions 1 and 2.
  • 1 – A Jenkins build
  • 2 – The acceptance tests composition mentioned before
  • 2a – Notification composition in case the acceptance tests fail
  • 3 – Deployment to production

So you can see that compositions are not just limited to build, test, deploy. The tasks can be combined as needed to build your specific process.

Tasks are contributed by plugins, easily written in Ruby or Java, which define what fields are needed in the UI and what to do with those fields and the composition context. Maestro includes a lot of prebuilt tasks, publicly available on GitHub, from executing shell scripts to Jenkins job creation or Amazon Route 53 record management – but anything is possible.

All the tasks share a common context and use sensible defaults, so if the SCM checkout path is not defined, a specific working directory is created for the composition, and that is reused by the Maven, Ant,… plugins to avoid copying and pasting the fields. That's also why an EC2 deprovision task doesn't need any configuration if there was a provision task earlier in the composition: by default it will just deprovision the instances started previously in the composition.

You can take a look at our Maestro public instance, which shows some examples and builds of public projects – mostly Puppet modules that are automatically built and deployed to the Puppet Forge, plus build and release compositions for the Maestro plugins. In the next posts I'll be talking about the technologies used and the distributed architecture of Maestro.

Next: (II) Architecture


Categories: FLOSS Project Planets

Bryan Pendleton: git clone vs fork

Mon, 2014-07-14 21:33

Two words that you'll often hear people say when discussing git are "fork" and "clone".

They are similar; they are related; they are not interchangeable.

The clone operation is built into git: git-clone - Clone a repository into a new directory.

Forking, on the other hand, is an operation which is used by a certain git workflow, made popular by GitHub, called the Fork and Pull Workflow:

The fork & pull model lets anyone fork an existing repository and push changes to their personal fork without requiring access be granted to the source repository. The changes must then be pulled into the source repository by the project maintainer. This model reduces the amount of friction for new contributors and is popular with open source projects because it allows people to work independently without upfront coordination.

The difference between forking and cloning is really a difference in intent and purpose:

  • The forked repository is mostly static. It exists in order to allow you to publish work for code review purposes. You don't do active development in your forked repository (in fact, you can't; because it doesn't exist on your computer, it exists on GitHub's server in the cloud).
  • The cloned repository is your active repo. It is where you do all your work. But other people generally don't have access to your personal cloned repo, because it's on your laptop. So that's why you have the forked repo, so you can push changes to it for others to see and review.
This picture from StackOverflow helps a lot: What is the difference between origin and upstream in github.

In this workflow, you both fork and clone: first you fork the repo that you are interested in, so that you have a separate repo that is clearly associated with your GitHub account.

Then, you clone that repo, and do your work. When and if you wish, you may push to your forked repo.

One thing that's sort of interesting is that you never directly update your forked repo from the original ("upstream") repo after the original "fork" operation. Subsequent to that, updates to your forked repo are indirect: you pull from upstream into your cloned repo, to bring it up to date, then (if you wish), you push those changes into your forked repo.

Some additional references:

  • Fork A Repo – When a repository is cloned, it has a default remote called origin that points to your fork on GitHub, not the original repository it was forked from. To keep track of the original repository, you need to add another remote named upstream.
  • Stash 2.4: Forking in the Enterprise – In Stash, clicking the ‘Fork’ button on a repository creates a copy that is tracked by Stash and modified independently of the original repository, insulating code from the original repository to unwanted changes or errors.
  • Git Branching and Forking in the Enterprise: Why Fork? – In recent DVCS terminology a fork is a remote, server-side copy of a repository, distinct from the original. A clone is not a fork; a clone is a local copy of some remote repository.
  • Clone vs. Fork – if you want to make changes to any of its cookbooks, you will need to fork the repository, which creates an editable copy of the entire repository (including all of its branches and commits) in your own source control management (e.g. GitHub) account. Later, if you want to contribute back to the original project, you can make a pull request to the owner of the original cookbook repository so that your submitted changes can be merged into the main branch.
Categories: FLOSS Project Planets