Planet Debian

Planet Debian - http://planet.debian.org/

Matthias Klumpp: AppStream/DEP-11 Debian progress

Sat, 2014-08-16 09:50

There hasn’t been a progress report on DEP-11 for some time, but that doesn’t mean no work has been going on.

DEP-11 is Debian’s implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While AppStream was initially only about applications, DEP-11 was designed with a larger scope: to collect data about libraries, binaries and things like Python modules. Since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference that DEP-11 metadata is described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn’t like XML (which, as opposed to YAML, is also not used anywhere else in Debian). But this doesn’t mean that people will have to deal with the YAML file format: the libappstream library will just take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML stuff. Richard's libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.
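
To give a rough feel for the format, a component entry in the YAML output could look something like this (a hand-written illustration only; the field names are my approximation rather than quotes from the spec, so see the example files linked at the end of this post for the real format):

Type: desktop-app
ID: gnome-software.desktop
Name:
  C: Software
Summary:
  C: An application manager for GNOME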

So, what has been done so far? Over the past months, my Google Summer of Code student, Abhishek Bhattacharjee, was working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as “Components-<arch>.yml.gz” files in the Debian repositories. Dak will also produce an application icon cache and a screenshots repository. During the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries) – these things can easily be implemented later.

The remaining steps will be to polish the code and make it merge-ready for Debian’s dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on demand on systems where it is useful (which is currently mostly desktop systems) – if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian db. This will lead to the removal of the much-hated (on the ftpmasters’ and maintainers’ side) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata being available, it would be cool to add support for it to “specialized package managers”, like Python’s pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, if possible. This should ultimately lead to less code duplication on distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can easily be used, if possible. This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be options to search the software repos for tools handling specific MIME types (in case you can’t open a file), smart software centers installing missing firmware, and automatic suggestions for developers on which software they need to install in order to build a specific software package. The data also allows us to match software across distributions; I will have some news about that soon (not sure how soon though, as I am currently in thesis-writing mode, and therefore don't have that much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize – but we are on a good way.

So, if you want some more information about my student’s awesome work, you can read his blog post about it. He will also be at DebConf14 (Portland). (I can’t make it this time, but I surely won’t miss the next DebConf.)

Sadly, I only see a very small chance to have the basic DEP-11 stuff land in time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitely have it in Jessie+1.

A small example of what this data will look like can be found here – a larger, actual file is available here. Any questions and feedback are highly appreciated.


Bits from Debian: Debian turns 21!

Sat, 2014-08-16 04:45

Today is Debian's 21st anniversary. Plenty of cities are celebrating Debian Day. If you are not close to any of those cities, there's still time for you to organize a little celebration!

Happy 21st birthday Debian!


Paul Tagliamonte: PyGotham 2014

Fri, 2014-08-15 13:54

I’ll be there this year!

The talks look amazing, and the event looks really well organized! The schedule has a bunch of talks I want to hit; I hope they’re recorded so I can watch the rest later!

If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.


Aurelien Jarno: Intel about to disable TSX instructions?

Fri, 2014-08-15 11:02

Last time I changed my desktop computer I bought a CPU from the Intel Haswell family, the one available on the market at that time. I carefully selected the CPU to make sure it supports as many instruction extensions as possible in this family (Intel likes segmentation; even high-end CPUs like the Core i7-4770k do not support all possible instructions). I ended up choosing the Core i7-4771 as it supports the “Transactional Synchronization Extensions” (Intel TSX) instructions, which provide transactional memory support. Support for it has recently been added to the GNU libc, and has been activated in Debian. By choosing this CPU, I wanted to be sure that I can debug this support in case of a bug report, like for example in bug#751147.

Recently some computing websites started to mention that the TSX instructions are buggy on the Xeon E3 v3 family (and likely on the Core i7-4771, as they share the same silicon and stepping), quoting this Intel document. Indeed one can read on page 49:

HSW136. Software Using Intel TSX May Result in Unpredictable System Behavior

Problem: Under a complex set of internal timing conditions and system events, software using the Intel TSX (Transactional Synchronization Extensions) instructions may result in unpredictable system behavior.
Implication: This erratum may result in unpredictable system behavior.
Workaround: It is possible for the BIOS to contain a workaround for this erratum.

And later on page 51:

Due to Erratum HSW136, TSX instructions are disabled and are only supported for software development. See your Intel representative for details.

The same websites report that Intel is going to disable the TSX instructions via a microcode update. I hope it won’t be the case and that they will be able to find a microcode fix instead. Otherwise it would mean I will have to upgrade my desktop computer earlier than expected. It’s a bit expensive to upgrade it every year, and that’s the reason why I skipped the Ivy Bridge generation, which didn’t bring a lot from the instruction-set point of view. Alternatively I can also skip microcode and BIOS updates, in the hope I won’t need another fix from them at some point.


Steinar H. Gunderson: Blenovo part III

Fri, 2014-08-15 08:04

I just had to add this to the saga:

I got an email from Lenovo Germany today, saying they couldn't reach me (and that the case would be closed after two days if I didn't contact them back). I last sent them the documents they asked for on July 3rd.

I am speechless.

Update, Aug 19: They actually called again today (not closing the case), saying that they had received the required documents and could repair my laptop under the ThinkPad Protection Program. I told him in very short terms what had happened (that Lenovo Norway had needed seven minutes to do what they needed three months for) and that this was the worst customer service experience I'd ever had, and asked them to close the case. He got a bit meek.


Steve Kemp: A tale of two products

Fri, 2014-08-15 07:14

This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.

I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said, sometimes things just seem sufficiently "cool" that they get bought without too much thought.

This entry is about two things.

A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.

When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up.

So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.

These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.

Due to the shiny-white walls of the room they're all you need in the dark.

By contrast my second purchase was a mistake - the Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want - being able to power:

  • Our TV.
  • Our Sky-box.
  • Our DVD player.

The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.

I had to resort to installing Windows in a virtual machine to make it run:

# Get the Bus/ID for the USB device
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | tr -d 0)
id=$(lsusb | grep -i Harmony | awk '{print $4}' | tr -d 0:)

# pass to kvm
kvm -localtime .. -usb -device usb-host,hostbus=$bus,hostaddr=$id ..

That allows the device to be passed through to Windows, though you'll later have to jump onto the Qemu console to re-add the device, as the software disconnects and reconnects it at random times and the bus changes. Sigh.
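
For reference, re-adding the device from the QEMU monitor looks roughly like this (a sketch only; it assumes the device was originally given id=harmony on the command line, and the bus/address values are hypothetical):

(qemu) device_del harmony
(qemu) device_add usb-host,hostbus=1,hostaddr=4,id=harmony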

I guess I can pretend it works, and it has cut down on the number of remotes sat on our table, but the overwhelmingly negative setup and configuration process has really soured me on it.

There is a Linux application which will take a configuration file and squirt it onto the device when attached via a USB cable. This software, which I found during research prior to buying the remote, is useful but not as much as I'd expected. Why? Well, the software lets you upload the config file, but to get a config file you must fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.

(Apparently there is MacOS software too, I don't use macs. *shrugs*)

In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.


Juliana Louback: JSCommunicator 2.0 (Beta) is Live!

Thu, 2014-08-14 18:06

This is the last week of Google Summer of Code 2014 - all good things must come to an end. To wrap things up, I’ve merged all my work on JSCommunicator into a new version with all the added features. You can now demo the new and improved (or at least so I hope) JSCommunicator on rtc.debian.org!

JSCommunicator 2.0 has an assortment of new add-ons; the most important new features are the Instant Messaging component and the internationalization support.

The UI has been reorganized, but we are currently not using a skin for the color scheme - I will be posting about that in a bit. The idea is to have a more neutral look that can be easily customized and integrated with other web apps.

A chat session is automatically opened when you begin a call with someone - unless you already started a chat session with said someone. Sound alerts for new incoming messages are optional in the config file; visual alerts occur when an inactive chat tab receives a new message. Future work includes multiple-user chat sessions and adapting the layout to a large number of chat tabs. Currently it only handles 6. (Should I allow more? Who chats with more than 6 people at once? 14-year-old me would, but now I just can’t handle that. Anyway, I welcome advice on how to go about this. Should we do infinite tabs or, if not, what’s the cut-off?)

About internationalization, I’m uber proud to say we currently run in 6 languages! The 6 are English (default), Spanish, French, Portuguese, Hebrew and German. One thing I must mention is that since I added new features to JSCommunicator, some of them don’t have translations yet. I took care of the Portuguese translation and Yehuda Korotkin quickly turned in the Hebrew translation, but we are still missing updates for Spanish, French and German. If you can contribute, please do. There are about 10 new labels to translate; you can fix the issue here. Or if you’re short on time, shoot me an email with the translation for what’s on the right side of the ‘=’:

welcome = Welcome,

call = Call

chat = Chat

enter_contact = Enter contact

type_to_chat = type to chat…

start_chat = start chat

me = me

logout = Logout

no_contact = Please enter a contact.

remember_me = Remember me

I’ll merge it myself but I’ll be sure to add you to the authors list.


Gregor Herrmann: RC bugs 2014/13 - 2014/33

Thu, 2014-08-14 15:32

perl 5.20 got uploaded to debian unstable a few minutes ago; be prepared for some glitches when upgrading sid machines/chroots in the next days, while all 557 reverse dependencies are rebuilt via binNMUs.

how does this relate to this blog post's title? it does, since during the last weeks I was mostly trying to help with the preparation of this transition. & we managed to fix quite a few bugs while they were not bumped to serious yet, otherwise the list below would be a bit longer :)

anyway, here are the RC bugs I've worked on in the last 20 or so weeks:

  • #711614 – src:libscriptalicious-perl: "libscriptalicious-perl: FTBFS with perl 5.18: test hang"
    upload new upstream release (pkg-perl)
  • #711616 – src:libtest-refcount-perl: "libtest-refcount-perl: FTBFS with perl 5.18: test failures"
    build-depend on fixed version (pkg-perl)
  • #719835 – libdevel-findref-perl: "libdevel-findref-perl: crash in XS_Devel__FindRef_find_ on Perl 5.18"
    upload new upstream release (pkg-perl)
  • #720021 – src:libhtml-template-dumper-perl: "libhtml-template-dumper-perl: FTBFS with perl 5.18: test failures"
    mark fragile test as TODO (pkg-perl)
  • #720271 – src:libnet-jabber-perl: "libnet-jabber-perl: FTBFS with perl 5.18: test failures"
    add patch to sort hash (pkg-perl)
  • #726948 – libmath-bigint-perl: "libmath-bigint-perl: uninstallable in sid - obsoleted by perl 5.18"
    upload new upstream release (pkg-perl)
  • #728634 – src:fusesmb: "fusesmb: FTBFS: configure: error: Please install libsmbclient header files."
    finally upload to DELAYED/2 with patch from November (using pkg-config)
  • #730936 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS: Tests errors"
    upload new upstream release (pkg-perl)
  • #737434 – src:libmojomojo-perl: "[src:libmojomojo-perl] Sourceless file (minified)"
    add unminified version of javascript file to source package (pkg-perl)
  • #739505 – libcgi-application-perl: "libcgi-application-perl: CVE-2013-7329: information disclosure flaw"
    upload with patch prepared by carnil (pkg-perl)
  • #739809 – src:libgtk2-perl: "libgtk2-perl: FTBFS: Test failure"
    add patch from Colin Watson (pkg-perl)
  • #743086 – src:libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: Tests failures"
    add patch from CPAN RT (pkg-perl)
  • #743099 – src:libclass-refresh-perl: "libclass-refresh-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #745792 – encfs: "[PATCH] Fixing FTBFS on i386 and kfreebsd-i386"
    use DEB_HOST_MULTIARCH to find libraries, upload to DELAYED/2
  • #746148 – src:redshift: "redshift: FTBFS: configure: error: missing dependencies for VidMode method"
    add missing build dependency, upload to DELAYED/2
  • #747771 – src:bti: "bti: FTBFS: configure: line 3571: syntax error near unexpected token `PKG_CHECK_MODULES'"
    add missing build dependency
  • #748996 – libgd-securityimage-perl: "libgd-securityimage-perl: should switch to use libgd-perl"
    update (build) dependency (pkg-perl)
  • #749509 – src:visualvm: "visualvm: FTBFS: debian/visualvm/...: Directory nonexistent"
    use override_dh_install-indep in debian/rules (pkg-java)
  • #749825 – src:libtime-parsedate-perl: "libtime-parsedate-perl: trying to overwrite '/usr/share/man/man3/Time::ParseDate.3pm.gz', which is also in package libtime-modules-perl 2011.0517-1"
    add missing Breaks/Replaces (pkg-perl)
  • #749938 – libnet-ssh2-perl: "libnet-ssh2-perl: FTBFS: libgcrypt20 vs. libcrypt11"
    upload package with fixed build-dep, prepared by Daniel Lintott (pkg-perl)
  • #750276 – libhttp-async-perl: "libhttp-async-perl: FTBFS: Tests failures"
    upload new upstream release prepared by Daniel Lintott (pkg-perl)
  • #750283 – src:xacobeo: "xacobeo: FTBFS: Tests failures when network is accessible"
    add missing build dependency (pkg-perl)
  • #750305 – src:libmoosex-app-cmd-perl: "libmoosex-app-cmd-perl: FTBFS: Tests failures"
    add patch to fix test regexps (pkg-perl)
  • #750325 – src:libtemplate-plugin-latex-perl: "libtemplate-plugin-latex-perl: FTBFS: Tests failures"
    upload new upstream releases prepared by Robert James Clay (pkg-perl)
  • #750341 – src:cpanminus: "cpanminus: FTBFS: Trying to write outside builddir"
    set HOME for tests (pkg-perl)
  • #750564 – obexftp: "missing license in debian/copyright"
    add missing license to debian/copyright, QA upload
  • #750770 – libsereal-decoder-perl: "libsereal-decoder-perl: FTBFS on various architectures"
    upload new upstream development release (pkg-perl)
  • #751044 – packaging-tutorial: "packaging-tutorial: FTBFS - File `bxcjkjatype.sty' not found."
    send a patch (updated build-depends) to the BTS
  • #751563 – src:tuxguitar: "tuxguitar: depends on xulrunner which is no more"
    do some triaging (pkg-java)
  • #752171 – src:pcp: "pcp: Build depends on autoconf"
    upload NMU prepared by Xilin Sun, adding missing build dependency
  • #752347 – highlight: "highlight: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752349 – src:nflog-bindings: "nflog-bindings: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752469 – clearsilver: "clearsilver: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752470 – ekg2: "ekg2: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752472 – fwknop: "fwknop: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules (see the sketch after this list), upload to DELAYED/5
  • #752476 – handlersocket: "handlersocket: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, QA upload
  • #752704 – lcgdm: "lcgdm: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752705 – libbuffy-bindings: "libbuffy-bindings: hardcodes /usr/lib/perl5"
    pass value of $Config{vendorarch} to dh_install in debian/rules, upload to DELAYED/5
  • #752710 – liboping: "liboping: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752714 – lockdev: "lockdev: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752716 – ming: "ming: hardcodes /usr/lib/perl5"
    NMU with the minimal changes from the next release
  • #752799 – obexftp: "obexftp: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752810 – src:razor: "razor: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752812 – src:redland-bindings: "redland-bindings: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752815 – src:stfl: "stfl: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752924 – libdbix-class-perl: "libdbix-class-perl: FTBFS: Failed test 'Cascading delete on Ordered has_many works'"
    add patch from upstream git (pkg-perl)
  • #752928 – libencode-arabic-perl: "libencode-arabic-perl: FTBFS with newer Encode: Can't locate object method "export_to_level" via package "Encode""
    add patch from Niko Tyni (pkg-perl)
  • #752982 – src:libwebservice-musicbrainz-perl: "libwebservice-musicbrainz-perl: hardcodes /usr/lib/perl5"
    pass create_packlist=0 to Build.PL, upload to DELAYED/5
  • #752988 – libnet-dns-resolver-programmable-perl: "libnet-dns-resolver-programmable-perl: broken with newer Net::DNS"
    add patch from CPAN RT (pkg-perl)
  • #752989 – libio-callback-perl: "libio-callback-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753026 – libje-perl: "libje-perl: FTBFS with Perl 5.20: test failures"
    upload new upstream release (pkg-perl)
  • #753038 – libplack-test-anyevent-perl: "libplack-test-anyevent-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753057 – libinline-java-perl: "libinline-java-perl: broken symlinks when built under perl 5.20"
    fix symlinks to differing paths in perl 5.18 vs. 5.20 (pkg-perl)
  • #753144 – src:net-snmp: "net-snmp: FTBFS on kfreebsd-amd64 - 'struct kinfo_proc' has no member named 'kp_eproc'"
    add patch from Niko Tyni, upload to DELAYED/5, later rescheduled to 0-day with maintainer's approval
  • #753214 – src:license-reconcile: "license-reconcile: FTBFS: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #753237 – src:libcgi-application-plugin-ajaxupload-perl: "libcgi-application-plugin-ajaxupload-perl: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #754125 – libimager-perl: "libimager-perl: FTBFS on s390x"
    close bug, package builds again after libpng upload (pkg-perl)
  • #754691 – src:libio-interface-perl: "libio-interface-perl: FTBFS on kfreebsd-*: invalid storage class for function 'XS_IO__Interface_if_flags'"
    add patch which adds a missing } (pkg-perl)
  • #754993 – libdevice-usb-perl: "libdevice-usb-perl: FTBFS with newer Inline(::C)"
    workaround an Inline bug in debian/rules
  • #755028 – src:libtk-tablematrix-perl: "libtk-tablematrix-perl: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #755324 – src:pinto: "pinto: FTBFS: Tests failures"
    add patch to "use" required module (pkg-perl)
  • #755332 – src:libdevel-nytprof-perl: "libdevel-nytprof-perl: FTBFS: Tests failures"
    mark failing tests temporarily as TODO (pkg-perl)
  • #757754 – obexftp: "obexftp: FTBFS: format not a string literal and no format arguments [-Werror=format-security]"
    add patch with format argument, QA upload
  • #757774 – src:libwx-glcanvas-perl: "libwx-glcanvas-perl: hardcodes /usr/lib/perl5"
    build-depend on new libwx-perl (pkg-perl)
  • #757855 – libwx-perl: "libwx-perl: embeds exact wxWidgets version, needs stricter dependencies"
    use virtual package provided by alien-wxwidgets (pkg-perl)
  • #758127 – src:libwx-perl: "libwx-perl: FTBFS on arm*"
    report and try to debug new build failure (pkg-perl)
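
as referenced in the fwknop entry above, the $Config{vendorarch} fix for the "hardcodes /usr/lib/perl5" bugs boils down to a one-liner in debian/rules (a sketch, with an illustrative variable name):

PERL_VENDORARCH := $(shell perl -MConfig -e 'print $$Config{vendorarch}')
# then install into $(PERL_VENDORARCH) instead of a hardcoded /usr/lib/perl5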

p.s.: & now, go & enjoy the new perl 5.20 features :)


Hideki Yamane: New Debian T-shirts (2014 summer)

Thu, 2014-08-14 03:56

Every 4 or 5 years, Jun Nogata makes Debian T-shirts, and today I got the 2014 summer version (thanks! :-). It looks good.


I'll take 2 or 3 Japanese Large-size ones to DebConf14 in Portland. Please let me know if you want one. (Update: all T-shirts are reserved now, thanks)


Daniel Pocock: Bug tracker or trouble ticket system?

Thu, 2014-08-14 00:04

One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.

There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.

Support request or bug?

One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.

Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".

At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team but there is overhead in running multiple systems and having staff doing cut and paste.

Will people use it?

Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task and the tool asks them to answer twenty questions about their system and application version, submit log files, and meet other requirements, then they won't use it at all and may just revert to sending emails or making phone calls.

Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes attributes that a ticket in the developers' queue should have.

Beyond Perl

Some of the most well known systems in this space are Bugzilla, Request Tracker and OTRS. All of these solutions are developed in Perl.

These days, Python, JavaScript/Node.JS and Java have taken more market share and Perl is chosen less frequently for new projects. Perl skills are declining and younger developers have usually encountered Python as their main scripting language at university.

My personal perspective is that this hinders the ability of Perl projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.

Bugzilla has fallen out of the Debian and Ubuntu distributions after squeeze due to its complexity. In contrast, Fedora carries the Bugzilla packages and also uses it as their main bug tracker.

Evaluation

I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.

Some of the trends that appear:

  • Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
  • A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
  • Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
  • Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.

Questions

This leaves me with some of the following questions:

  • Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
  • For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
  • Which are more extendable with modern programming practices, such as Python scripting and using Git?
  • Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
  • Which of them are suitable for the public internet and which are only considered suitable for private access?

Ian Donnelly: The New Deal: ucf Integration

Wed, 2014-08-13 14:29

 

Hi Everybody,

A few days ago I posted an entry on this blog called dpkg Woes where I explained that, due to a lack of response, we were abandoning our plan to patch dpkg for my Google Summer of Code project, and that we had a new solution. Well, today I would like to tell you about that solution. Instead of patching dpkg, which would take a long time and seemed like it would never make it upstream, we have added some new features to ucf which allow my Google Summer of Code project to be realized.

If you don’t know, ucf, which stands for Update Configuration File, is a popular Debian package whose goal is to “preserve user changes to config files.” It is meant to act as an alternative to marking a configuration file as a conffile on systems that use dpkg. Instead, package maintainers can use ucf to handle these files in a conffile-like way. Whereas conffiles must work on all systems, because they are shipped with the package, configuration files that use ucf can be handled by maintainer scripts and can vary between systems. ucf exists as a script that allows conffile-like handling of non-conffile configuration files and allows much more flexibility than dpkg’s conffile system. In fact, ucf even includes an option to perform a three-way merge on files it manages; it currently only uses diff3 for the task though.

As you can see, ucf has a goal that, while different from ours, seems naturally compatible with our goal of automatic conffile merging. Obviously, since ucf is a different tool than dpkg, we had to re-think how we were going to integrate with it. Luckily, integration with ucf proved to be much simpler than integration with dpkg. All we had to do was add a generic hook to attempt a three-way merge using any tool created for the task, such as Elektra and kdb merge. Felix submitted a pull request with the exact code almost a week ago, and we have talked with Manoj Srivastava, the developer of ucf, who seemed to really like the idea. The only changes we made are to add an option for a three-way merge command; if one is present, the merge is attempted using the specified command. It’s all pretty simple really.

Since we decided to include a generic hook for a three-way merge command instead of an Elektra-specific one (which would be less open and would create a dependency on Elektra), we also had to add functionality to Elektra to work with this hook. We ended up writing a new script, called elektra-merge which is now included in our repository. All this script does is act as a liaison between the ucf --three-way-merge-command option and Elektra itself. The script automatically mounts the correct files for theirs and base and dest using the new remount command.

Since the only parameters that are passed to the ucf merge command are the paths of ours, theirs, base and result, we were missing vital information on how to mount these files. Our solution was to create the remount command, which mirrors the backend configuration of an existing mountpoint to create a new mountpoint using a new file. So if ours is mounted to system/ours using ini, kdb remount /etc/theirs system/theirs system/ours will mount /etc/theirs to system/theirs using the same backend as ours. Since theirs, base, and result should all have the same backend as ours, we can use remount to mount these files even if all we know is their paths.

Now, package maintainers can edit their scripts to utilize this new feature: if they want, they can specify a command for ucf to use when merging files during package upgrades. I will soon be posting a tutorial on how to integrate this feature into a package and how to use Elektra in your scripts in order to allow automatic three-way merges during package upgrades. I will post a link to the tutorial here once it is published.
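
For illustration, a maintainer script might then end up calling ucf roughly like this (a sketch using the option name described above; the package name and paths are hypothetical):

ucf --three-way --three-way-merge-command elektra-merge \
    /usr/share/mypackage/mypackage.conf /etc/mypackage.conf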

Sincerely,
Ian S. Donnelly


Richard Hartmann: Slave New World

Wed, 2014-08-13 13:39

Ubiquitous surveillance is a given these days, and I am not commenting on the crime or the level of stupidity of the murderer, but the fact that the iPhone even logs when you turn your flashlight on and off is scary.

Very, very scary in all its myriad of implications.

But at least it's not as if both your phone and your carrier wouldn't log your every move anyway.

Because Enhanced 911 and its ability to silently tell the authorities your position was not enough :)


Daniel Pocock: WebRTC in CRM/ERP solutions at xTupleCon 2014

Wed, 2014-08-13 13:29

In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.

Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.

xTupleCon discounts for developers

xTuple has advised that they will offer a discount to other open source developers and contributors who wish to attend any part of their event. For details, please contact xTuple directly through this form. Please note it is getting close to their deadline for registration and discounted hotel bookings.

Potential WebRTC / JavaScript meet-up in Norfolk area

For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.


Riku Voipio: Booting Linaro ARMv8 OE images with Qemu

Wed, 2014-08-13 09:36

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#

Quick benchmarking with the age-old ByteMark nbench:

Index     Qemu    Foundation   Host
Memory    4.294   0.712        44.534
Integer   6.270   0.686        41.983
Float     1.463   1.065        59.528
Baseline (LINUX): AMD K6/233*

Qemu is up to 8x faster than the Foundation model on Integer, but only 50% faster on Float. Meanwhile, the host PC is 7-40x faster executing native instructions than emulating ARMv8.

Ian Donnelly: How-To: kdb import

Tue, 2014-08-12 15:29

Hi everybody,

Today I wanted to go over what I think is a very useful command in the kdb tool: kdb import. As you know, the kdb tool allows users to interact with the Elektra Key Database (KDB) via the command line. Here I would like to explain the import function of kdb.

The command to use kdb import is:

kdb import [options] destination [format]

In this command, destination is the key below which the imported Keys will be stored. For instance, kdb import system/imported would store all the imported keys below system/imported. This command takes Keys from stdin to store them in KDB. Typically, this command is used with a pipe to read in the Keys from a file.

The format argument you see above can be a very powerful option to use with kdb import. The format argument allows a user to specify which plug-in is used to import the Keys into the Key Database. The user can specify any storage plug-in to serve as the format for the Keys to be imported. For instance, if a user wanted to import a /etc/hosts file into KDB without mounting it, they could use the command cat /etc/hosts | kdb import system/hosts hosts. This command would essentially copy the current hosts file into KDB, like mounting it. Unlike mounting it, changes to the Keys would not be reflected in the hosts file and vice versa.

If no format is specified, the format dump will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of the KeySet.

It is very important to note that, by design, the dump format does not rename keys. If a user exports a KeySet using dump with a command such as kdb export system/backup > backup.ecf, they can only import that KeySet back into system/backup, using a command like cat backup.ecf | kdb import system/backup.

The kdb import command only takes one special option:

-s --strategy

which is used to specify a strategy to use if Keys already exist in the specified destination.
The current list of strategies are:

preserve: any keys already in the destination will not be overwritten
overwrite: any keys already in the destination will be overwritten if a new key has the same name
cut: all keys already in the destination will be removed, then new keys will be imported

If no strategy is specified, the command defaults to the preserve strategy so as not to be destructive to any previous keys.
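
For instance, to let incoming keys win instead, the strategy can be passed explicitly (a hypothetical invocation built from the options above):

cat backup.ecf | kdb import -s overwrite system/backup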

An example of using the kdb import command is as follows:

cat backup.ecf | kdb import system/backup

This command would import all keys stored in the file backup.ecf into the Key Database under system/backup.

In this example, backup.ecf was exported from the KeySet using the dump format by using the command:
kdb export system/backup > backup.ecf

backup.ecf contains all the information about the keys below system/backup:

$ cat backup.ecf
kdbOpen 1
ksNew 3
keyNew 19 0
system/backup/key1
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key2
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key3
keyMeta 7 1
binary
keyEnd
ksEnd

Before the import command, system/backup does not exist and no keys are contained there.
After the import command, running the command kdb ls system/backup prints:

system/backup/key1
system/backup/key2
system/backup/key3

As you can see, the kdb import command is a very useful tool included as part of Elektra. I also wrote a tutorial on the kdb export command. Please go read that as well because those two commands go hand in hand and allow some very powerful usage of Elektra.

Sincerely,
Ian S. Donnelly


Ian Donnelly: How-To: kdb export

Tue, 2014-08-12 15:29

Hi everybody,

I recently posted a tutorial on the kdb import command. Well, I also wanted to go over its sibling function, kdb export. These two commands work very similarly, but there are some differences.

First of all, the command to use kdb export is:

kdb export [options] source [format]

In this command, source is the root key below which Keys will be exported. For instance, kdb export system/export would export all the keys below system/export. Additionally, this command exports keys under the system/elektra directory by default. It does this so that information about the keys stored under this directory will be included if the Keys are later imported into an Elektra Key Database. This command writes the exported Keys to stdout; typically, the export command is used with redirection to write the Keys to a file.

As we discussed already, the format argument can be a very powerful option to use with kdb export. Just like with kdb import the format argument allows a user to specify which plug-in is used to export the Keys from the Key Database. The user can specify any storage plug-in to serve as the format for the exported Keys. For instance, if a user mounted their hosts file to system/hosts using kdb mount /etc/hosts system/hosts hosts they would be able to export these Keys using the hosts format by using the command kdb export system/hosts hosts > hosts.ecf. This command would essentially create a backup of their current /etc/hosts file in a valid format for /etc/hosts.

If no format is specified, the format dump will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of the KeySet.

The kdb export command also takes one special option, but it’s different from the one for kdb import; that option is:

-E --without-elektra which omits the system/elektra directory of keys
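
For example, to export a subtree while omitting the system/elektra keys, the option is simply added to the command (a sketch; the output filename is illustrative):

kdb export -E system/backup > backup-without-elektra.ecf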

An example of using the kdb export command is as follows:

kdb export system/backup > backup.ecf

This command would export all keys stored under system/backup, along with relevant Keys in system/elektra, into a file called backup.ecf.

As you can see, the kdb export command is a very useful tool, just like its sibling, kdb import. If you haven’t yet, please go read the tutorial I wrote for kdb import, because these two commands are best used together and can enable some really great features of Elektra.

Sincerely,
Ian S. Donnelly


Cyril Brulebois: Mark a mail as read across maildirs

Mon, 2014-08-11 13:20

Problem

Discussions are sometimes started by mailing a few different mailing lists so that all relevant parties have a chance to be aware of a new topic. It’s all nice when people can agree on a single venue to send their replies to, but that doesn’t happen every time.

Case in point, I’m getting 5 copies of a bunch of mails, through the following debian-* lists: accessibility, boot, cd, devel, project.

Needless to say, reading a given mail, or marking it as read, once per maildir rapidly becomes a burden.

Solution

I know some people use a duplicate killer at procmail time (hello gregor) but I’d rather keep all mails in their relevant maildirs.

So here’s mark-read-everywhere.pl which seems to do the job just fine for my particular setup: all maildirs below ~/mails/* with the usual cur, new, tmp subdirectories.

Basically, given a mail piped from mutt, it computes a hash on various headers, looks at all new mails (new subdirectories), and marks the matching ones as read (moves them to the nearby cur subdirectories, and changes the suffix from , to ,S).
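
The same idea fits in a few lines of shell (a rough sketch rather than the actual Perl script: it matches on Message-ID only, uses procmail's formail, and assumes the filenames in new/ end with the trailing comma mentioned above):

msgid=$(formail -zx Message-ID:)
for f in ~/mails/*/new/*; do
    grep -qF "$msgid" "$f" || continue
    maildir=${f%/new/*}
    mv "$f" "$maildir/cur/${f##*/}S"   # append S: the trailing "," becomes ",S"
done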

Mutt key binding (where X is short for cross post):

macro index X "<pipe-message>~/bin/mark-as-read-everywhere.pl<enter>"

This isn’t pretty or bulletproof but it already started saving time!

Now to wonder: was it worth the time to automate that?


Cyril Brulebois: How to serve Perl source files

Mon, 2014-08-11 12:45

I noticed a while ago a Perl script file included on my blog wasn’t served properly, since the charset wasn’t announced and web browsers didn’t display it properly. The received file was still valid UTF-8 (hello, little © character), at least!

First, wrong intuition

Reading Apache’s /etc/apache2/conf.d/charset, it looks like the following directive might help:

AddDefaultCharset UTF-8

but comments there suggest reading the documentation! And indeed that alone isn’t sufficient since this would only affect text/plain and text/html. The above directive would have to be combined with something like this in /etc/apache2/mods-enabled/mime.conf:

AddType text/plain .pl

Real solution

To avoid any side effects on other file types, the easiest way forward seems to avoid setting AddDefaultCharset and to associate the UTF-8 charset with .pl files instead, keeping the text/x-perl MIME type, with this single directive (again in /etc/apache2/mods-enabled/mime.conf):

AddCharset UTF-8 .pl

Looking at response headers (wget -d) we’re moving from:

Content-Type: text/x-perl

to:

Content-Type: text/x-perl; charset=utf-8

Conclusion

Nothing really interesting, or new. Just a small reminder that tweaking options too hastily is sometimes a bad idea. In other news, another Perl script is coming up soon. :)


Juliana Louback: JSCommunicator - Setup and Architecture

Mon, 2014-08-11 10:06

Preface

During Google Summer of Code 2014, I got to work on the Debian WebRTC portal under the mentorship of Daniel Pocock. Now I had every intention of meticulously documenting my progress at each step of development in this blog, but I was a bit late in getting the blog up and running. I’ll now be publishing a series of posts to recap all I’ve done during GSoC. Better late than never.

Intro

JSCommunicator is a SIP communication tool developed in HTML and JavaScript. The code was designed to make integrating JSCommunicator with a website or web app as simple as possible. It’s quite easy, really. However, I do think a more detailed explanation on how to set things up and a guide to the JSCommunicator architecture could be of use, particularly for those wanting to modify the code in any way.

Setup

To begin, please fork the JSCommunicator repo.

If you are new to git, feel free to follow the steps in section “Setup” and “Clone” in this post.

If you read the README file (which you always should), you’ll see that JSCommunicator needs a SIP proxy that supports SIP over WebSockets transport. Some options are Kamailio and repro.

I didn’t find a tutorial for Kamailio setup, but I did find one for repro setup. And bonus: here you have a great tutorial on how to set up and configure your SIP proxy AND your TURN server.

In your project’s root directory, you’ll see a file named config-sample.js. Make a copy of that file named config.js. The config-sample.js file has comments that are very instructive. In sum, the only thing you have to modify is the turn_servers property and the websocket property. In my case, debrtc.org was the domain registered for testing my project, so my config file has:

turn_servers: [ { server:"turn:debrtc.org" } ],

Note that unlike the sample, I didn’t set a username and password so SIP credentials will be used.

Now fill in the websocket property – here we use sip5060.net.

websocket: {
  servers: 'wss://ws.sip5060.net',
  connection_recovery_min_interval: 2,
  connection_recovery_max_interval: 30,
},

I’m pretty sure you can set the connection_recovery properties to whatever you like. Everything else is optional. If you set the user property, specifically display_name and uri, that will be used to fill in the Login form and takes preference over any ‘Remember me’ data. If you also set sip_auth_password, JSCommunicator will automatically log in.
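
For example, a user block that would pre-fill the Login form and trigger the automatic login could look like this (a sketch; the values are made up):

user: {
  display_name: 'Alice',
  uri: 'alice@debrtc.org',
  sip_auth_password: 'secret'
},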

All the other properties are for other optional functionalities and are well explained.

You’ll need some third-party JavaScript that is not included in the JSCommunicator git repo, namely jQuery version 1.4 or higher and ArbiterJS version 1.0. Download jQuery here and ArbiterJS here and place the .js files in your project’s root directory. Do make sure that you are including the correct filename in your html file. For example, in phone-dev.shtml, a file named jquery.js is included. The file you downloaded will likely have version numbers in its file name, so rename the downloaded file or change the content of src in your includes. This is uber trivial, but I’ve made the mistake several times.

You’ll also need JsSIP.js which can be downloaded here. Same naming care must be taken as is the case for jQuery and ArbiterJS. The recently added Instant Messaging component and some of the new features need jQuery-UI - so far version 1.11.* is known to work. From the downloaded .zip all you need is the jquery-ui-...js file and the jquery-ui-...css file, also to be placed in the project’s root directory. If you’ll be using the internationalization support you’ll also need jquery.i18n.properties.js.

To try out JSCommunicator, deploy the website locally by copying your project directory to the apache document root directory (provided you are using apache, which is a good idea). You’ll likely have to restart your server before this works. Now, the demo .shtml pages only have a header with all the necessary includes, then a Server Side Include for the body of the page, with all the JSCommunicator html. The html content is in the file jscommunicator.inc. You can enable SSI on apache, OR you can simply copy and paste the content of jscommunicator.inc into phone-dev.shtml. Start up apache, open a browser and navigate to localhost/jscommunicator/phone-dev.shtml and you should see:

Actually, pending updates to JSCommunicator, you should see a brand new UI! But all the core stuff will be the same.
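
As an aside, enabling SSI on a stock Debian apache2 setup, as mentioned above, looks roughly like this (a sketch; the exact directives depend on your Apache version and configuration layout):

a2enmod include
# then, in the relevant <Directory> section:
#   Options +Includes
#   AddOutputFilter INCLUDES .shtml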

Architecture

Disclaimer: I’m explaining my view of the JSCommunicator architecture, which currently may not be 100% correct. But so far it’s been enough for me to make my modifications and additions to the code, so it could be of use. Once I get a JSCommunicator Founding Father’s stamp of approval, I’ll be sure to confirm the accuracy.

Now to explain how the JSCommunicator code interacts, the use of each code ‘item’ is described, ignoring all the html and css which will vary according to how you choose to use JSCommunicator. I’m also not going to explain jQuery which is a dependency but not specific to WebRTC. The core JSCommunicator code is the following:

  • config.js
  • jssip-helper.js
  • parseuri.js
  • webrtc-check.js
  • JSCommUI.js
  • JSCommManager.js
  • make-release
  • init.js
  • Arbiter.js
  • JsSIP.js

Each of these files will be presented in what I hope is an intuitive order.

  • config.js - As expected, this file contains your custom configuration specifications, i.e. the servers being used to direct traffic, authentication credentials, and a series of properties to enable/disable optional functionalities in JSCommunicator.

The next three files could be considered ‘utils’:

  • jssip-helper.js - This will load the data from config.js necessary to run JSCommunicator, such as configurations relating to servers, websockets, connection specifications (timeout, recovery intervals), and user credentials. Properties for optional features are ignored of course.

  • parseuri.js - Contains a function to segregate data from a URI.

  • webrtc-check.js - Verifies whether the browser is WebRTC compatible by checking if it’s possible to enable camera and microphone access.

These two are where the magic happens:

  • JSCommUI.js - Responsible for the UI component, controlling what becomes visible to the user, audio effects, client-side error handling, and gathering the data that will be fed to JSCommManager.js.

  • JSCommManager.js - Initializes the SIP User Agent to manage the session including (but not limited to) beginning/terminating the session, sending/receiving SIP messages and calls, and signaling the state of the session and other important notifications such as incoming calls, messages received, etc.

Now for some extras:

  • make-release - Combines the main .js files into one big file. It takes jssip-helper.js, parseuri.js, webrtc-check.js, JSCommUI.js and JSCommManager.js and spits out a new file, JSComm.js, with all that content. Now you understand why phone-dev.shtml includes each of the 5 files mentioned above whereas phone.shtml includes only JSComm.js, which didn’t exist until you ran make-release. That confused me a bit.

  • init.js - On page load, calls JSCommManager.js’ init function, which eventually calls JSCommUI.js’ init function. In other words, starts up the JSCommunicator app. I guess it’s only used to show how to start up the app. This could be done directly on the page you’re embedding JSCommunicator from. So maybe not entirely needed.

Third party code:

  • Arbiter.js - JavaScript implementation of the publish/subscribe pattern, written by Matt Kruse. In JSCommunicator it’s used in JSCommManager.js to publish signals that direct the app’s behavior. For example, JSCommManager will publish a message indicating that the user received a call from a certain URI. In event-demo.js we subscribe to this kind of message, and when said message is received, an action can be performed, such as adding to the app’s call history. Very cool.

  • JsSIP.js - Implements the SIP WebSocket transport in JavaScript. This ensures the transport of data is done in adherence to the WebSocket protocol. In JSCommManager.js we initialize a SIP User Agent based on the implementation in JsSIP.js. The User Agent will ‘translate’ all of the JSCommunicator actions into SIP WebSocket format. For example, when sending an IM, the JSCommunicator app will collect the essential data, such as the origin URI, destination URI and an IM message, while the User Agent is in charge of formatting the data so that it can be transported in a SIP message unit. A SIP message contains a lot more information than just the sender, receiver and message. Of course, a lot of the info in a SIP message is irrelevant to the user, and in JSCommUI.js we filter through all that data and only display what the user needs to see.

Here’s a diagram of sorts to help you visualize how the code interacts:

In sum, 1 - JSCommUI.js handles what is displayed in the UI and feeds data to JSCommManager.js; 2 - JSCommManager.js actually does stuff, feeding data to be displayed to JSCommUI.js; 3 - JSCommManager.js calls functions from the three ‘utils’: parseuri.js, webrtc-check.js and jssip-helper.js, which organizes the data from config.js; 4 - JSCommManager.js initializes a SIP User Agent based on the implementation in JsSIP.js.

When making any changes to JSCommunicator, you will likely only be working with JSCommUI.js and JSCommManager.js.


Juliana Louback: JSCommunicator - Setup and Architecture

Mon, 2014-08-11 10:06

Preface

During Google Summer of Code 2014, I got to work on the Debian WebRTC portal under the mentorship of Daniel Pocock. Now I had every intention of meticulously documenting my progress at each step of development in this blog, but I was a bit late in getting the blog up and running. I’ll now be publishing a series of posts to recap all I’ve done during GSoC. Better late than never.

Intro

JSCommunicator is a SIP communication tool developed in HTML and JavaScript. The code was designed to make integrating JSCommunicator with a website or web app as simple as possible. It’s quite easy, really. However, I do think a more detailed explanation on how to set things up and a guide to the JSCommunicator architecture could be of use, particularly for those wanting to modify the code in any way.

Setup

To begin, please fork the JSCommunicator repo.

If you are new to git, feel free to follow the steps in the “Setup” and “Clone” sections of this post.

If you read the README file (which you always should), you’ll see that JSCommunicator needs a SIP proxy that supports SIP over WebSocket transport. Some options are:

  • Kamailio
  • repro

I didn’t find a tutorial for Kamailio setup, but I did find one for repro setup. As a bonus, here you have a great tutorial on how to set up and configure your SIP proxy AND your TURN server.

In your project’s root directory, you’ll see a file named config-sample.js. Make a copy of that file named config.js. The config-sample.js file has comments that are very instructive. In sum, the only things you have to modify are the turn_servers property and the websocket property. In my case, debrtc.org was the domain registered for testing my project, so my config file has:

turn_servers: [ { server:"turn:debrtc.org" } ],

Note that unlike the sample, I didn’t set a username and password, so SIP credentials will be used.
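
For reference, a turn_servers entry with credentials would look something like this (the username and password keys follow config-sample.js; the values are placeholders):

turn_servers: [ { server:"turn:debrtc.org", username:"alice", password:"secret" } ],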

Now fill in the websocket property – here we use sip5060.net.

websocket: {
  servers: 'wss://ws.sip5060.net',
  connection_recovery_min_interval: 2,
  connection_recovery_max_interval: 30,
},

I’m pretty sure you can set the connection_recovery properties to whatever you like. Everything else is optional. If you set the user property, specifically display_name and uri, it will be used to fill in the Login form and takes precedence over any ‘Remember me’ data. If you also set sip_auth_password, JSCommunicator will log you in automatically; see the sketch below.
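
For illustration, here is roughly what such a user block could look like. The property names are the ones mentioned above (from config-sample.js); the values are placeholders:

user: {
  display_name: 'Alice',
  uri: 'sip:alice@debrtc.org',
  sip_auth_password: 'secret'
},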

All the other properties are for other optional functionalities and are well explained.

You’ll need some third-party JavaScript that is not included in the JSCommunicator git repo, namely jQuery version 1.4 or higher and ArbiterJS version 1.0. Download jQuery here and ArbiterJS here and place the .js files in your project’s root directory. Do make sure that you are including the correct filename in your html file. For example, in phone-dev.shtml, a file named jquery.js is included. The file you downloaded will likely have version numbers in its file name, so rename the downloaded file or change the content of src in your includes. This is uber trivial, but I’ve made the mistake several times.

You’ll also need JsSIP.js, which can be downloaded here. The same naming care must be taken as for jQuery and ArbiterJS. The recently added Instant Messaging component and some of the new features need jQuery-UI - so far version 1.11.* is known to work. From the downloaded .zip all you need is the jquery-ui-...js file and the jquery-ui-...css file, also to be placed in the project’s root directory. If you’ll be using the internationalization support, you’ll also need jquery.i18n.properties.js.

To try out JSCommunicator, deploy the website locally by copying your project directory to the Apache document root (provided you are using Apache, which is a good idea). You’ll likely have to restart your server before this works. The demo .shtml pages only have a header with all the necessary includes, then a Server Side Include for the body of the page, with all the JSCommunicator html. The html content is in the file jscommunicator.inc. You can enable SSI on Apache, OR you can simply copy and paste the content of jscommunicator.inc into phone-dev.shtml. Start up Apache, open a browser and navigate to localhost/jscommunicator/phone-dev.shtml and you should see:

Actually, pending updates to JSCommunicator, you should see a brand new UI! But all the core stuff will be the same.

Architecture

Disclaimer: I’m explaining my view of the JSCommunicator architecture, which currently may not be 100% correct. But so far it’s been enough for me to make my modifications and additions to the code, so it could be of use. Once I get a JSCommunicator Founding Father’s stamp of approval, I’ll be sure to confirm the accuracy.

To explain how the JSCommunicator code interacts, I’ll describe the purpose of each code ‘item’, ignoring all the html and css, which will vary according to how you choose to use JSCommunicator. I’m also not going to explain jQuery, which is a dependency but not specific to WebRTC. The core JSCommunicator code is the following:

  • config.js
  • jssip-helper.js
  • parseuri.js
  • webrtc-check.js
  • JSCommUI.js
  • JSCommManager.js
  • make-release
  • init.js
  • Arbiter.js
  • JsSIP.js

Each of these files will be presented in what I hope is an intuitive order.

  • config.js - As expected, this file contains your custom configuration specifications, i.e. the servers being used to direct traffic, authentication credentials, and a series of properties to enable/disable optional functionalities in JSCommunicator.

The next three files could be considered ‘utils’:

  • jssip-helper.js - Loads the data from config.js necessary to run JSCommunicator, such as configurations relating to servers, websockets, connection specifications (timeout, recovery intervals), and user credentials. Properties for optional features are of course ignored.

  • parseuri.js - Contains a function to split a URI into its component parts (scheme, user, host and so on).

  • webrtc-check.js - Verifies whether the browser is WebRTC-compatible by checking whether camera and microphone access can be enabled. (A rough sketch of all three ‘utils’ follows this list.)
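
To make the three ‘utils’ more concrete, here is a rough sketch of the kind of thing each one does. None of this is the actual JSCommunicator code; the function names and the regex are invented for illustration:

// jssip-helper.js-style: collect the connection-related settings
// from config.js into a single object (hypothetical helper).
function getConnectionSettings(config) {
  return {
    ws_servers: config.websocket.servers,
    connection_recovery_min_interval: config.websocket.connection_recovery_min_interval,
    connection_recovery_max_interval: config.websocket.connection_recovery_max_interval
  };
}

// parseuri.js-style: split a URI into its component parts
// (hypothetical helper, much simpler than the real thing).
function splitUri(uri) {
  var m = uri.match(/^(?:([^:]+):)?(?:([^@]+)@)?([^:@]+)(?::(\d+))?$/);
  return m ? { scheme: m[1], user: m[2], host: m[3], port: m[4] } : null;
}
// splitUri('sip:alice@debrtc.org') -> { scheme: 'sip', user: 'alice', host: 'debrtc.org', port: undefined }

// webrtc-check.js-style: infer WebRTC support from the presence of a
// getUserMedia implementation (2014-era vendor prefixes).
function hasWebRTC() {
  return !!(navigator.getUserMedia ||
            navigator.webkitGetUserMedia ||
            navigator.mozGetUserMedia);
}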

These two are where the magic happens:

  • JSCommUI.js - Responsible for the UI component, controlling what becomes visible to the user, audio effects, client-side error handling, and gathering the data that will be fed to JSCommManager.js.

  • JSCommManager.js - Initializes the SIP User Agent to manage the session including (but not limited to) beginning/terminating the session, sending/receiving SIP messages and calls, and signaling the state of the session and other important notifications such as incoming calls, messages received, etc.
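
Purely to illustrate that division of labour (showCallInProgress() and call() are invented names, not the real JSCommunicator API):

// In JSCommManager.js: react to a session event, delegate the display.
session.on('accepted', function() {
  JSCommUI.showCallInProgress();              // hypothetical UI function
});

// In JSCommUI.js: gather the user's input, hand it to the manager.
$('#call-button').click(function() {
  JSCommManager.call($('#uri-field').val());  // hypothetical manager function
});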

Now for some extras:

  • make-release - Combines the main .js files into one big file. Gets jssip-helper.js, parseuri.js, webrtc-check.js, JSCommUI.js and JSCommManager.js and spits out a new file, JSComm.js, with all that content. Now you understand why phone-dev.shtml includes each of the 5 files mentioned above whereas phone.shtml includes only JSComm.js, which didn’t exist until you ran make-release. That confused me a bit.

  • init.js - On page load, calls JSCommManager.js’ init function, which eventually calls JSCommUI.js’ init function. In other words, it starts up the JSCommunicator app. I guess it’s only included to show how to start up the app; this could be done directly on the page you’re embedding JSCommunicator from, so it’s maybe not strictly needed. A minimal sketch follows.
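
An init.js-style bootstrap could be as simple as this (assuming jQuery is already loaded; the real file may differ):

// Start JSCommunicator once the DOM is ready.
$(document).ready(function() {
  JSCommManager.init();
});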

Third party code:

  • Arbiter.js - JavaScript implementation of the publish/subscribe pattern, written by Matt Kruse. In JSCommunicator it’s used in JSCommManager.js to publish signals that direct the app’s behavior. For example, JSCommManager will publish a message indicating that the user received a call from a certain URI. In event-demo.js we subscribe to this kind of message, and when said message is received, an action can be performed, such as adding an entry to the app’s call history. Very cool. (See the sketch after this list.)

  • JsSIP.js - Implements the SIP WebSocket transport in JavaScript, ensuring that data is transported in adherence to the WebSocket protocol. In JSCommManager.js we initialize a SIP User Agent based on the implementation in JsSIP.js. The User Agent ‘translates’ all of the JSCommunicator actions into SIP WebSocket format. For example, when sending an IM, the JSCommunicator app collects the essential data, such as the origin URI, destination URI and the IM message, while the User Agent is in charge of formatting the data so that it can be transported in a SIP message unit. A SIP message contains a lot more information than just the sender, receiver and message. Of course, a lot of the info in a SIP message is irrelevant to the user, and in JSCommUI.js we filter through all that data and only display what the user needs to see.
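
Here is a hedged sketch of both libraries in action. Arbiter.publish/Arbiter.subscribe match Matt Kruse’s documented API, but the message name and payload shape are invented; the ws_servers/uri/password keys match JsSIP releases of that era (check the docs for your version), and the credentials are placeholders:

// Publish/subscribe with Arbiter.js: JSCommManager announces an
// incoming call and any interested code reacts to it.
Arbiter.subscribe('call/incoming', function(data) {
  console.log('Incoming call from ' + data.uri);  // e.g. add to call history
});
Arbiter.publish('call/incoming', { uri: 'sip:bob@debrtc.org' });

// A SIP User Agent with JsSIP: the UA wraps the plain-text IM below
// in a full SIP MESSAGE unit before it goes over the WebSocket.
var ua = new JsSIP.UA({
  ws_servers: 'wss://ws.sip5060.net',
  uri: 'sip:alice@debrtc.org',
  password: 'secret'                              // placeholder credentials
});
ua.start();
ua.sendMessage('sip:bob@debrtc.org', 'Hello from JSCommunicator');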

Here’s a diagram of sorts to help you visualize how the code interacts:

In sum, 1 - JSCommUI.js handles what is displayed in the UI and feeds data to JSCommManager.js; 2 - JSCommManager.js actually does stuff, feeding data to be displayed to JSCommUI.js; 3 - JSCommManager.js calls functions from the three ‘utils’: parseuri.js, webrtc-check.js and jssip-helper.js (which organizes the data from config.js); 4 - JSCommManager.js initializes a SIP User Agent based on the implementation in JsSIP.js.

When making any changes to JSCommunicator, you will likely only be working with JSCommUI.js and JSCommManager.js.

Categories: FLOSS Project Planets