FLOSS Project Planets
Race conditions and errors at startup seem to be particularly problematic.
There are also some other talks related to Ceph available:
- Ceph and OpenStack: current integration and roadmap (Josh Durgin, Sébastien Han)
- Keeping OpenStack storage trendy with Ceph and containers (Sage Weil)
- Ceph at CERN: A Year in the Life of a Petabyte-Scale Block Storage Service (Dan van der Ster)
- Swift vs Ceph from an architectural standpoint (Christian Huebner)
- A Year with Cinder and Ceph at TWC (Craig Delatte, Bryan Stillwell)
- Building Your First Ceph Cluster for OpenStack - Fighting for Performance, Solving Tradeoffs (Gregory Elkinbard, Dmitriy Novakovskiy)
See you in Vancouver!
Happy Friday everyone,
In today’s blog post I’m going to cover some basic principles and features in PyCharm that make Python remote development easy as pie. To demonstrate them I’ll use a very simple flask web application from the official flask github repository. Enjoy the read!
First I clone the official flask repository from https://github.com/mitsuhiko/flask. Then from PyCharm’s Welcome screen I open the blueprintexample directory, which stores the source of the super simple flask application I’m going to use for the demo:
PyCharm opens the directory and creates a project based on it:
Now I’m going to set up the remote machine to start with the remote development. I use Vagrant, which PyCharm offers great support for. In one of my previous blog posts I already covered Vagrant integration, so here are just the straight steps to provision and run a VM. I go to Tools | Vagrant | Init in Project Root and select the Ubuntu 14.04 image which was previously imported from the collection of vagrant boxes. This creates the Vagrantfile in the project root. Now, I’m going to edit this file to configure a private network and make my VM visible from my host machine:
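The original post shows the edited Vagrantfile as a screenshot. A minimal sketch of that change looks like the following; the box name and IP address here are examples, so adjust them to match the box you imported and your own network:

```ruby
# Vagrantfile (sketch): box name and IP are examples, not the post's exact values.
Vagrant.configure(2) do |config|
  config.vm.box = "ubuntu-14.04"
  # Give the VM a fixed address on a host-only private network so it is
  # reachable from the host machine.
  config.vm.network "private_network", ip: "192.168.33.10"
end
```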
Next, I run the VM with Tools | Vagrant | Up and PyCharm shows me that the VM is up and running:
We can open a local terminal inside PyCharm to test the VM:
Alright, the VM responds to ping. Now, I want to run my web application on the VM, so I need to copy my project sources to the remote host. This is easily done with the Deployment tool inside PyCharm.
I go to Tools | Deployment | Configuration and specify the connection details for my VM:
On the Mappings tab in the same window I specify the path mapping rule:
In my case I want my current local project directory blueprintexample to be mapped to remote /home/vagrant/blueprintremote.
Now I can right-click my project in the project view and select Upload to:
And this will upload my project to the specified directory on the remote machine:
One of the handiest features is that you can set up automatic upload to the remote machine by simply clicking Tools | Deployment | Automatic Upload:
From this point on, all changes made locally will be uploaded to the remote machine automatically, so you don’t need to worry about having fresh sources on the remote host. Cool, isn’t it?
So now, I’m going to modify one of the files in my project so the flask application will be visible remotely (adding host='0.0.0.0' as a parameter to the app.run() call), and PyCharm automatically uploads the changes to the remote machine:
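The change is a one-liner. Here is a sketch of the pattern in a minimal flask app (the route and message are placeholders, not the actual blueprintexample code):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the VM!'

def run_dev_server():
    # By default app.run() binds to 127.0.0.1, which is only reachable from
    # inside the VM itself. Binding to 0.0.0.0 listens on all interfaces,
    # so the app is also visible from the host machine.
    app.run(host='0.0.0.0', port=5000)
```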
Next, I specify the python interpreter to be used for my project. I go to File | Settings (Preferences for Mac OS) | Project | Project Interpreter. By default, PyCharm sets the local Python interpreter as a project interpreter, so I’ll change it to the remote one:
As I’ve already created a deployment configuration, PyCharm offers to export Python interpreter settings from the existing deployment configuration:
But I can also specify the remote interpreter manually, using SSH credentials or a Vagrant configuration. Here I’ll do it manually:
After I specify the new remote python interpreter, PyCharm starts indexing and finds that the flask package is not installed on the project interpreter:
I can fix this easily with Alt + Enter on the unresolved reference error highlighted in red:
Alright. Now everything is ok, so we can finally specify Run/Debug configuration and launch our application. Let’s go to Run | Edit Configurations and add a new Python run/debug configuration:
In the Run/Debug configuration dialog, I specify the name for my new configuration and the script to be executed on the remote host. PyCharm sets the project interpreter (remote in this case) by default for this new run configuration, and finally I need to specify path mappings for this particular run configuration:
It seems we’re all set. I click the Run button:
PyCharm shows that the application is up and running on port 5000 on the VM.
I open the browser to check that the application is really working:
From this point on, we can work with this project like with a normal local project. PyCharm takes care of uploading any changes to the remote machine and keeps the VM up and running.
With the same Run/Debug configuration, we can do a simple remote debug session putting a few breakpoints right in the editor:
Click the debug button or go to Run | Debug:
That’s it! Hopefully you’ll find that this functionality makes Python remote development in PyCharm a breeze.
If you’re craving more details on PyCharm’s remote development capabilities, please see the online help.
Talk to you next week,
These recent publications would suggest that the time has finally come to deploy serious test tools for bullet-proofing large scale distributed systems.
- Challenges in Designing at Scale: Formal Methods in Building Robust Distributed Systems. PlusCal and TLA+ have proven very effective at establishing and maintaining the correctness through change of the fundamental components on which DynamoDB is based.
- Using TLA+ for teaching distributed systems. TLA is a tool for specifying distributed algorithms/protocols and model checking them.
- My experience with using TLA+ in distributed systems class. Integrating TLA+ into the class gave students a way to get hands-on experience in algorithm design and correctness verification.
- Combining static model checking with dynamic enforcement using the Statecall Policy Language. The Statecall Policy Language (SPL) was designed to make model checking more accessible to regular programmers.
- SAMC: Semantic-aware model checking for fast discovery of deep bugs in cloud systems. It was used to test 10 versions (both old and current) of ZooKeeper, Hadoop/YARN, and Cassandra across 7 different protocols (leader election, atomic broadcast, cluster management, speculative execution, read/write, hinted handoff, and gossiper).
- Lineage-driven Fault Injection. MOLLY doesn’t blindly explore a state space; instead it reasons backwards from a successful outcome (hopefully common!) to figure out what might have caused it to fail, and then probes paths that exercise those potential causes.
- Paper review: "Simple Testing Can Prevent Most Critical Failures: An Analysis of Production Failures in Distributed Data-Intensive Systems". Table 5 shows that almost all (98%) of the failures are guaranteed to manifest on no more than 3 nodes, and 84% will manifest on no more than 2 nodes.
The UDD bugs interface currently knows about the following release critical bugs:
- In Total: 155 bugs
- Affecting Jessie: 97 (key packages: 65) That's the number we need to get down to zero before the release. They can be split in two big categories:
  - Affecting Jessie and unstable: 77 (key packages: 51) Those need someone to find a fix, or to finish the work to upload a fix to unstable:
    - 13 bugs are tagged 'patch'. (key packages: 9) Please help by reviewing the patches, and (if you are a DD) by uploading them.
    - 4 bugs are marked as done, but still affect unstable. (key packages: 1) This can happen due to missing builds on some architectures, for example. Help investigate!
    - 60 bugs are neither tagged patch, nor marked done. (key packages: 41) Help make a first step towards resolution!
  - Affecting Jessie only: 20 (key packages: 14) Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
How do we compare to the Squeeze and Wheezy release cycles?

| Week | Squeeze | Wheezy | Jessie |
|------|---------|--------|--------|
| 43 | 284 (213+71) | 468 (332+136) | 319 (240+79) |
| 44 | 261 (201+60) | 408 (265+143) | 274 (224+50) |
| 45 | 261 (205+56) | 425 (291+134) | 295 (229+66) |
| 46 | 271 (200+71) | 401 (258+143) | 427 (313+114) |
| 47 | 283 (209+74) | 366 (221+145) | 342 (260+82) |
| 48 | 256 (177+79) | 378 (230+148) | 274 (189+85) |
| 49 | 256 (180+76) | 360 (216+155) | 226 (147+79) |
| 50 | 204 (148+56) | 339 (195+144) | ??? |
| 51 | 178 (124+54) | 323 (190+133) | 189 (134+55) |
| 52 | 115 (78+37) | 289 (190+99) | 147 (112+35) |
| 1 | 93 (60+33) | 287 (171+116) | 140 (104+36) |
| 2 | 82 (46+36) | 271 (162+109) | 157 (124+33) |
| 3 | 25 (15+10) | 249 (165+84) | 172 (128+44) |
| 4 | 14 (8+6) | 244 (176+68) | 187 (132+55) |
| 5 | 2 (0+2) | 224 (132+92) | 175 (124+51) |
| 6 | release! | 212 (129+83) | 161 (109+52) |
| 7 | release+1 | 194 (128+66) | 147 (106+41) |
| 8 | release+2 | 206 (144+62) | 147 (96+51) |
| 9 | release+3 | 174 (105+69) | 152 (101+51) |
| 10 | release+4 | 120 (72+48) | 112 (82+30) |
| 11 | release+5 | 115 (74+41) | 97 (68+29) |
| 12 | release+6 | 93 (47+46) | 87 (71+16) |
| 13 | release+7 | 50 (24+26) | 97 (77+20) |
| 14 | release+8 | 51 (32+19) | |
| 15 | release+9 | 39 (32+7) | |
| 16 | release+10 | 20 (12+8) | |
| 17 | release+11 | 24 (19+5) | |
| 18 | release+12 | 2 (2+0) | |
Effective March 6, 2013, I am shutting down the EasyGui project.
The EasyGui software will continue to be available at its current location, but I will no longer be supporting, maintaining, or enhancing it.
The reasons for this decision are personal, and not very interesting. I’m older now, and retired. I no longer do software development, in any programming language. I have other interests that I find more compelling. I spend time with my family. I play and promote petanque. Life is good, but it is different.
During the course of my software development career I’ve had occasion to shut down a number of projects. On every occasion when I turned over a project to a new owner, the results were disappointing. Consequently, I have decided to shut down the EasyGui project rather than to try to find a new owner for it.
The EasyGui software will remain frozen in its current state. I invite anyone who has the wish, the will, the energy, and the vision to continue to evolve EasyGui, to do so. Copy it, fork it, and make it the basis for your own new work.

— Steve Ferg, March 6, 2013
"No shit, the world is wrong! It ain't got a clue! But here, in this one minute video, I will explain what's wrong and how I have discovered the right way."
As soon as you encounter something that can be reduced to the above, it's a pretty fair sign that the author doesn't know what he's talking about. Anyone who thinks they're so unique that they can come up with something nobody else has thought of before is, with likelihood bordering on certainty, deluded. The world is full of smart people who have encountered the problem before. Any problem.
A video that's been doing the rounds on how "Computer Color is Broken" is a case in point. The author brings us his amazing discovery that linear rgb is better than non-linear, except that everyone who's been working on computer graphics has known all of that for ages. It's textbook stuff. It's not amazing, it's just the way maths work. The same with the guy who some years ago proved that all graphics applications scale the WRONG way! "And how much did you pay for your expensive graphics software?", he asked. "Eh? You sucker, you got suckered", he effectively said, "but fortunately, here's me to put you right!" It's even the same thing, actually.
Whether it's about color, graphics, or finding the Final Synthesis between Aristotle and Plato, this is my rule of thumb: people who think everyone else in the world is wrong are certainly wrong. (Also, Basque really is not the mother of all languages.)
And when it comes to color blending or image scaling: with Krita you get the choice. Use 16-bit RGB with a linear color profile and you won't see the artefacts; or don't use it, and get the artefacts you were probably already used to, and were counting on for the effect you're trying to achieve. We've had support for that for a decade now.
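For the curious, the textbook effect is easy to demonstrate. This sketch uses a simple power-law gamma of 2.2 rather than the exact piecewise sRGB curve, which is close enough to show the point:

```python
GAMMA = 2.2

def to_linear(c):
    """Convert a gamma-encoded channel value (0..1) to linear light."""
    return c ** GAMMA

def to_encoded(c):
    """Convert a linear-light channel value (0..1) back to gamma encoding."""
    return c ** (1.0 / GAMMA)

def blend_naive(a, b):
    """Average the encoded values directly - what 'broken' blending does."""
    return (a + b) / 2.0

def blend_linear(a, b):
    """Average in linear light, then re-encode - the physically sensible way."""
    return to_encoded((to_linear(a) + to_linear(b)) / 2.0)

# Blending pure black (0.0) and pure white (1.0):
naive = blend_naive(0.0, 1.0)    # 0.5 - noticeably too dark on screen
linear = blend_linear(0.0, 1.0)  # about 0.73 - a much lighter grey
```

The difference is exactly what the video presents as a revelation: averaging gamma-encoded values darkens the result.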
Note: I won't link to any of these kookisms. They get enough attention already.
The Drupalize.Me team typically gets together each quarter to go over how we did with our goals and to plan out what we want to accomplish and prioritize in the upcoming quarter. These goals range from site upgrades to our next content sprints. A few weeks ago we all flew into Atlanta and did just that. We feel it is important to communicate to our members and the Drupal community at large what we've been doing in the world of Drupal training and what our plans are for the near future. What better way to do this than our own podcast?
Today "everyone" is online in one form or another, and it has transformed how many people connect, communicate, share and collaborate with others. To think that the Internet really only hit the mainstream some 20 years ago. It has been an amazingly swift and far-reaching shift that has touched people's personal and professional lives.
So it is no surprise that the concept of eGovernment is a hot one and much talked about. However, the reality on the ground is that governments tend not to be the swiftest sort of organizations when it comes to adopting change. (Which is not a bad thing; but that's a topic for another blog perhaps.) Figuring out how to modernize the communication and interaction of government with their constituencies seems to largely still be in the future. Even in countries where everyone is posting pictures taken on their smartphones of their lunch to all their friends (or the world ...), governments seem to still be trying to figure out how to use the Internet as an effective tool for democratic discourse.
The Netherlands is a few steps ahead of most, however. They have an active social media presence which is used by numerous government offices to collaborate with each other as well as to interact with the populace. Best of all, they aren't using a proprietary, lock-in platform hosted by a private company overseas somewhere. No, they use a free software social media framework that was designed specifically for this: Pleio.
They have somewhere around 100,000 users of the system and it is both actively used and developed to further the aims of the eGovernment initiative. It is, in fact, an initiative of the Programme Office 2.0 with the Treasury department, making it a purposeful program rather than simply a happy accident.
In their own words:
The complexity of society and the needs of citizens call for an integrated service platform where officials can easily collaborate with each other and engage citizens.
In addition, hundreds of government organizations all have the same sort of functionality needed in their operations and services. At this time, each organization is still largely trying to reinvent the wheel and independently purchase technical solutions.
That could be done better. And cheaper. Happily, new resources are now available to work together government-wide in a smart way and to exchange knowledge. Pleio is the platform for this.
Just a few days ago it was announced publicly that not only is the Pleio community hard at work on improving the platform to raise the bar yet again, but that Kolab will be a part of that. A joint development project has been agreed to and is now underway as part of a new Pleio pilot project. You can read more about the collaboration here.
Photo credit: Gerd Altmann; License: CC0

So, let's review: the mission of the PSF is to:
[…] promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.

The PSF takes this mission seriously. Last year, the Board of Directors changed the membership by-laws in order to make the PSF a more inclusive and diverse organization. Since then, the PSF leadership has been working on ways to build on that change. The recent non-binding poll of voting members (PSF Blog) is one such tactic. Another is a new procedure for strategic decision-making recently proposed by PSF Director Nick Coghlan. Last week, Nick posted this proposal on the Members' List for discussion (it's also on the Python wiki). According to Nick,
One step we are proposing is to have a more open strategic decision making process where significant decisions which don’t need to be made quickly, and which don’t require any confidentiality, can be discussed with the full PSF membership before being placed before the Board as a proposed resolution.

The new guidelines are similar to the process used for Python Enhancement Proposals (PEPs), whereby developers and user groups make suggestions for design decisions to amend the Python language. Nick also took inspiration from Red Hat’s “Open Decision Making Framework” and the Fedora change approval process. Since this proposal is itself the first instance of its use (in what Nick calls “a delightfully meta exercise”), it’s important that the membership review it and offer feedback. And if you’re not a member but would like to become one, see Enroll as a Voting Member to sign up.
Below I’ve excerpted some of the basic ideas from the text of the proposal, but I urge members to read the entire draft before weighing in.

PSF Strategic Decision Making Process

The primary mechanism for strategic decision making in the PSF is through resolutions of the PSF Board. Members of the PSF Board of Directors are elected annually in accordance with the PSF Bylaws, and bear the ultimate responsibility for determining “how” the PSF pursues its mission [...]
However, some proposed clarifications of or changes to the way the PSF pursues its strategic priorities are of sufficient import that they will benefit from a period of open discussion amongst the PSF membership prior to presentation for a Board resolution [...]

Non-binding polls of PSF Voting Members

At their discretion, the PSF Board may choose to include non-binding polls in ballots issued to PSF members [...]

Proposals for Discussion

Any PSF Member (including Basic Members) may use the Wiki to submit a proposal for discussion with the full PSF membership [...]

Proposals for Resolution
Any PSF Director or Officer may determine that a particular proposal is ready for resolution [...]
Proposals submitted for resolution will be resolved either directly by a Board resolution, or, at the Board’s discretion, by a full binding vote of eligible PSF Voting Members.

Nick is also currently drafting proposed guidelines for “PSF Strategic Priorities” and for procedures for recognition and promotion to the designation of “PSF Fellow.”
Stay tuned to the members' list and to this blog to stay informed and to participate in the discussion and adoption of these additional proposals to improve the PSF's role as an organization that truly reflects and supports the needs and views of its membership.
I would love to hear from readers. Please send feedback, comments, or blog ideas to me at email@example.com.
Over time, I started to get more and more requests to make python-gammu work with Python 3. Of course the request makes sense, but I somehow failed to find time for it.
Also, for quite some time python-gammu has been distributed together with the Gammu sources. This was another struggle to overcome when supporting Python 3: in many cases users will want to build the module for both Python 2 and 3 (at least most distributions will want to do so), and with the current CMake-based build system this did not seem easy to achieve.
So I've decided it's time to split the Python module out of the library. The reasons for keeping them together are no longer valid (libGammu has quite a stable API these days), and having a standard module which can be installed by pip is a nice thing.
Once the code had been put into a separate git repository, I slowly progressed on porting to Python 3. Most of the problems were on the C side of the code, where Python really does not make it easy to support both Python 2 and 3. So the code ended up with many #ifdefs, but I see no other way. While doing these changes, many points in the API were also fixed to accept unicode strings in Python 2.
Anyway, today we have the first successful build of python-gammu working on both Python 2 and 3. I'm afraid there is still some bug leading to occasional segfaults on Travis, which are not reproducible locally. But hopefully this will be fixed in the upcoming weeks and we can release the separate python-gammu module again.
If you are new to Drupal, take a look at our previous blog New To Drupal? These Videos Will Help You Get Started. If you've just gotten started in Drupal, how about we provide you with these short but thorough tutorial videos on Working with Content... Read more
This is the third post in a series of interviews about the people at Astro Code School. This one is about Colin Copeland, the CTO and Co-Founder of Caktus Consulting Group. He’s one of the people who came up with the idea for Astro Code School and a major contributor to its creation.
Where were you born?
What was your favorite childhood pastime?
Spending time with friends during the Summer.
Where did you go to college and what did you study?
I went to Earlham College and studied Computer Science.
How did you become a CTO of the nation's largest Django firm?
I collaborated with the co-founders on a software engineering project. We moved to North Carolina to start the business. I was lucky to have met them!
How did you and the other Caktus founders come up with the idea to start Astro Code School?
Caktus has always been involved with trainings and trying to contribute back to the Django community where possible, from hosting Django sprints to leading public and private Django trainings on best practices. We're excited to see the Django community grow and saw an opportunity to focus our training services with Astro.
What is one of your favorite things about Python?
Readability. Whether it's reading through some of my old source code or diving into a new open source project, I feel like you can get your bearings quickly and feel comfortable learning or re-learning the code. The larger Django and Python communities are also very welcoming and friendly to new and long time members.
Who are your mentors and how have they influenced you?
So many, but especially my Caktus business partners and colleagues.
Do you have any hobbies?
I'm co-captain of the Code for Durham Brigade.
Which is your favorite Sci-fi or Fantasy fiction? Why?
A big problem of KDE Activities is their name. It builds up a poor mental model and thus makes life hard for users. With this post we ask you to help us find a better name for the underlying concepts.
Keep on reading: Help to find better metaphors
If you’re working on a site that needs subscriptions, take a look at Recurly. Recurly’s biggest strength is its simple handling of subscriptions, billing, invoices, and all that goes along with it. But how do you get that integrated into your Drupal site? Let’s walk through it.

There are a handful of pieces that work together to connect your Recurly account and your Drupal site.
- The Recurly PHP library.
- The recurly.js library (optional, but recommended).
- The Recurly module for Drupal.
The first thing you need to do is bookmark the Recurly API documentation.
Note: The Drupal Recurly module is still using v2 of the API. A re-write of the module to support v3 is in the works, but we have few active maintainers right now (few meaning one, and you’re looking at her). If you find this module of use or potential interest, pop into the issue queue and lend a hand writing or reviewing patches!
I’ll be using a new Recurly account and a fresh install of Drupal 7.35 on a local MAMP environment. I’ll also be using drush as I go along (Not using drush?! Stop reading this and get it set up, then come back. Your life will be easier and you’ll thank us.)
- The first step is to sign up at https://recurly.com/ and get your account set up with your subscription plan(s). Your account will start out in a sandbox mode, and once you have everything set up with Recurly (it’s a paid service), you can switch to production mode. For our production site, we have a separate account that’s entirely in sandbox mode just for dev and QA, which is nice for testing, knowing we can’t break anything.
- Recurly is dependent on the Libraries module, so make sure you’ve got that installed (7.x-2.x version). drush dl libraries && drush en libraries
- You’ll need the Recurly Client PHP library, which you’ll need to put into sites/all/libraries/recurly. This is also an open-source, community-supported library, using v2 of the Recurly API. If you’re using composer, you can set this as a dependency. You will probably have to make the libraries directory. From the root of your installation, run mkdir sites/all/libraries.
- You need the Recurly module, which comes with two sub-modules: Recurly Hosted Pages and Recurly.js. drush dl recurly && drush en recurly
- If you are using Recurly.js, you will need that library, v2 of which can be found here. This will need to be placed into sites/all/libraries/recurly-js.
Your sites/all/libraries/ directory should now contain both the recurly and recurly-js libraries.
Firstly, you can just use the library and the module, which include some built-in pages and basic functionality. If you need a great deal of customization and your own functionality, this might be the option for you.
Secondly, Recurly offers hosted pages, for which there is also a Drupal sub-module. This is the least amount of integration with Drupal; your site won’t be handling any of the account management. If you are low on dev hours or availability, this may be a good option.
Thirdly, and this is the option we are using for one of our clients and demonstrating in this tutorial, you can use the recurly.js library (there is a sub-module to integrate this). Recurly.js is a client-side credit-card authorization service which keeps credit card data from ever touching your server. Users can then make payments directly from your site, but with much less responsibility on your end. You can still do a great deal of customization around the forms – this is what we do, as well as customized versions of the built-in pages.
Please note: Whichever of these options you choose, your site will still need a level of PCI-DSS Compliance (Payment Card Industry Data Security Standard). You can read more about PCI Compliance here. This is not prohibitively complex or difficult, and just requires a self-assessment questionnaire.

Settings
You should now have everything in the right place. Let’s get set up.
- Go to yoursite.dev/admin/config (just click Configuration at the top) and you’ll see Recurly under Web Services.
- You’ll now see a form with a handful of settings. Here’s where to find the values in your Recurly account. Once you set up a subscription plan in Recurly, you’ll find yourself on this page. On the right hand side, go to API Credentials. You may have to scroll down or collapse some menus in order to see it.
- Your Private API Key is the first key found on this page (I’ve blocked mine out):
- Next, you’ll need to go to Manage Transparent Post Keys on the right. You will not need the public key, as it’s not used in Recurly.js v2.
- Click to Enable Transparent Post and Recurly.js v2 API.
- Now you’ll see your key. This is the value you’ll enter into the Transparent Post Private Key field.
- The last basic setup step is to enter your subdomain. The help text for this field is currently incorrect as of 3/26/2015 and will be corrected in the next release. It is correct in the README file, and on the project page. There is no longer a -test suffix for sandbox mode. Copy your subdomain either from the address bar or from the Site Settings. You don’t need the entire url, so in my case, the subdomain is alanna-demo.
- With these settings, you can accept the rest of the default values and be ready to go. The rest of the configuration is specific to how you’d like to set up your account, how your subscription is configured, what fields you want to record in Recurly, how much custom configuration you want to do, and what functionality you need. The next step, if you are using Recurly’s built-in pages, is to enable your subscription plans. In Drupal, head over to the Subscription Plans tab and enable the plans you want to use on your site. Here I’ve just created one test plan in Recurly. Check the boxes next to the plan(s) you want enabled, and click Update Plans.
So you have Recurly integrated, but how are people going to use it on your Drupal site? Good question. For this tutorial, we’ll use Recurly.js. Make sure you enable the submodule if you haven’t already: drush en recurlyjs. Now you’ll see some new options on the Recurly admin setting page.
I’m going to keep the defaults for this example. Now when you go to a user account page, you’ll see a Subscription tab with the option to sign up for a plan.
Clicking Sign up will bring you to the signup page provided by Recurly.js.
After filling out the fields and clicking Purchase, you’ll see a handful of brand new tabs. I set this subscription plan to have a trial period, which is reflected here.
Keep in mind, this is the default Drupal theme with no styling applied at all. If you head over to your Recurly account, you’ll see this new subscription.
There are a lot of configuration options, but your site is now integrated with Recurly. You can sign up, change, view, and cancel accounts. If you choose to use coupons, you can do that as well, and we’ve done all of this without any custom code.
If you have any questions, please read the documentation, or head over to the Recurly project page on Drupal.org and see if it’s answered in the issue queue. If not, make sure to submit your issue so that we can address it!
Keep Calm and Clear Cache!
This is an often-used phrase in Drupal land. Clearing cache fixes many issues that can occur in Drupal, usually after a change is made that then isn't reflected on the site.
But sometimes clearing cache isn't enough, and a registry rebuild is in order.
The Drupal 7 registry contains an inventory of all classes and interfaces for all enabled modules and Drupal's core files. The registry stores the path to the file that a given class or interface is defined in, and loads the file when necessary. On occasion a class may be moved or renamed, and then Drupal doesn't know where to find it, and what appear to be unrecoverable problems occur.
One such example might be if you move the location of a module. This can happen if you have taken over a site and all the contrib and custom modules are stored in the sites/all/modules folder and you want to separate that out into sites/all/modules/contrib and sites/all/modules/custom. After moving the modules into your neat sub folders, things stop working and clearing caches doesn't seem to help.
Enter registry rebuild. This isn't a module, it's a drush command. After downloading it from drupal.org, the registry_rebuild folder should be placed into the directory sites/all/drush.
You should then clear the drush cache so drush knows about the new command:
drush cc drush
Then you are ready to rebuild the registry:

drush rr
Registry rebuild is a standard tool we use on all projects now and forms part of our deployment scripts when new code is deployed to an environment.
So the next time you feel yourself about to tear your hair out and you've run clear cache ten times, keep calm and give registry rebuild a try.
by Björn Breitmeyer
It has been very quiet around WEC platform support in Qt, and you would have been forgiven for thinking that nothing was happening. But behind the scenes, we have been tackling some pretty hard issues. We just did not blog about the ongoing work… until now.
Be assured that the platform is still maintained and there is work happening. Here is a short overview of the work my KDAB co-worker Andreas Holzammer and I have done on the WEC support.
Qt Multimedia
Qt Multimedia is still listed as an unsupported module for WEC. This has changed slightly, as we reintroduced the ability to play back audio files based on the DirectShow backend: https://codereview.qt-project.org/#/c/93093/.
The post The current state of Windows Embedded Compact (WEC) platform support in Qt appeared first on KDAB.
I have a standard format for patchnames: 1234-99.project.brief-description.patch, where 1234 is the issue number and 99 is the (expected) comment number. However, it involves two copy-pastes: one for the issue number, taken from my browser, and one for the project name, taken from my command line prompt.
Some automation of this is clearly possible, especially as I usually name my git branches 1234-brief-description. More automation is less typing, and so in true XKCD condiment-passing style, I've now written that script, which you can find on github as dorgpatch. (The hardest part was thinking of a good name, and as you can see, in the end I gave up.)
Out of the components of the patch name, the issue number and description can be deduced from the current git branch, and the project from the current folder. For the comment number, a bit more work is needed: but drupal.org now has a public API, so a simple REST request to that gives us data about the issue node including the comment count.
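As a rough sketch of that assembly step, here is how the pieces could be stitched together in shell. The branch name, project name, and comment number below are illustrative stand-ins, not what dorgpatch literally does; in a real checkout they would come from `git rev-parse --abbrev-ref HEAD`, `basename "$PWD"`, and the drupal.org API respectively.

```shell
# Sketch: assemble the patch filename from its parts.
# The three values below are illustrative stand-ins.
branch="1234-brief-description"
project="mymodule"
comment=99
issue="${branch%%-*}"            # strip everything after the first dash: 1234
description="${branch#*-}"       # strip the issue number prefix: brief-description
patch="${issue}-${comment}.${project}.${description}.patch"
echo "$patch"                    # 1234-99.mymodule.brief-description.patch
```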
So far, so good: we can generate the filename for a new patch. But really, the script should take care of doing the diff too. That's actually the trickiest part: figuring out which branch to diff against. It requires a bit of git branch wizardry to look at the branches that the current branch forks off from, and some regular expression matching to find one that looks like a Drupal development branch (i.e., 8.x-4.x, or 8.0.x). It's probably not perfect; I don't know if I accounted for a possibility such as 8.x-4.x branching off a 7.x-3.x which then has no further commits and so is also reachable from the feature branch.
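One way to sketch that branch-matching step is a regular expression over the local branch list. The snippet below runs inside a throwaway repository so it is self-contained; the branch names are stand-ins, and the pattern is a simplification of what the script would need:

```shell
# Sketch: pick the branch that looks like a Drupal dev branch
# (8.x-4.x or 8.0.x style) to diff against. Runs in a throwaway repo.
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git -c user.email=a@b -c user.name=t commit -q --allow-empty -m base
git branch 8.x-4.x                       # stand-in for the dev branch
git checkout -q -b 1234-brief-description
base=$(git branch --format='%(refname:short)' \
  | grep -E '^[0-9]+\.(x-[0-9]+\.x|[0-9]+\.x)$' | head -n1)
echo "diffing against: $base"
git diff "$base"...HEAD                  # the patch contents (empty here)
```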
The other thing this script can do is create a tests-only patch. These are useful, and generally advisable on drupal.org issues, to demonstrate that the test not only checks for the correct behaviour, but also fails for the problem that's being fixed. The script assumes that you have two branches: the one you're on, 1234-brief-description, and also one called 1234-tests, which contains only commits that change tests.
The git workflow to get to that point would be:
- Create the branch 1234-brief-description
- Make commits to fix the bug
- Create a branch 1234-tests
- Make commits to tests (I assume most people are like me, and write the tests after the fix)
- Move the string of commits that are only tests so they fork off at the same point as the feature branch: git rebase --onto 8.x-4.x 1234-brief-description 1234-tests
- Go back to 1234-brief-description and do: git merge 1234-tests, so the feature branch includes the tests.
- If you need to do further work on the tests, you can repeat with a temporary branch that you rebase onto the tip of 1234-tests. (Or you can cherry-pick the commits. Or do cherry-pick with git rev-list, which is a trick I discovered today.)
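Replayed end-to-end in a throwaway repository, the workflow above looks like the following. All file and branch names are stand-ins, and the commits are trivial placeholders:

```shell
# Replay of the tests-only branch workflow in a throwaway repo.
set -e
g() { git -c user.email=a@b -c user.name=t "$@"; }   # supply a commit identity
repo=$(mktemp -d); cd "$repo"; git init -q
echo base > code.txt; g add code.txt; g commit -qm base
git branch 8.x-4.x                                   # stand-in dev branch
git checkout -q -b 1234-brief-description
echo fix >> code.txt; g commit -qam "fix the bug"
git checkout -q -b 1234-tests
echo test > tests.txt; g add tests.txt; g commit -qm "add tests"
# fork the tests-only commits off at the same point as the feature branch:
g rebase -q --onto 8.x-4.x 1234-brief-description 1234-tests
# back on the feature branch, pull the tests in:
git checkout -q 1234-brief-description
g merge -q --no-edit 1234-tests
git diff --stat 8.x-4.x...1234-tests                 # tests-only patch content
```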
Next step will be having the script make an interdiff file, which is a task I find particularly fiddly.
Kubuntu Vivid Beta 2 is out. This is the first major distro to ship with Plasma 5 so it’ll be the first time many people get to see our lovely new desktop. Scary.
We have 24 bugs I've milestoned and 1 month to go until release; let's see how low we can go. Many of the bugs are easy enough to fix and just need twiddling the bits in the packaging. Some are more complex. If you want to help out, come and join us in #kubuntu-devel; we'd appreciate even just testing the ISOs for sanity.
Alas, upgrading from 14.10 is currently broken due to a bug, which is probably in apt; a fix soon, I hope.