15.12.2015 Check the status of all vagrant boxes on your system

If you are like me, chances are you have a vagrant setup for each project you're working on. It's a great way of sharing a specific system configuration, especially if you work in a team of developers. In my team, people run different operating systems and settings in general. So vagrant is a real life-saver when it comes to developing together and agreeing on a setup for a specific project.

But there are some pain points. If you turn off your machine but forget that a vagrant box is still running, the shutdown gets stuck, because it waits for the virtual network interface to be freed, which never happens. To solve this issue, it's important to know which instances are still running. If you only have one or two setups, this might be easy. But it soon becomes tedious to keep track yourself.

Luckily vagrant provides a nice command, global-status, to show all boxes on a machine:

vagrant global-status

This shows all instances that are known to vagrant. The results are cached and updated from time to time. In my case, it didn't recognize all of my instances. Unfortunately, the only way to change that is to rebuild the box.

What I did instead is create a little script that collects its own data (based on the existence of .vagrant directories). This gives a more fine-grained view of the system, and I have better control over when the data is updated.

vagrant-cache in action

vagrant-status collects the data and updates the cache (a slow operation). I run it every 15 minutes as a cronjob to keep the cache up to date.

vagrant-cache simply displays the result of the last run of vagrant-status.

You can find the scripts on GitHub.

02.12.2015 E-Voting II

As I already mentioned before, I'm not a big fan of E-Voting.

Last Monday, Parldigi invited to an event about E-Voting:

Grünenfelder warned that interruptions endanger the spread and acceptance of E-Voting more than looming security risks do. It is therefore necessary, he said, that the federal government, the cantons and the Federal Chancellery keep pushing E-Voting and the digitalization of politics forward. "The success story will continue," said Grünenfelder. "Despite all the naysaying."

As long as we don't have an open source solution that is accessible to everyone and that makes it possible for everyone to verify that their vote has been cast, we can stop all further discussions.

My fear is that E-Voting will just become an accepted reality, without any discussion about it. I'm not against technology, by any means. But we need to have a discussion as a society about what this means for us, and whether we are okay with the consequences. Now is the time to discuss this. Not when all systems are in place and everybody already uses them. Or when they get hacked for the first time.

Update: Someone mentioned to me that we have the option to vote by snail mail, and that my criticism is invalid, as this way of voting is just as bad but already accepted. For the record: just because there is another way of casting my vote that is equally bad and hard to control/verify does not mean that the criticism is invalid. It only means that we have to think about this other way as well, and that the same criticism may apply to it.

01.12.2015 Open Data MashUp @ Impact Hub

Last week I had the opportunity to talk about Open Data at the Impact Hub in Zurich. This event was part of a series of events, called MashUp, held at different Impact Hub locations.

The talk was held in the Pecha Kucha format, which means you have 20 slides and 20 seconds per slide. Because of this condensed form, there is not much space for text on the slides; they mostly contain images, graphics etc. This is just a warning, as the slides without my talk might not be very useful. Feel free to either fill the gaps yourself or contact me if you need more information or want to hear the talk again.

Download the PDF of my slides or see them directly here:

17.09.2015 Why I'm against E-Voting

In Switzerland we'll have the national elections coming up on 18 October 2015. It's again that time when the voices are getting louder. Especially in those moments, but actually before every political vote in Switzerland (which happens several times a year), there seems to be a rising interest in the subject of E-Voting.

As I work in IT, people often assume I'm waiting for the moment when it's finally allowed (actually there are two cantons that have the permission, but I don't live in one of those). But actually it's the opposite. And probably it's because I work in IT. I've had this conversation several times already, so I thought it's time to write down my main arguments in response to the most common objections.

Feel free to correct me :)

I can haz vote on my smartphone on my way to work?

Democracy is hard. It's not designed to be easy, and "Voting" is the crucial core of democracy. Voting is a public act that must be accountable while preserving each person's voting secrecy. And to achieve that we already have a perfect system in place: you can either vote in person or send the ballot by mail.

It's not that hard, but it takes some time. We're not talking about a Facebook "Like" click or a Tinder "Hot or Not" swipe; we're talking about our political rights and our duty as citizens not to easily give up the power we have.

E-Voting will increase participation in elections; especially the young would vote if there were E-Voting

Bullshit! If someone doesn't care about voting, they will not start doing so just because they get another channel. Even the Federal Chancellery had to admit that participation didn't increase in their tests (see also here, here and here).

E-Banking is secure, why should E-Voting be less secure?

First of all: E-Banking is not secure. Banks try their best to make it so, but there are so many ways to compromise a system that it's impossible to call any of them secure. But the banks are willing to take the risk. That is up to them, and I myself am a happy E-Banking user.

I do this with the knowledge that if something goes wrong, it can be fixed. Or if I lose money due to my own stupidity, so be it. I'm willing to take that risk as well.

But none of this applies to a core element of democracy. There is no one who covers for a fraudulent election. The risks that are inherent to using computers are simply not acceptable.

Fraud is also possible in the classic voting system. You could count wrong. Or enter the wrong number when transferring the results to the next department. But E-Voting would make it possible to commit systematic fraud.

All you have to do is attack a single point. If you can change the software, you might be able to count differently. Or you could launch a Denial-of-Service attack and prevent people from voting online.

The software they use is security-audited and state-of-the-art

Yes, I'm sure about that. But that doesn't change a thing. It only means that an attack is hard, not impossible. Plus we can't be sure that the auditors catch everything. Of course a good way to avoid or mitigate this kind of problem is to use open source software: I must be able to read its source code. So we can stop talking about all those closed-source solutions, where we'll never be able to tell if the software is doing what it should be doing. The consequence, though, is that only people who can code can verify the E-Voting system, whereas the current (paper-based) system is immediately clear to everybody.

There is a famous competition called "Underhanded C". The goal is to write innocent-looking code that passes visual inspection but has a harmful side effect. So it's definitely possible to do. It's only a matter of time.

But let's assume for a moment that this perfect piece of software exists, that it has been audited, works, and does no evil. We still have a problem: there is no way of telling what code currently runs on a computer. So maybe the software has been replaced between the review and the runtime. Or it could have been changed at runtime. We simply cannot say. If we could, we would have solved the Halting problem (see also this excellent explanation of why this is the case).

We as a society have to decide if we're willing to take the risk, that the E-Voting system might get hacked. That it might be broken. That it might have been compromised. The answer for me is clear: it's broken by design, and it's way too important to take chances here.

Let's keep the current system

The current system is not broken. Why should we replace it with a broken one? The current system allows us to watch the voting process. We can see the ballots, we can count them.

By hiding all that inside a computer, there is nothing left to watch. We can only trust the system. It's way too important to just leave it like that.

That said, I'm always open to discussing alternatives. Maybe a time will come when we discover something fundamentally new that makes it possible. But this new thing is not just a new algorithm. If it's based on the same principles as described above, it is not acceptable for a democracy.

20.07.2015 Learn to code if you want to. I'm happy to help.

Even though I'm not an education expert and don't really have a professional background in didactics, I have some experience when it comes to teaching people how to code, especially people who have never written a single line of code before.

The topic of teaching has accompanied me for some time now. For one, the most obvious: I'm an apprentice trainer. This means it's part of my job to teach our young apprentices how to program. But it doesn't stop there. I'm quite active on StackOverflow (although one could argue this doesn't count as teaching). Then last year, I got the chance to hold a workshop at the Jugendmedientage to teach programming to young journalists who had never coded before. All these experiences led me to believe that:

  • Programming can be learned
  • It takes time, effort and patience for both the teacher and the student
  • Curiosity is the key

Workshop at Jugendmedientage 2014

I'm writing about this because not long ago we were discussing it at work. One of our project managers said she wanted to learn at least a bit of programming in order to understand what her team is doing. Intuitively I disagreed. I argued that it's not the responsibility of a project manager to understand programming. Everybody should focus on their main responsibilities and skills.

But then again:

  • Wouldn't it be nice to have a common understanding of some basics?
  • Wouldn't it be a tremendous help if a PM could already judge whether a problem is hard or easy?
  • Or if a PM understood some of the technical terms that get thrown around in the room all the time?
  • Why should anybody stop someone from learning something they are clearly interested in?

But it's not just about PMs, and it's not just about my workplace. It's nice to be able to code. To know that this task you've been doing manually for so many years could be done in just a few lines of code. Your life can get easier.

One of the projects I really like is OpenTechSchool. It aims to create free online learning material for everybody to learn about programming and related topics. Apart from the docs, there are workshops, free of charge, led by volunteers like me.

The goal of such workshops is not to become a super-hero-programming-ninja. It's about understanding how a programmer tackles a problem, learn new skills and grasp some basic concepts.

Everybody can learn to code. If they want to. I'm happy to help.

01.01.2015 Why and how I created the OpenERZ API

The holiday season is always a good time to reflect and sort ideas and thoughts. A number of reasons led me to create the OpenERZ API.

It all started about a month ago, when I got the new calendar of Entsorgung und Recycling Zürich (ERZ) with the dates for the different waste collections. This is a physical paper thingy that is sent to all households in Zurich. Attached was a note that ERZ has its own iOS and Android app, so you have the same data in electronic format on your personal smartphone.

That's when it got interesting. I knew that ERZ had already published their data as CSVs on the Open Data portal of the City of Zurich. I also knew that my fellow open data enthusiasts had already created a smartphone app with that data. What I didn't know at that point was whether there had been any kind of communication between those parties or how this all fits together.

Via Twitter I got confirmation that there was indeed no communication. Unfortunately ERZ neither contacted the original developers nor engaged with the official Open Data team of the City of Zurich. André Golliez wrote an "open letter" to the CEO of ERZ. After that there were replies, meetings etc. to clear things up.

What really caught my attention was the reply of SR Filippo Leutenegger, where he stated:

"Die Datenbank steht im Zentrum, die App «Sauberes Zürich» ist mit ihren drei Betriebssystemen ein Output aus der Datenbank. Die Datenbank steht im Zentrum, die App «Sauberes Zürich» ist mit ihren drei Betriebssystemen ein Output aus der Datenbank."

So they created a database, which makes it easier for them to update the data. And this database is used for the print products, the apps and possibly other clients like their website (not sure if that's the case though). For me it was clear that there is a need for an API on top of this data, no matter which client uses it. A database is a good start, but an API is how data becomes really useful and easy to integrate.

My first thought was that the developers of the new app had already created an API, but just hadn't told anyone. That would've been perfect, as all I had to do was find this hidden API and create a public proxy for it. To investigate, I downloaded the app, set up a Charles proxy and looked at the communication (thanks to my colleague Franz for pointing me in the right direction here!).

I quickly found the calls of the app:

  • http://erz.w-4.ch/export/metadata.json (first call)
  • http://erz.w-4.ch/export/wastetypes.json
  • http://erz.w-4.ch/export/calendars-2014.json
  • ... etc.

So there we have all the data. The metadata.json contains a list of available files, which are then requested by the client. It looks like an export from the above-mentioned database, which is loaded when the app starts. Of course, this is a kind of API, but not one that you can query for single records.

At this point, I decided to create the OpenERZ API. The JSON files from the app are not very clear or self-explanatory (they were not designed with that in mind). So I decided to use the CSVs from the Open Data portal as a basis instead. SR Filippo Leutenegger made clear that ERZ wants to continue to provide this data, so I think it's a safe choice to build the API on top of it.

Screenshot of the OpenERZ swagger UI

The OpenERZ API is a node application that runs on Heroku. The process is quite simple: the CSVs are converted to JSON and transformed into a clear format. After that, the nicely formatted JSON is imported into MongoDB as documents. This later allows us to query the data easily. And by using MongoDB, it's easy to extend the data model at a later point without breaking the application.
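To illustrate the conversion step, here is a minimal sketch. This is not the actual OpenERZ code, and the column names are invented for the example:

```javascript
// Illustrative sketch of the CSV-to-document step: each CSV row becomes
// one JSON document that could then be inserted into MongoDB.
function csvToDocuments(csv) {
    var lines = csv.trim().split('\n');
    var headers = lines[0].split(',');
    return lines.slice(1).map(function(line) {
        var values = line.split(','); // naive split: assumes no quoted commas
        var doc = {};
        headers.forEach(function(header, i) {
            doc[header] = values[i];
        });
        return doc;
    });
}

// Example with a made-up excerpt of a collection calendar:
var docs = csvToDocuments('zip,type,date\n8004,cardboard,2015-01-13\n8004,paper,2015-01-20');
```

For the real CSVs you would of course use a proper CSV parser instead of the naive split.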

The routing part of the API is built using the Hapi framework and its ecosystem (Joi, hapi-swagger etc.). After some iterations the current API gained some features:

  • a route for each of the different waste types
  • a list of stations
  • data can be filtered and sorted by different parameters
  • GeoJSON of all the cargo and e-tram stops
  • for the sake of completeness, the JSON files of the app are available at the /export route
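To give an idea of the filtering and sorting, here is a sketch of how query parameters could be translated into a MongoDB query. Parameter and field names are assumptions, not the real OpenERZ routes:

```javascript
// Hypothetical sketch of how query parameters could become a MongoDB
// query and sort specification behind one of the routes.
function buildQuery(params) {
    var query = {};
    if (params.zip) {
        query.zip = params.zip;
    }
    if (params.start) {
        // only return collection dates on or after the given day
        query.date = { $gte: params.start };
    }
    var sort = {};
    if (params.sort) {
        // e.g. "date:asc" or "date:desc"
        var parts = params.sort.split(':');
        sort[parts[0]] = (parts[1] === 'desc') ? -1 : 1;
    }
    return { query: query, sort: sort };
}

var spec = buildQuery({ zip: '8004', start: '2015-01-01', sort: 'date:asc' });
// spec.query and spec.sort could then be passed to collection.find().sort()
```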

All the code is on GitHub, feel free to contribute, either with code, ideas or bug reports and of course use the API for whatever you want :)

18.09.2014 Workshop about Open Data Technologies

At today's Opendata.ch Conference 2014 I had the opportunity to talk about technologies around the topic of open data. I gave a quick overview of different approaches that are being used, including CKAN, The DataTank and Data Central.

On the technical side I talked about (Tabular) Data Packages and a little bit about JSON-LD. Did you know that Gmail is already using it?

Here are my slides (German only, sorry!):

In the discussions that followed we talked about why we need Open Data portals (conclusion: mainly for the PR) and whether we prefer to wait until a data provider has "cleaned" its data or rather want it fast, so that the community can help to clean it up (consensus: quick and dirty).

For me, today was a really fun conference. I met lots of new people, caught up with familiar ones and listened to a lot of great talks.

Update: Now the videos of the conference have been published. My session has not been recorded, but at the end I could quickly present our findings (again German only):

08.07.2014 Q&A Wall at the OKFestival in Berlin

Some time ago the OKFestival asked for session proposals for this year's conference in Berlin. I was at the conference last year in Geneva and enjoyed it very much. It's just a nice atmosphere for talking to people from diverse backgrounds (I'll get back to that in a minute).

My colleagues from Liip and I thought about sessions for this year's topic "Open Minds to Open Action". The Programme FAQs gave us some hints about what it takes for a session to be selected:

  • Does the proposal advance the overall mission of the event?
  • Does the proposal offer participants a concrete, valuable outcome?
  • Does the proposal advance the conversation around its area of open?
  • Is the proposal interactive? – hint: slides and lectures strongly discouraged
  • Does the proposal value or build on OKFestival’s principles of inclusivity and diversity?

Help newbies at the conference

We asked ourselves how we can help people connect with each other. We pictured a person who is interested in the "Open Knowledge" topic but doesn't know anybody at the conference. The first step is always the hardest. How do you find people that are interested in the same topics? Of course, one can attend sessions and try to talk to the people there.

But sometimes a little extra help is required. Some conferences solve this problem by providing a buddy system (e.g. the Chaos Mentors at the 30C3). The buddies are volunteers who show newbies around the venue, introduce them to people and so on. This is great for large conferences but might be overkill for smaller ones.

As we were brainstorming about this, we came up with the concept of a place where everybody is encouraged to write down their questions, so that others can pick them up and answer them right away or provide guidance. We first thought about doing this online, using something like a Twitter hashtag. But in the end, that is not why you go to a conference. You want to meet real people and talk face-to-face.

Q&A Wall to the rescue

So the challenge is just to find these people. To make this asynchronous, we developed the idea of a physical Q&A wall. This wall will be at the venue and provides space to write down questions (think of sticky notes or whiteboards to write on directly). The wall thus serves as a meeting point and a place to simply read and write questions.

I'm very interested in how this wall will be used. Maybe Q&A is not the only thing that could be done with it. Everybody is invited to use that space. There are similar approaches that use such walls or boards to share news in local communities ("Community Chalkboards").

Btw: the Q&A Wall is also listed as a session at the OKFestival!

11.02.2014 Usage of MongoDB with a spatial index and GeoJSON

During the Zurich Hacknights 2013 I was part of a team that wanted to build a guide for monuments in Zurich. We were a large group of people with lots of ideas. Unfortunately I was the only web developer there at the time, so I did my best to create a prototype of the guide, called Denkmap.

There is a webservice (WFS) of the GIS-ZH containing all monuments of the Canton. I converted the source to GeoJSON using ogr2ogr:

wget -O DenkmalschutzWFS.gml "http://maps.zh.ch/wfs/DenkmalschutzWFS?service=wfs&version=1.1.0&request=GetFeature&typeName=denkmalschutzobjekte"
ogr2ogr -f GeoJSON -s_srs EPSG:21781 -t_srs EPSG:4326 denkmal.geojson DenkmalschutzWFS.gml

I added the result as a layer to the map. This worked, but unfortunately the GeoJSON file is about 7MB. And each client has to download it. This takes time, especially on mobile devices not connected to a WLAN.

For a prototype this was okay, but how can we solve this for the future?

At this point it was clear that I needed a server-side component. This part must be able to divide all my data into reasonable chunks that the client can download in a short time. Because I was already dealing with GeoJSON and hadn't yet used MongoDB, I wanted to give it a try. I read that it has a spatial index and the possibility to run spatial queries. Hence, I had my perfect candidate. The other obvious solution would have been PostgreSQL with PostGIS. That would have worked too, but I wanted to try something new.

Import GeoJSON in MongoDB

This is quite easy if you respect the rules set by the mongoimport command:

  1. Remove the surrounding FeatureCollection, which basically is one big array, to get a plain list of feature objects.
  2. Only use the MongoDB-supported subset of GeoJSON (i.e. Point, LineString and Polygon).
  3. The file must contain exactly one object per line, so remove extra line breaks to make the importer happy.

Once this is done, you can run the following to import your GeoJSON in a database called denkmap and a collection called wfsktzh:

mongoimport --db denkmap --collection wfsktzh < denkmal.geojson
mongo denkmap --eval "db.wfsktzh.ensureIndex({geometry: '2dsphere'})"

Create a RESTful interface to consume the data

Once the data is safe and sound in MongoDB, all you need is a script to expose exactly the data you need.

The Sinatra-like express framework helps to create the structure:

var express = require('express'),
    geodata = require('./geodata.js'),
    app = express(),
    port = 3000;

app.get('/nearby/:lat/:lon', function(req, res) {
    console.log(req.method + ' request: ' + req.url);
    var lat = parseFloat(req.params.lat),
        lon = parseFloat(req.params.lon);
    geodata.nearby(lat, lon, function(docs) {
        res.json(docs);
    });
});

app.listen(port);
console.log('Listening on port ' + port + '...');

This allows us to query MongoDB using URLs like this: http://localhost:3000/nearby/47.3456/8.5432

The geodata.js contains the spatial query using $near. It returns all results within 1000m of the given coordinates (lat/lon). mongojs is a Node.js library for MongoDB:

var dbUrl = "denkmap",
    coll = ["wfsktzh"],
    db = require("mongojs").connect(dbUrl, coll);

exports.nearby = function(lat, lon, callbackFn) {
    db.wfsktzh.find({
        geometry: {
            $near: {
                $geometry: {
                    type: "Point",
                    coordinates: [lon, lat]
                },
                $maxDistance: 1000
            }
        }
    },
    function(err, res) {
        callbackFn(res);
    });
};

By the way: the project is still active and we are still looking for people to participate. The next step would be to connect DenkMap to Wikidata and display some extra information, link to the corresponding Wikipedia page, and/or display images of the monuments in the app.

31.01.2014 Stop making stupid Open Data contests!

Disclaimer: I work for Liip, the company that built the Open Data pilot portal for Switzerland; I'm one of the developers of the portal. On the other hand, I'm a member of the opendata.ch association.

Although I'm a relatively new member of the Open Data community, I have used and worked with Open Data in different forms for many years now. I'm a keen enthusiast of the underlying idea of openness, collaboration and knowledge sharing. But the whole idea of Open Government Data (OGD) in Switzerland is currently broken. There are numerous problems, and it probably all boils down to false expectations.

To make new data known to the public, often the first idea is an app contest. At first this sounds like a good idea: people working with your data and creating apps. It sounds like a win-win situation. Some may even think it's a cheap way to get a lot of apps. The problem is that contests have winners. And when there are winners, there must be losers. Your data is much too valuable, and every contribution too important, to waste time choosing a winner.

  • You limit the interest in the data (why should I continue my project if I just lost a contest?).
  • You limit the collaboration between the different groups that take part in the contest (why should I help somebody else to create a better app, if that decreases my chances to win this contest myself?).

The goal of an OGD data owner should be that a lot of people know and use their data. That's what makes it worth creating the data in the first place. It doesn't matter if it is used by a special interest group or by an app used by millions of people.

On the other hand, hackathons provide a good environment to play around, talk to different people and hack along. This is the kind of thing we need. Please understand that after such an event there are tons of ideas around: prototypes, unfinished apps, visualizations etc. This is all very valuable. Don't expect a bunch of polished, ready-to-use apps as a result.

Most data owners consider it a failure if nothing happens after they release data. This is just plain wrong. The whole point of Open Data is to open your data. It's done. It's out there. Anyone who is interested can use it. You never know when that will be. Eventually someone creates a mashup where your dataset is the missing piece.

What if this does not happen? It doesn't matter. Maybe the time was not right. Maybe this dataset is just not interesting enough. But that doesn't mean it was not worth opening it.

But of course there are some things that you can and should do. Here is my little list of DOs and DON'Ts:

DOs:
  • Talk about your data (speeches, blog posts)
  • Make events to explain your data
  • Bring cool swag to hackathons (for everyone!) :)
  • Invite people to visit you, connect your experts with the community
  • Provide help to process your data
  • Open Source the tools you use internally to make them available for everybody
  • Make sure to involve the media (yes, if you release your data or make an event, invite them)
  • Coordinate with the community, ask questions, don't make assumptions

DON'Ts:
  • Measure the success of your Open Data by the number of apps/visualizations built with it. Actually, don't measure the success at all. Concentrate on releasing new and interesting data.
  • Expect people to do your work
  • Assume everybody was just waiting for your data and must be grateful for it. Give people time to discover your data
  • Think that a press release is all it takes to get attention from the media
  • Spend your Open Data budget on paying someone to create an app with your data. Use it instead to organize events, invite people, do whatever you are best at!
  • Seriously: No more contests, no more fucking prizes.