Zoe and the World Community Grid

Zoe version 0.10 has been released, bringing a number of improvements to its architecture. The front end (API and web interface) is now decoupled from the back end (scheduling and monitoring), with the two communicating over ZeroMQ. Moreover, we switched from a custom internal state library to an external PostgreSQL database. Finally, the scheduler has been revamped to fix some long-standing issues.
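As a rough illustration of the new split, here is a minimal request/reply round trip over ZeroMQ. This is only a sketch: the message envelope (the `command`/`status` fields) is invented for the example and is not Zoe's real wire format, and Zoe would use a `tcp://` endpoint between the two processes rather than `inproc://`.

```python
import zmq  # pyzmq bindings for ZeroMQ

ctx = zmq.Context.instance()

# Back end (scheduler and monitoring): a REP socket answering commands.
backend = ctx.socket(zmq.REP)
backend.bind("inproc://zoe-api")  # real deployment would bind a tcp:// address

# Front end (API and web interface): a REQ socket sending commands.
frontend = ctx.socket(zmq.REQ)
frontend.connect("inproc://zoe-api")

# Hypothetical command envelope, JSON-encoded on the wire.
frontend.send_json({"command": "execution_start", "application": "boinc"})
request = backend.recv_json()        # back end receives the command...
backend.send_json({"status": "ok"})  # ...and acknowledges it
reply = frontend.recv_json()
```

The REQ/REP pair enforces a strict request-then-reply lockstep, which keeps the front end simple: every API call maps to exactly one message exchange with the back end.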

These changes make Zoe considerably more robust and pave the way for more features that we are planning and developing.

To stress-test this new version of Zoe, and also Docker Swarm, we ran several experiments trying to overload the system in various configurations. Instead of wasting resources running fake jobs, however, we implemented a ZApp for the Berkeley BOINC client.

This ZApp is very simple to use: it takes only the URL and the key of one of the many projects participating in BOINC and runs exactly one task. We submitted thousands of these applications to Zoe at the same time, testing our software while fighting AIDS, Ebola and Zika and helping several other humanitarian causes through the World Community Grid project.

After the experiments we left a small script running that starts more BOINC ZApps when our systems are unused, giving us a good long-running test of Zoe without resources going to waste.
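The script itself is not published; a minimal sketch of the idea could look like the following. Everything here is an assumption for illustration: the load threshold, the per-ZApp load estimate, and the `submit_zapp` callable (which would wrap the actual Zoe API call carrying the BOINC project URL and key).

```python
import os

def free_zapp_slots(load_avg, n_cpus, load_per_zapp=1.0, target=0.5):
    """Estimate how many single-task BOINC ZApps can be started before
    the host reaches `target` utilization. All thresholds are invented
    for this sketch."""
    headroom = n_cpus * target - load_avg
    return max(0, int(headroom // load_per_zapp))

def fill_idle_capacity(submit_zapp):
    """Start one BOINC ZApp per free slot. `submit_zapp` is a
    hypothetical callable that asks Zoe to run one BOINC task."""
    slots = free_zapp_slots(os.getloadavg()[0], os.cpu_count())
    for _ in range(slots):
        submit_zapp()
```

Run periodically (e.g. from cron), this keeps otherwise idle capacity busy with useful work, while backing off as soon as real jobs raise the load average.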

Zoe on Swarm in production – first impressions

Last week we held the first session of the Algorithmic Machine Learning laboratory, part of the Data Science and Engineering International Master and study track at Eurecom, in Sophia Antipolis, France. We are excited to report that this first in a series of laboratory sessions was a big success, and that Zoe + Swarm easily handled the load generated by our students.


These laboratory sessions use Zoe, software developed internally, to schedule and deploy a fully configured, ready-to-use container cluster based on Jupyter notebooks and Apache Spark.

This set of analytic services is managed by Zoe as an application description: a high-level definition of all the processes required to run a complex distributed application.
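To make this concrete, here is a simplified, hypothetical sketch of what such a description contains. The field names and image names are placeholders, not Zoe's real schema; the point is that one declarative object captures every process of the distributed application.

```python
# Hypothetical, simplified application description; field names and
# images are placeholders, not Zoe's actual format.
spark_lab_app = {
    "name": "spark-lab-session",
    "services": [
        {"name": "jupyter-notebook", "image": "example/jupyter-spark",
         "memory_limit": 4 * 1024**3, "count": 1},
        {"name": "spark-master", "image": "example/spark-master",
         "memory_limit": 2 * 1024**3, "count": 1},
        {"name": "spark-worker", "image": "example/spark-worker",
         "memory_limit": 8 * 1024**3, "count": 3},
    ],
}

total_containers = sum(s["count"] for s in spark_lab_app["services"])
```

From a description like this, Zoe can derive everything it needs to start, monitor and tear down the whole application as one unit.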

Zoe uses Docker Swarm to allocate containers: for these laboratory sessions, ten server-grade physical hosts are configured with Docker and registered with a Swarm manager. Each Zoe user is segregated in a separate overlay network, and containers are created by Swarm wherever resources are available.
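Creating such a per-user overlay network is a one-liner with the Docker SDK for Python. A sketch, assuming an invented `zoe-usr-<username>` naming convention (not Zoe's actual one); the Docker calls need a reachable Swarm manager, so the import is kept inside the function:

```python
def user_network_name(username):
    # Invented naming convention for per-user overlay networks.
    return "zoe-usr-" + username

def ensure_user_network(username):
    """Create (or reuse) an overlay network reserved for one user.
    Uses the third-party Docker SDK for Python and requires a running
    Swarm manager, hence the local import."""
    import docker
    client = docker.from_env()
    name = user_network_name(username)
    try:
        return client.networks.get(name)
    except docker.errors.NotFound:
        return client.networks.create(name, driver="overlay")
```

Containers attached to the same overlay network can reach each other across hosts, while containers of different users stay isolated from one another.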

When a student wants to work on her assignments, she connects to the Zoe web interface and a “Zoe laboratory session application”, composed of a predefined set of containers, is created dynamically for her exclusive use. After a few hours of inactivity the containers are automatically terminated and the resources are freed.

In this post we would like to also highlight a couple of areas where we think Swarm could benefit from more work, especially for the kind of use case we discuss here.


Accessing overlay networks from outside the Swarm cluster is not easy. Distributed frameworks like Spark provide a number of interlinked web interfaces that break down when used behind Docker’s port forwarding. Moreover, overlay networks use private address spaces that cannot be reached from outside without setting up IP routing.

Currently we resort to using a “gateway” container running a SOCKS proxy. Students have to use a browser with the SOCKS proxy configured to access the web interfaces exposed by the containers that are part of Zoe applications.

This solution works well for the web interfaces, but it is far from optimal and we keep looking for possible improvements.
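For reference, starting such a gateway container with the Docker SDK for Python could look like this. The image name and network name are placeholders (any container image shipping a SOCKS5 server would do), and the published host port is what students would configure in their browsers:

```python
def proxy_url(host, port):
    # Value students would enter in their browser's SOCKS proxy settings.
    return "socks5://%s:%d" % (host, port)

def start_gateway(username, socks_image="example/socks5-proxy"):
    """Start a per-user gateway container running a SOCKS proxy,
    attached to that user's overlay network. Image and network names
    are invented for this sketch; needs the Docker SDK for Python
    and a reachable daemon, hence the local import."""
    import docker
    client = docker.from_env()
    return client.containers.run(
        socks_image,
        name="zoe-gateway-" + username,
        network="zoe-usr-" + username,  # hypothetical per-user network
        ports={"1080/tcp": None},       # let Docker pick a free host port
        detach=True)
```

Because the gateway has a leg in the private overlay network, the browser can resolve and reach the containers' web interfaces through it without any IP routing on the client machine.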


The more containers Swarm is managing, the more time it takes to create new ones. Each Zoe application for our laboratory is composed of five containers: multiplied by 25 groups of students, that makes 125 containers created at the beginning of each laboratory session. Since the same Swarm cluster is also used for other activities, a few containers are already running when a session starts.

In the graph below, we see that the time needed to create a cluster of five containers increases linearly: the first student group had to wait about 7.5 seconds, but the last one 12 seconds. The difference is not big, but it is noticeable and the trend is clear.

Zoe API latency

Each point represents the time taken by an API call that starts a new application execution in Zoe, so the actual time taken by Swarm to create the five containers is smaller. This issue is being discussed on the Swarm issue tracker.
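The full dataset is in the graph; using just the two end points quoted above (group 1 at about 7.5 s, group 25 at about 12 s), a least-squares slope reduces to a simple difference quotient and gives a rough per-group cost:

```python
def slope(xs, ys):
    """Least-squares slope of ys over xs: here, the extra seconds of
    wait added by each additional student group."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Two end points reported in the post: group 1 -> ~7.5 s, group 25 -> ~12 s.
per_group_cost = slope([1, 25], [7.5, 12.0])  # seconds per extra group
```

About 0.19 s of extra latency per already-scheduled application: negligible for one session, but a reason to watch how the cluster behaves as the container population keeps growing.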

The version of Zoe used for these laboratories is open source and available on GitHub; it is stable and evolves more slowly, since it is used in production. We are also working on an experimental version of Zoe, with more advanced concepts in scheduling and dynamic resource allocation.

Talk at DockerCon EU 2015

Our talk “Swarming Spark Applications”, submitted to DockerCon EU 2015, has been accepted! Here is the abstract:

We built Zoe, an open source user-facing service that ties together Spark, a data-intensive framework for big data computation, and Swarm, the Docker clustering system. It targets data scientists who need to run their data analysis applications without having to worry about systems details. Zoe can execute long running Spark jobs, but also Scala or IPython interactive notebooks and streaming applications, covering the full Spark development cycle. When a computation is finished, resources are automatically freed and available for other uses, since all processes are run in Docker containers.

In this talk we are going to present why Zoe, the Container Analytics as a Service, was born, its architecture and the problems it tries to solve. Zoe would not be there without Swarm and Docker and we will also talk about some of the stumbling blocks we encountered and the solutions we found, in particular in transparently connecting Docker hosts through a physical network. Zoe was born as a research prototype, but is now stable and is currently being used to run real jobs from users in our research institution. Application scheduling on top of Swarm and optimized container placement will also be covered during the presentation.

You can find more information about Zoe here: http://www.zoe-analytics.eu

Spark notebooks in OpenStack

We’ve developed a beta version of Spark notebook support in our fork of OpenStack Sahara. With this version, it is possible to have a development environment for Spark ready in a few clicks, to be used by a data scientist to develop applications.

Spark Notebooks offer the same features as iPython notebooks, but with the distributed computing capabilities of Spark behind the scenes. They are programmed in Scala: code and text can be mixed, with plots and other data visualizations generated on the fly.

With our changes in Sahara, Notebook processes can be deployed together with a Spark cluster and will be configured and ready to go.

[Spark notebook screenshot]


New paper accepted at the Third International Workshop on In-memory Data Management and Analytics, VLDB 2015

Cost-based Memory Partitioning and Management in Memcached, by D. Carra, P. Michiardi and M. Steiner


In this work we present a cost-based memory partitioning and management mechanism for Memcached, an in-memory key-value store used as Web cache, that is able to dynamically adapt to user requests and manage the memory according to both object sizes and costs. We then present a comparative analysis of the vanilla memory management scheme of Memcached and our approach, using real traces from a major content delivery network operator. Our results indicate that our scheme achieves near-optimal performance, striking a good balance between the performance perceived by end-users and the pressure imposed on back-end servers.

Third International Workshop on In-memory Data Management and Analytics: http://imdm.ws/2015/