Latest Articles:

Configuring CORS for Gnocchi and Keystone

/!\ Since Mitaka, this article is obsolete; see the Gnocchi documentation for updated information /!\

In some cases, Cross-Origin Resource Sharing (CORS) must be enabled on a REST API for certain applications to work with it.

For example, if you want to use Grafana with the Gnocchi datasource, some setup is required on the Gnocchi and Keystone side to allow the Grafana UI to access the Gnocchi REST API.

All of this configuration can be done easily since the Liberty version of OpenStack; it was not supported in previous versions (at least not in this way).

Since Liberty, we can use the oslo.middleware CORS middleware without waiting for CORS integration in each OpenStack application. In Mitaka, Keystone and Gnocchi got CORS integration out of the box, so the following modifications are no longer needed.

Note that in my example, my Grafana server is http://my-grafana-ipdomain:3000.

On Keystone side

Add a new filter to the ...
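The excerpt stops here, but as a sketch of the kind of filter involved (a Liberty-era layout; the file sections and header lists below are my assumptions, with the origin matching the Grafana address above), the oslo.middleware CORS filter is declared in the Paste configuration and configured in keystone.conf:

```ini
# keystone-paste.ini -- declare the CORS filter (it then has to be added
# early in the pipelines)
[filter:cors]
paste.filter_factory = oslo_middleware.cors:filter_factory
oslo_config_project = keystone

# keystone.conf -- allow the Grafana origin
[cors]
allowed_origin = http://my-grafana-ipdomain:3000
allow_methods = GET,POST,PUT,DELETE,OPTIONS
allow_headers = Content-Type,X-Auth-Token
```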

Read More

Autoscaling with Heat, Ceilometer and Gnocchi

A while ago, I had made a quick article/demo of how to use Ceilometer instead of the built-in emulated Amazon CloudWatch resources of Heat.

To extend on the previous post, when you create a stack, instances of the stack generated notifications that were received by Ceilometer and converted into samples to be written to a database; usually MongoDB. On the other end, Heat created some alarms using the Ceilometer API to trigger the Heat autoscaling actions. These alarms defined some rules against statistics based on the previously recorded samples. These statistics were computed on the fly when the alarms were evaluated.

The main issue with this setup was that the performance for evaluating all the defined alarms was directly tied to the number of alarms and to the complexity of computing the statistics. The computation of a statistic would result in a map reduce in MongoDB. Therefore, when there ...

Read More

Writing a Gnocchi storage driver for Ceph

As presented by Julien Danjou, Gnocchi is designed to store metric metadata in an indexer (usually a SQL database) and to store the metric measurements in another backend. The default backend creates timeseries using Carbonara (a pandas-based library) and stores them in Swift.

The Gnocchi storage backend is pluggable, and not all deployments install Swift, so I decided to write another backend. I chose Ceph because it is close to Swift in the way Gnocchi uses it, and it also scales well when many objects are stored. Now, let's see what I need to do to reach this goal.

Storage driver interface

The current metric storage driver interface to implement looks like this:

  • create_metric(metric, archive_policy)
  • add_measures(metric, measures)
  • get_measures(metric, from_timestamp=None, to_timestamp=None, aggregation='mean')
  • delete_metric(metric)
  • get_cross_metric_measures(metrics, from_timestamp=None, to_timestamp=None, aggregation='mean', needed_overlap=None)
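To make the interface concrete, here is a toy in-memory driver sketch. This is my own illustration, not Gnocchi's actual code: aggregation handling is reduced to returning raw points, and cross-metric measures are omitted.

```python
import bisect


class InMemoryStorage(object):
    """Toy storage driver implementing the interface above (illustration only)."""

    def __init__(self):
        self._measures = {}  # metric -> sorted list of (timestamp, value)
        self._policies = {}  # metric -> archive policy

    def create_metric(self, metric, archive_policy):
        self._measures[metric] = []
        self._policies[metric] = archive_policy

    def add_measures(self, metric, measures):
        # keep the timeseries sorted by timestamp on insertion
        for point in measures:
            bisect.insort(self._measures[metric], point)

    def get_measures(self, metric, from_timestamp=None, to_timestamp=None,
                     aggregation='mean'):
        # a real driver would return pre-aggregated series per granularity;
        # this sketch just filters the raw points by the requested time range
        return [(t, v) for t, v in self._measures[metric]
                if (from_timestamp is None or t >= from_timestamp)
                and (to_timestamp is None or t <= to_timestamp)]

    def delete_metric(self, metric):
        self._measures.pop(metric)
        self._policies.pop(metric)
```
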

Cool, not so many methods to ...

Read More

Autoscaling with Heat and Ceilometer

Like AWS CloudFormation, Heat allows you to create auto-scaling stacks. To do this, some metrics need to be retrieved from your VMs, and some actions need to be triggered when a specified event occurs on these metrics. These actions are usually upscaling (creating new VMs) or downscaling (destroying VMs).

In OpenStack Grizzly, a simplistic system was put in place that could mostly serve demonstration needs but could not be used for most real-world scenarios: the metrics were retrieved via an agent running inside each VM of the auto-scaling group and were stored in the Heat database, and the alarms that triggered up/down auto-scaling actions were stored in the Heat database too.

Thinking about it, it seemed clear that Heat should be a tool that only does orchestration on top of different APIs, not one that stores and handles alarms and metrics, following the old UNIX principle ...
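As an illustration of the Ceilometer-based approach that replaced it, an alarm in a Heat template looks roughly like this (a sketch only; the resource and policy names are invented for the example):

```yaml
cpu_alarm_high:
  type: OS::Ceilometer::Alarm
  properties:
    description: Scale up if average CPU > 80% for one minute
    meter_name: cpu_util
    statistic: avg
    period: 60
    evaluation_periods: 1
    threshold: 80
    comparison_operator: gt
    alarm_actions:
      - {get_attr: [scaleup_policy, alarm_url]}
```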

Read More

Using a sharded/replica-set MongoDB with Ceilometer

Ceilometer aims to deliver a single point of contact for acquiring samples across all OpenStack components.

The most commonly used backend is MongoDB; it is the only one that implements all Ceilometer features.

Due to its nature, collecting many samples from multiple sources, the Ceilometer database grows quickly.
For Havana, to handle this we have:

I will focus on the second point, with an example of a MongoDB architecture/setup that distributes the Ceilometer data across different servers, with two replicas of the data.

My setup


  • Three servers for the MongoDB config servers and mongos routing servers (mongos[1,3])
  • Four servers for the MongoDB data servers (two shards, each a two-member replica set) (mongod[1,4])


  • Ubuntu Precise
  • OpenStack havana-3
  • MongoDB 2.4 (from the 10gen repository)
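As a sketch of how the two shards would be registered once the replica sets are up (the shard and host names here are my assumptions, matching the layout above), the commands are run in a mongo shell connected to one of the mongos routers:

```javascript
// register each replica set as a shard via a mongos router
sh.addShard("shard1/mongod1:27017,mongod2:27017")  // first shard, two-member replica set
sh.addShard("shard2/mongod3:27017,mongod4:27017")  // second shard
sh.enableSharding("ceilometer")                    // allow collections of this database to be sharded
```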

Openstack is installed on ...

Read More

Create a smart debian repository with reprepro

reprepro is “a tool to manage a repository of Debian packages (.deb, .udeb, .dsc, …). It stores files either being injected manually or downloaded from some other repository (partially) mirrored into one pool/hierarchy”

So my goal is to have a local repository with:

  • a partial mirror of Percona Galera Cluster
  • a partial mirror of Dell OpenManage (bah, not open source :( )
  • some packages from sid and squeeze (openssl and apt-cacher-ng)
  • some packages of my own

The repository directory will be /var/www/debian/, so it can easily be exported with a webserver.

Installation and configuration

Start to install the reprepro package and create the repository directories:

# apt-get install reprepro
# mkdir -p /var/www/debian/{conf,temp,incoming}
# cd /var/www/debian

Reprepro needs some configuration files:

  • conf/distributions to describe the repository layout and the supported distributions and components
  • conf/updates to describe the list of upstream repositories from which reprepro can ...
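As a sketch of the first file (the field values below are examples of mine, not the exact ones from this setup), conf/distributions could look like:

```
Origin: local
Label: local
Codename: squeeze
Architectures: amd64 source
Components: main percona openmanage
Description: local partial mirrors and my own packages
```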
Read More

Galera and puppetlabs mysql module

Recently I have tested MySQL Galera on Debian; here is a quick description of how to install Galera with the puppetlabs-mysql module. This allows Galera to be used by all applications that set up MySQL with this module.

Start by adding the Percona repository for MySQL Galera:

class mygalera_klass {

$local_ip = $ipaddress_eth0

# apt repository
apt::source { "galera_percona_repo":
    location    => "",
    release     => "squeeze",
    repos       => "percona",
    key         => "CD2EFD2A",
    include_src => false,
}

We define what is the initial galera node and what are the other nodes in the galera ring:

$galera_nextserver = {
    "galera1.lan" => "",  # replace the empty string with $ipaddress_galera3 once all nodes are correctly set up
    "galera2.lan" => "$ipaddress_galera1",
    "galera3.lan" => "$ipaddress_galera2",
}
$galera_master = "galera1.lan"

Note: this part could be nicer if PuppetDB were used.

Set up the mysql::server class; it allows changing the package name of the MySQL server:

# mysql definition
class { 'mysql::server':
  package_name => "percona-xtradb-cluster-server-5.5",
  config_hash  => {
    bind_address  => $local_ip,
    root_password => "myrootpassword",
  },
  require => Apt ...
Read More

Adding schroot support to piuparts

For my work on the OpenStack packages, I decided to run some tests with piuparts. So I installed piuparts, read the manual, and made my first test.
By default, piuparts uses debootstrap to create the chroot environments for package testing. Other methods are pbuilder or LVM snapshots.

But I have a setup with schroot+aufs to test my Debian packages, and piuparts doesn't support it, so I will add it!

After digging into the code and making a first patch, I contacted upstream via bug #530733 and followed some advice from my upstream-university curriculum.
Upstream reviewed my code very quickly (thanks for that :-)). After some mail exchanges and code fixes, my patch is now upstream :)

Read More

From the to a looking glass for bird

During one year of participation in the project, a French associative Internet Service Provider and Host Provider, I set up a mini cloud with Ganeti (~100 virtual machines), integrated Check_MK to monitor the ISP infrastructure, and did some other system administration tasks. The project daily uses and promotes open-source software, which is why it runs Bird on its BGP router. So, I began to work on a web looking glass for this BGP router: bird-lg.

The software is not yet ready for an easy setup, but it works fine.
The tool is now in production at this address:

The main features are the classic “show route” and the bgpmap.
The technologies used are Flask, jQuery and Bootstrap.

Read More

First steps in my involvement in OpenStack

Due to a recent interest in the OpenStack project, I have begun to test it and do some work around it.

I have started to setup a cluster of four nodes (1 proxy/volume and 3 computes) on Debian by following this guide (Or this one if you love Puppet and if you are in a hurry)

The next step was to dive into the code. To do so, I updated and created new Munin plugins for OpenStack.
These can draw beautiful graphs of many OpenStack metrics.

After encountering some small problems with the Debian packaging, I took a look at it and started to fix some of them.
To improve the chances of seeing my changes committed upstream, I followed some steps:
- Join the Alioth project teams
- Subscribe to the mailing list
- Clone the Debian git repository of OpenStack
- Hack the package (And ...
Read More