Nowadays the Cloud comes closer and closer to the everyday life of developers. In my new job I was tasked with the design of a cloud solution. I already knew some distributed computing and had played around with some of the so-called PaaS providers like OpenShift and AWS, but now the problem was how to share resources even more efficiently. A few years back I had come across Vagrant. A nice idea, virtually managing a VM, but with a major downside: you can bind only one application container per VM. So, more than a year ago, I heard from @dandreadis about this new kid in town called Docker.
The above is more than sufficient for running in development mode, even on your own local machine. But what happens if you want to deploy a distributed/clustered system with a consensus protocol that ensures high availability and manageability of your cluster? There are some solutions, rather young I would say, but they work really nicely. So, to keep the story short, I decided to work with Apache Mesos, Marathon and Zookeeper. An alternative would be to use Google's Kubernetes project in place of Marathon. Being Greek, I enjoy a lot that most of this fancy software has Greek names. The reason? Clearly everything we are starting to use builds on the fundamental work on consensus protocols by Leslie Lamport, i.e. the Paxos protocol. If you don't know about it, I would suggest reading some of his papers, since you will get familiar with many terms you may encounter in the above frameworks. BTW, he was awarded the Turing Award in 2013.
In order to play around you need the following software tools:
- Virtual Box
- Ubuntu Server, or any flavour of Linux distribution you prefer. I will focus on and showcase the latest Ubuntu LTS server edition.
- Basic knowledge of package management solutions for the distro you prefer like apt / yum
- You need at least g++ 4.8 if you want to compile from source, at least that's what I do (Apache Mesos has a core dependency on it after version 0.20.1, which is also where Docker container support came in).
All of the above can also be set up using managed services from Google Cloud, AWS or Mesosphere. Google it a bit 😉
The first step is to download and install VirtualBox. Once you have it, you need to create 7 instances of Ubuntu Server, and you have to assign some CPU horsepower and memory to the VMs where you will initially compile Mesos. If you don't, the compilation procedure may take up to an hour. I hope this step is really easy, and I assume you have finished it by now. The following steps can be automated for massive installations using Puppet (at least that's what you are supposed to do in a production cluster).
Fire up 5 out of the 7 Linux machines and issue the following commands:
Installation of docker:
>>sudo apt-get install docker.io
The above will install Docker and will allow you to talk to it over TCP. Normally it is suggested that you use Unix domain sockets instead, but for this tutorial TCP makes things easier to handle :).
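As a sketch of what enabling TCP looks like (assuming the docker.io package on Ubuntu reads its daemon options from /etc/default/docker, which may differ on other distros):

```shell
# /etc/default/docker -- assumed location for the docker.io package on Ubuntu.
# Listen on TCP (insecure, fine for a local tutorial) and keep the default Unix socket.
DOCKER_OPTS="-H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock"
```

After editing the file, restart the daemon (the service is named docker.io in these early Ubuntu packages): `sudo service docker.io restart`.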
Installation of Apache Mesos:
>>sudo apt-get install -y build-essential openjdk-7-jdk maven libsvn-dev python python-dev python-boto libsasl2-dev libcurl4-nss-dev libapr1-dev
>>wget http://archive.apache.org/dist/mesos/0.21.0/mesos-0.21.0.tar.gz
>>tar xvf mesos-0.21.0.tar.gz
>>cd mesos-0.21.0
>>./configure
>>make && sudo make install
And you are done. Follow the same steps when downloading Marathon, but no compilation is needed.
Zookeeper. The installation of Zookeeper is straightforward: again, fire up the remaining 2 nodes (they will form your ensemble). Once everything is installed you are ready to go on with the configuration. Please note that the number from the server.1/server.2 entries in zoo.cfg must be put in a myid file living inside the directory configured as the Zookeeper data directory, which keeps the replicated state of the cluster and the respective applications. In this example we don't deal with security concerns on the Zookeeper ensemble; if you go to production you should secure it. For Marathon, simply unzip the compressed Marathon folder on the master nodes and fire it up.
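A minimal sketch of such a configuration, assuming the two Zookeeper VMs are reachable as zk1 and zk2 (placeholder hostnames) and the data directory is /var/lib/zookeeper:

```
# zoo.cfg -- minimal two-node ensemble (zk1/zk2 are placeholder hostnames)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1:2888:3888
server.2=zk2:2888:3888
```

Then on the first node run `echo 1 > /var/lib/zookeeper/myid` and on the second `echo 2 > /var/lib/zookeeper/myid`, matching the server.N entries. Keep in mind that a two-node ensemble cannot survive the loss of either node; three nodes is the usual production minimum.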
To start the two Mesos masters, cd to $MESOS_HOME on each and issue the following command (substitute your Zookeeper hosts and a cluster name of your choice):
>>./bin/mesos-master.sh --work_dir=/where/mesos/installed --zk=zk://zk1:2181,zk2:2181/app/mesos --quorum=1 --cluster=mycluster
If you run the above command on both Mesos master nodes, Zookeeper will run the leader election algorithm and decide who the active master is; that information will be served to the slaves when they ask who the active master of the cluster is. As a test case you can bring the active master down and watch the logs of the slaves: you will see the connection fail, but then Zookeeper will run the election algorithm again and automatically notify the slaves of the newly elected active master.
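One quick way to see who is currently the leader (master1 below is a placeholder for one of your master VMs) is the master's state endpoint:

```shell
# Query the Mesos master state endpoint; the "leader" field names the
# currently elected master. master1 is a placeholder hostname.
curl -s http://master1:5050/master/state.json | python -m json.tool | grep leader
```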
Select two of the machines where you compiled Mesos; they will act as your Mesos master nodes. A Mesos master node sends payloads to the slave nodes to execute. In our case we will be feeding commands to the master, and it will dispatch the creation of our application containers to the slaves. The Docker instances will live on the slave nodes (3 in number). So fire up a terminal on each of the slave images and issue the following commands:
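A sketch of starting a slave with Docker support, run from $MESOS_HOME on each of the 3 slave VMs (zk1/zk2 are placeholders for your Zookeeper hosts):

```shell
# Start a Mesos slave that can launch Docker containers.
# --containerizers enables the Docker containerizer next to the default one;
# the longer registration timeout gives slow image pulls room to finish.
sudo ./bin/mesos-slave.sh \
  --master=zk://zk1:2181,zk2:2181/app/mesos \
  --containerizers=docker,mesos \
  --executor_registration_timeout=5mins
```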
Once you have done that, go to the console of the master Mesos node: on port 5050 you will see that the 3 new slaves are registered and have offered their resources to the pool of the master node. At the same IP you can access the Marathon interface by changing to port 8080. To test your configuration, look inside the unzipped Marathon folder: under the examples folder you will find some preconfigured JSON payloads which can start some containers and do something. Feel free to experiment and enjoy.
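For reference, a minimal sketch of what such a payload looks like for a Docker app, posted to Marathon's /v2/apps endpoint (the app id, image and master1 hostname are illustrative; check Marathon's bundled examples for the authoritative format):

```shell
# POST a minimal Docker application to Marathon (listening on port 8080).
# master1, the app id and the image are placeholders for this sketch.
curl -X POST http://master1:8080/v2/apps \
  -H "Content-Type: application/json" \
  -d '{
    "id": "hello-docker",
    "cmd": "while true; do echo hello; sleep 5; done",
    "cpus": 0.25,
    "mem": 128,
    "instances": 1,
    "container": {
      "type": "DOCKER",
      "docker": { "image": "ubuntu:14.04" }
    }
  }'
```

Marathon will then ask the active Mesos master to dispatch the container to one of the slaves, which is exactly the flow described above.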
The above installation can be simplified if you use the packages from the Mesosphere repository. But then you have to read a bit of their documentation on where they put the configuration files. Most of the time they also create the necessary services, and the important files live under:
/etc/default/mesos-master :: configuration for the master only to be read by mesos-master service
/etc/default/mesos-slave :: configuration for the slave only to be read by the mesos-slave service.
/etc/default/mesos :: name=value configuration which is read by both services. Equivalent to common switches for both master and slave nodes.
/etc/default/zk :: From where mesos will read the -zk= value.
/etc/zookeeper:: The configuration of zookeeper.