Deploying a Simple Ceph Cluster to a Baserock OpenStack Server

These instructions will guide you through deploying a simple three-node Ceph cluster to a Baserock OpenStack server. The Ceph cluster is for demonstration purposes and is small and simple: it consists of one monitor (MON) node and two object storage daemon (OSD) nodes.

Prerequisites

To follow this walkthrough you will require:

  • A Baserock development system
  • A Baserock OpenStack server deployed to bare metal

This work was tested on a Lenovo H520s 2561 with 8GB RAM and a second HDD to use as Cinder storage, but should work on hardware with similar specs. The recommended minimum hardware is:

  • quad-core processor
  • 8GB RAM
  • >= 30GB boot disk
  • >= 20GB storage disk for Cinder

Baserock Devel System (BDS)

If you already have a BDS, move on to the next section.

If you don't have one, you can set one up by following these walkthroughs:

  1. Create a Baserock VM - This should take you through downloading a build system and setting it up as a virtual machine.
  2. Quick Start - This page should take you through the steps of using your Baserock system and upgrading from a basic build system to a development system.

Once you have a development machine up and running, you can use it to build and deploy a Baserock OpenStack server system, and your Ceph cluster too.

Baserock OpenStack Server System (BOSS)

If you already have a BOSS set up, you can move on to the next section. However, it is advised that you set up the demo tenant as described below, as this walkthrough uses that tenancy as a guide.

If you do not already have a Baserock OpenStack system, you can check out the master set of definitions and build systems/openstack-system-x86_64.morph.

You can then follow the deployment instructions to deploy the system to a single node. NB: you should update the single-node deployment to set the parameter NOVA_VIRT_TYPE: kvm. You will also need to run sed -i '/template/d' ./openstack/manifest before deploying the system.

When this image has deployed, you need to transfer the Baserock image to the boot disk of your hardware. I've done this by dd'ing the image to an HDD via an HDD caddy, but if you know a simpler way, do that instead!
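
As a rough sketch, assuming the deployed raw image is called openstack.img and the boot disk shows up as /dev/sdX in the caddy (both names are placeholders for your own), the copy might look like this:

# WARNING: this overwrites the target disk completely; double-check the device
# name first. openstack.img and /dev/sdX are example names only.
dd if=openstack.img of=/dev/sdX bs=4M
sync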

After booting your OpenStack system, you should follow the "Post-deployment networking configuration", "Authorisation environment variable setup", and "Physical network configuration" instructions given in the deployment instructions linked above.

Next, you should set up the demo tenancy as described here, stopping after the section entitled "Routing a private network to the external network".

Building the Ceph service

On your BDS, clone the set of definitions we've prepared for a Ceph cluster deployment to OpenStack:

git clone https://github.com/padrigali/definitions.git --branch ceph-master

Move into the root directory of the checkout and run a system build:

cd definitions/   
morph build systems/ceph-service-x86_64-generic.morph

While the build is running, there should be plenty of time to ready your BOSS.

Prep your OpenStack server

On your BOSS, the following preparations should be made for the arrival of the Ceph cluster:

  • Increase the BOSS image size
  • Set an IP for the monitor node
  • Create volumes for the Ceph object storage nodes

To allow interaction with the demo tenancy, source the demo credentials:

. demorc

Increase BOSS size

The image size set in the cluster morphology for the BOSS is too small to host the images we need to deploy to it. To increase the size to accommodate these images, run the following command:

btrfs filesystem resize max /
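
You can check that the filesystem grew as expected, for example with:

# The root filesystem should now report the full size of the disk
df -h /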

Reserve a monitor IP

In order for the monitor to configure itself on first boot, we need to define its IP at deploy time.

To see the IPs already in use in the project, you can check the output of neutron port-list and create a port with an unused IP, for example as shown below. (If you are using the BOSS over ssh, you may need to run export LC_ALL=C for the neutron commands to work.) If the openstack-server is fresh, there should be plenty of choice.
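
For example, the following lists the ports already allocated and shows the demo subnet's allocation range, so you can pick a free address:

# Run export LC_ALL=C first if the neutron client complains about the locale
neutron port-list
neutron subnet-show demo-subnet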

To create a port with the IP 192.168.1.5 on demo-net, run the following command:

neutron port-create demo-net --fixed-ip \
    subnet_id=`neutron subnet-list | awk '/demo-subnet/ {print $2}'`,ip_address=192.168.1.5

Now we have a port that we can associate with the Ceph monitor node; we will refer to it later as the "MON IP". We don't need predetermined IPs for the OSD nodes; we can let OpenStack assign these according to its whims.

Create OSD storage

We need some virtual disks to attach to the object storage nodes to be used for storage. To create them, run these commands:

nova volume-create 10 --display-name osd-storage-0
nova volume-create 10 --display-name osd-storage-1
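
Before moving on, you can check that both volumes were created successfully:

# Both osd-storage volumes should be listed with status 'available'
nova volume-list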

Preparing and Deploying the Cluster

If your BDS is still building the Ceph system, patiently wait for it to complete; this time is your own. If the build has completed, you can move on to the next step.

To deploy a Ceph cluster to the Baserock OpenStack server you will need the following three files:

  • A cluster morphology for the deployment
  • A configuration file for the Ceph cluster
  • An administration keyring for the Ceph cluster

Templates have been provided for these files.

Cluster morphology

The template file is clusters/ceph-cluster-deploy-openstack.morph. To ready it for use, you just need to replace the string BOSS.IP.ADDRESS.HERE with the IP address associated with your BOSS.
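
For example, a quick way to make the substitution (10.0.0.2 here is just a stand-in for your BOSS's real IP) is:

# Substitute your BOSS's IP address for the example value 10.0.0.2
sed -i 's/BOSS.IP.ADDRESS.HERE/10.0.0.2/' clusters/ceph-cluster-deploy-openstack.morph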

Ceph configuration file

The Ceph configuration file is located in the root of the definitions checkout, at ceph.conf. Two edits need to be made (a sketch of both follows the list):

  • Replace the string REPLACE_WITH_UUID with the output of the uuidgen command
  • Replace the string REPLACE.WITH.MON.IP with the MON IP that was reserved above
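
A minimal sketch of both edits, done with sed from the root of the definitions checkout (substitute your own address if you reserved something other than 192.168.1.5):

# Fill in a freshly generated cluster UUID and the reserved MON IP
sed -i "s/REPLACE_WITH_UUID/$(uuidgen)/" ceph.conf
sed -i "s/REPLACE.WITH.MON.IP/192.168.1.5/" ceph.conf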

Ceph admin keyring file

The client admin keyring template is included as ceph.client.admin.keyring. For the demo system it needn't be altered in any way, but it is highly recommended that you generate your own admin keyring for use in production systems.

This can be done on any machine with Ceph installed, using the command:

ceph-authtool --create-keyring /tmp/ceph.client.admin.keyring \
    --gen-key -n client.admin --set-uid=0 --cap mon 'allow *' \
    --cap osd 'allow *' --cap mds 'allow'

Deploying the system

Before deploying, you must first add the following line to your BDS's /etc/hosts:

BOSS.IP.ADDRESS.HERE BOSS_hostname

The system can now be deployed using the command:

morph deploy clusters/ceph-cluster-deploy-openstack.morph

This may take some time.

Instantiating the Ceph cluster

Switch to your BOSS and source the demorc credentials so that the CLI clients can be used to operate the OpenStack server:

. demorc

Add the monitor node

You can add the monitor node with the following command, replacing MON_IP with the MON IP you reserved above (192.168.1.5 in this example):

nova boot mon-0-vm --flavor m1.small \
    --image `nova image-list | awk '/mon-0/ {print $2}'` \
    --security-groups default \
    --nic port-id=`neutron port-list | awk '/MON_IP/ {print $2}'`
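
Optionally, you can confirm from the command line that the instance spawns correctly before switching to Horizon:

# mon-0-vm should reach the ACTIVE state once it has finished spawning
nova list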

You can check it's working by logging into the Horizon web interface. Just point your browser at the BOSS IP and log in with the demo credentials.

Switching to the console tab, you can log into the Baserock system as the root user.

Running the command ceph -sw will watch the status of the Ceph cluster. The health of the cluster may show a warning, but this will change when the OSDs are added. Leave this open; we will come back to it.

Add the OSD nodes

You can add the OSD nodes with the following commands:

nova boot osd-0-vm --flavor m1.small \
    --image `nova image-list|awk '/osd-0/ {print $2}'` \
    --block-device source=volume,id=`nova volume-list|awk '/osd-storage-0/ {print $2}'`,dest=volume,shutdown=preserve \
    --nic net-id=`nova net-list|awk '/demo-net/ {print $2}'`

and

nova boot osd-1-vm --flavor m1.small \
    --image `nova image-list|awk '/osd-1/ {print $2}'` \
    --block-device source=volume,id=`nova volume-list|awk '/osd-storage-1/ {print $2}'`,dest=volume,shutdown=preserve \
    --nic net-id=`nova net-list|awk '/demo-net/ {print $2}'`

Now switch back to your browser window showing the console of the monitor. Once the OSD instances have finished spawning, you should see some activity as the PGs start to distribute to the OSDs. When this commotion has abated, you can cancel the watch command and run ceph -s; the health of the cluster should now be OK.

We have a ceph cluster!

Next steps

For more information on Ceph and how to use your Ceph cluster, see the Ceph documentation.

For more fun things to do with Baserock, see the Baserock guides.