
Configuring OpenStack Havana Cinder, Nova and Glance to run on GlusterFS

Configuring Glance, Cinder, and Nova for OpenStack Havana to run on GlusterFS is actually quite simple, assuming that you’ve already got GlusterFS up and running.

So let’s first look at my Gluster configuration. As you can see below, I have a Gluster volume defined for each of Cinder, Glance, and Nova.

[root@g1 ~(keystone_admin)]# gluster volume info
Volume Name: cinder
Type: Replicate
Volume ID: d71d0ab7-2c99-41c5-8495-fd68d1571f31
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: g1.gluster.local:/mnt/glusterfs/cinder
Brick2: g2.gluster.local:/mnt/glusterfs/cinder
Brick3: g3.gluster.local:/mnt/glusterfs/cinder
Options Reconfigured:
auth.allow: 127.0.0.1

Volume Name: nova
Type: Replicate
Volume ID: 546ba71c-1de9-4e48-8c8b-2ab30ea04a58
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: g1.gluster.local:/mnt/glusterfs/nova
Brick2: g2.gluster.local:/mnt/glusterfs/nova
Brick3: g3.gluster.local:/mnt/glusterfs/nova
Options Reconfigured:
auth.allow: 127.0.0.1

Volume Name: glance
Type: Replicate
Volume ID: f3e3579c-c229-4a5e-a3f1-24c06dd15350
Status: Started
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: g1.gluster.local:/mnt/glusterfs/glance
Brick2: g2.gluster.local:/mnt/glusterfs/glance
Brick3: g3.gluster.local:/mnt/glusterfs/glance
Options Reconfigured:
auth.allow: 127.0.0.1
[root@g1 ~(keystone_admin)]#
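For reference, a replicated volume like the ones above can be created along these lines. This is a sketch, assuming the three nodes have already been probed into the trusted pool and the brick paths match my layout:

```shell
# Create, restrict, and start a 3-way replicated volume.
# Repeat for the nova and glance volumes with their own brick paths.
gluster volume create cinder replica 3 \
  g1.gluster.local:/mnt/glusterfs/cinder \
  g2.gluster.local:/mnt/glusterfs/cinder \
  g3.gluster.local:/mnt/glusterfs/cinder
gluster volume set cinder auth.allow 127.0.0.1
gluster volume start cinder
```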

And I have each of these volumes mounted via entries in /etc/fstab.

#
# /etc/fstab
# Created by anaconda on Tue Jan 7 17:00:39 2014
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/VolGroup-lv_root / ext4 defaults 1 1
UUID=457eb5bb-79a7-417b-aaf9-ddc49fcec33d /boot ext4 defaults 1 2
/dev/mapper/VolGroup-lv_home /home ext4 defaults 1 2
/dev/mapper/VolGroup-lv_swap swap swap defaults 0 0
tmpfs /dev/shm tmpfs defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
sysfs /sys sysfs defaults 0 0
proc /proc proc defaults 0 0
172.16.56.42:/srv/nfs/files /mnt/nfs nfs defaults 0 0
/dev/sdb1 /mnt/glusterfs xfs defaults 0 0
127.0.0.1:/cinder /mnt/cinder glusterfs defaults,_netdev 0 0
127.0.0.1:/nova /mnt/nova glusterfs defaults,_netdev 0 0
127.0.0.1:/glance /mnt/glance glusterfs defaults,_netdev 0 0
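With those fstab entries in place, the mount points just need to exist before mounting (paths as above):

```shell
# Create the mount points, then mount everything listed in fstab.
mkdir -p /mnt/cinder /mnt/nova /mnt/glance
mount -a
```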

Now that I have OpenStack Cinder, Nova, and Glance installed, I can configure them to use my Gluster mounts.

Modify /etc/cinder/cinder.conf to reflect the Gluster configuration

#
# Options defined in cinder.volume.drivers.glusterfs
#

# File with the list of available gluster shares (string
# value)
#glusterfs_shares_config=/etc/cinder/glusterfs_shares
glusterfs_shares_config=/etc/cinder/shares.conf

# Use du or df for free space calculation (string value)
glusterfs_disk_util=df

# Create volumes as sparsed files which take no space. If set
# to False volume is created as regular file. In such case
# volume creation takes a lot of time. (boolean value)
glusterfs_sparsed_volumes=true

# Create volumes as QCOW2 files rather than raw files.
# (boolean value)
#glusterfs_qcow2_volumes=false

# Base dir containing mount points for gluster shares. (string
# value)
glusterfs_mount_point_base=$state_path/mnt
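One line the excerpt above doesn’t show: depending on how Cinder was installed, you may also need to select the GlusterFS driver explicitly in cinder.conf. The driver path below is the Havana location and is worth double-checking against your install:

```ini
# Assumed driver path for Havana; verify against your packaging.
volume_driver=cinder.volume.drivers.glusterfs.GlusterfsDriver
```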

Also make sure that /etc/cinder/shares.conf has the Gluster share listed

127.0.0.1:/cinder
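Cinder won’t pick up the new driver settings until its services are restarted. The service names here assume the same RDO-style packaging used for Glance and Nova below:

```shell
# Restart Cinder so the GlusterFS settings take effect.
service openstack-cinder-api restart
service openstack-cinder-volume restart
```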

Then, create the images folder for Glance.

mkdir /mnt/glance/images

Then, modify the file permissions so that it’s usable.

chown -R glance:glance /mnt/glance/images

Modify the Glance configuration in /etc/glance/glance-api.conf to reflect the Gluster mount point

filesystem_store_datadir = /mnt/glance/images

Restart Glance Services

service openstack-glance-api restart

Create nova folder structure

mkdir /mnt/nova/instance

Then, modify the file permissions so that it’s usable.

chown -R nova:nova /mnt/nova/instance

Modify the Nova configuration in /etc/nova/nova.conf to reflect the Gluster mount point.

instances_path = /mnt/nova/instance

Restart nova

service openstack-nova-compute restart

Verify OpenStack services

openstack-status

Issuing the mount command, you should now see that Cinder has mounted its GlusterFS share:

[root@g1 cinder(keystone_admin)]# mount
/dev/mapper/VolGroup-lv_root on / type ext4 (rw)
proc on /proc type proc (rw)
sysfs on /sys type sysfs (rw)
devpts on /dev/pts type devpts (rw,gid=5,mode=620)
tmpfs on /dev/shm type tmpfs (rw)
/dev/sda1 on /boot type ext4 (rw)
/dev/mapper/VolGroup-lv_home on /home type ext4 (rw)
/dev/sdb1 on /mnt/glusterfs type xfs (rw)
none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw)
172.16.56.42:/srv/nfs/files on /mnt/nfs type nfs (rw,addr=172.16.56.42)
127.0.0.1:/nova on /mnt/nova type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
127.0.0.1:/glance on /mnt/glance type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)
127.0.0.1:/cinder on /var/lib/cinder/mnt/92ef2ec54fd18595ed18d8e6027a1b3d type fuse.glusterfs (rw,default_permissions,allow_other,max_read=131072)

Note the last entry: rather than using the /mnt/cinder mount from fstab, Cinder mounts its GlusterFS share itself, under /var/lib/cinder/mnt/92ef2ec54fd18595ed18d8e6027a1b3d (a directory under the glusterfs_mount_point_base path configured earlier).
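As a final check, you can create a small test volume and confirm its backing file lands on the Gluster mount. The volume name below is just an example, and the mount-point hash and volume UUID will differ on your system:

```shell
# Smoke test: a 1 GB Cinder volume backed by GlusterFS.
cinder create --display-name gluster-test 1
cinder list
ls -lh /var/lib/cinder/mnt/*/volume-*
```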
