
Reading notes on Ceph Cookbook – Second Edition: Working with Ceph and OpenStack

Working with Ceph and OpenStack

In this chapter, we will cover the following recipes:

  • Ceph – the best match for OpenStack
  • Setting up OpenStack
  • Configuring OpenStack as Ceph clients
  • Configuring Glance for Ceph backend
  • Configuring Cinder for Ceph backend
  • Configuring Nova to boot instances from Ceph RBD
  • Configuring Nova to attach Ceph RBD

Introduction

OpenStack is an open source software platform for building and managing public and private cloud infrastructure. It is governed by an independent, non-profit foundation known as The OpenStack Foundation. It has one of the largest and most active communities, backed by technology giants such as HP, Red Hat, Dell EMC, Cisco, IBM, and Rackspace.

OpenStack's idea of a cloud is that it should be simple to implement and massively scalable.

OpenStack is considered a cloud operating system that allows users to instantly deploy hundreds of virtual machines in an automated way. It also provides an efficient, hassle-free way of managing those machines. OpenStack is known for its dynamic scale-up, scale-out, and distributed architecture capabilities, which make your cloud environment robust and future-ready. OpenStack provides an enterprise-class Infrastructure-as-a-Service (IaaS) platform for all your cloud needs. As shown in the following diagram, OpenStack is made up of several different software components that work together to deliver cloud services:

[Figure: OpenStack software components]

Out of all these components, in this chapter we will focus on Cinder and Glance, which provide the block storage and image services respectively. For more information on OpenStack components, please visit http://www.openstack.org/.

Ceph – the best match for OpenStack

OpenStack adoption continues to grow at an amazing pace, and it is extremely popular because it is based on software-defined everything, whether that is compute, networking, or storage. When you talk about storage for OpenStack, Ceph gets all the attention. An OpenStack user survey conducted in April 2017 showed that Ceph dominates the block storage driver market with a whopping 65% usage, 41% more than the next storage driver (source: https://www.openstack.org/assets/survey/April2017SurveyReport.pdf).

Ceph provides the robust, reliable storage backend that OpenStack has been looking for. Its seamless integration with OpenStack components such as Cinder, Glance, Nova, and Keystone provides an all-in-one cloud storage backend for OpenStack. Here are some key benefits that make Ceph the best match for OpenStack:

  • Ceph provides an enterprise-grade, feature-rich storage backend at a very low cost per gigabyte, which helps keep OpenStack cloud deployment prices down

  • Ceph is a unified storage solution for block, file, and object storage for OpenStack, allowing applications to use storage as they need it

  • Ceph provides advanced block storage capabilities for OpenStack clouds, including easy and quick spawning of instances, as well as backup and cloning of VMs

  • It provides default persistent volumes for OpenStack instances that can work like traditional servers, where data is not flushed when a VM is rebooted

  • Ceph supports host independence for OpenStack by supporting VM migration and the scaling of storage components without affecting the VMs

  • It provides the snapshot feature for OpenStack volumes, which can also be used as a means of backup

  • Ceph's copy-on-write cloning feature lets OpenStack spin up several instances at once, which helps the provisioning mechanism work faster

  • Ceph supports rich APIs for both Swift and S3 object storage interfaces

The Ceph and OpenStack communities continue to work to make the integration more seamless and to take advantage of new features as they emerge.

OpenStack is a modular system that has unique components for specific sets of tasks. Several of these components require a reliable storage backend, such as Ceph, and integrate fully with it, as shown in the following diagram:

[Figure: OpenStack components integrating with Ceph]

Each of these components uses Ceph in its own way to store block devices and objects. The majority of cloud deployments based on OpenStack and Ceph use the Cinder, Glance, and Swift integration with Ceph. Keystone integration is used when you need S3-compatible object storage on the Ceph backend. Nova integration allows boot-from-Ceph-volume capabilities for your OpenStack instances.

Setting up OpenStack

OpenStack setup and configuration is beyond the scope of this book; however, for ease of demonstration, we will use a virtual machine pre-installed with the OpenStack RDO Juno release. If you like, you can also use your own OpenStack environment and perform the Ceph integration there.

How to do it...

In this recipe, we will demonstrate setting up a preconfigured OpenStack environment using Vagrant and accessing it over the CLI and GUI:

  1. Launch openstack-node1 using vagrantfile as we did for Ceph nodes in the last chapter. Make sure that you are on the host machine and are under the Ceph-Cookbook-Second-Edition repository before bringing up openstack-node1 using Vagrant:
        # cd Ceph-Cookbook-Second-Edition
        # vagrant up openstack-node1
  2. Log in to the OpenStack node with the following vagrant command:
        # vagrant status openstack-node1
        $ vagrant ssh openstack-node1
  3. We assume that you have some knowledge of OpenStack and are aware of its operations. We will source the keystonerc_admin file, which has been placed under /root, and to do this, we need to switch to root:
        $ sudo su -
        $ source keystonerc_admin

We will now run some native OpenStack commands to make sure that OpenStack is set up correctly. Note that some of these commands do not show any information, since this is a fresh OpenStack environment and no instances or volumes have been created:

      # nova list
      # cinder list
      # glance image-list
  4. You can also log into the OpenStack horizon web interface (https://192.168.1.111/dashboard) using the username admin and the password vagrant.
  5. After logging in, the Overview page opens:
OpenStack Dashboard Overview

Configuring OpenStack as Ceph clients

The OpenStack nodes should be configured as Ceph clients in order to access the Ceph cluster. To do this, install the Ceph packages on the OpenStack node and make sure it can access the Ceph cluster.

How to do it...

In this recipe, we will configure OpenStack as a Ceph client, which will later be used to configure Cinder, Glance, and Nova:

  1. Install ceph-common, the Ceph client-side package, on the OpenStack node and then copy ceph.conf from ceph-node1 to the OpenStack node, os-node1.
  2. Create an SSH tunnel between the monitor node ceph-node1 and OpenStack os-node1:
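A minimal sketch of what this passwordless SSH setup from ceph-node1 might look like (the vagrant user is an assumption here; adjust the user and hostname to your environment):

        # ssh-keygen    # accept the defaults
        # ssh-copy-id vagrant@os-node1    # assumes the default vagrant user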
  3. Copy the Ceph repository file from ceph-node1 to os-node1:
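A sketch of the copy, reusing the ssh/sudo tee pattern used later in this recipe; the repository file path /etc/yum.repos.d/ceph.repo is an assumption and may differ in your setup:

        # cat /etc/yum.repos.d/ceph.repo | ssh os-node1 sudo tee /etc/yum.repos.d/ceph.repo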
  4. Install the ceph-common package on os-node1:
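A likely form of the installation command, run as root on os-node1 (or over SSH with sudo):

        # yum install -y ceph-common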
  5. Once it completes, yum reports a completion message confirming that the package was installed.
  6. Copy ceph.conf from ceph-node1 to os-node1:
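A sketch of the copy, again using the ssh/sudo tee pattern from this recipe:

        # cat /etc/ceph/ceph.conf | ssh os-node1 sudo tee /etc/ceph/ceph.conf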
  7. Create Ceph pools for Cinder, Glance, and Nova from monitor node ceph-node1. You may use any available pool, but it's recommended that you create separate pools for the OpenStack components:
         # ceph osd pool create images 128
         # ceph osd pool create volumes 128
         # ceph osd pool create vms 128
We have used 128 as the PG number for these three pools. For the PG calculation for your own pools, you can use the Ceph PGcalc tool at http://ceph.com/pgcalc/.
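As a rough rule of thumb (the same logic PGcalc automates), the total PG count for a cluster is approximately (number of OSDs × 100) / replica count, split across pools in proportion to the data each pool is expected to hold and rounded to the nearest power of two.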
  8. Set up client authentication by creating a new user for Cinder and Glance:
        # ceph auth get-or-create client.cinder mon 'allow r' osd \
          'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
        # ceph auth get-or-create client.glance mon 'allow r' osd \
          'allow class-read object_prefix rbd_children, allow rwx pool=images'
  9. Add the keyrings to os-node1 and change their ownership:
        # ceph auth get-or-create client.glance | \
          ssh os-node1 sudo tee /etc/ceph/ceph.client.glance.keyring
        # ssh os-node1 sudo chown glance:glance \
          /etc/ceph/ceph.client.glance.keyring
        # ceph auth get-or-create client.cinder | \
          ssh os-node1 sudo tee /etc/ceph/ceph.client.cinder.keyring
        # ssh os-node1 sudo chown cinder:cinder \
          /etc/ceph/ceph.client.cinder.keyring
  10. The libvirt process requires access to the Ceph cluster while attaching or detaching a block device from Cinder. We should create a temporary copy of the client.cinder key that will be needed for the Cinder and Nova configuration later in this chapter:
        # ceph auth get-key client.cinder | 
          ssh os-node1 tee /etc/ceph/temp.client.cinder.key
  11. At this point, you can test the previous configuration by accessing the Ceph cluster from os-node1 using the client.glance and client.cinder Ceph users.
    Log in to os-node1 and run the following commands:
        $ vagrant ssh openstack-node1
        $ sudo su -
        # ceph -s --id glance 
        # ceph -s --id cinder 
  12. Finally, generate a UUID, then create, define, and set the secret key for libvirt and remove the temporary keys:
    1. Generate a UUID by using the following command:
                # cd /etc/ceph
                # uuidgen
    2. Create a secret file and set this UUID number to it:
                cat > secret.xml <<EOF
                <secret ephemeral='no' private='no'>
                  <uuid>e279566e-bc97-46d0-bd90-68080a2a0ad8</uuid>
                  <usage type='ceph'>
                    <name>client.cinder secret</name>
                  </usage>
                </secret>
                EOF

Make sure to use your own UUID generated for your environment.

    3. Define the secret and keep the generated secret value safe. We will require this secret value in the next steps:
                # virsh secret-define --file secret.xml
    4. Set the secret value that was generated in the last step to virsh and delete the temporary files. Deleting the temporary files is optional; it's done just to keep the system clean:
                # virsh secret-set-value \
                  --secret e279566e-bc97-46d0-bd90-68080a2a0ad8 \
                  --base64 $(cat temp.client.cinder.key) && \
                  rm temp.client.cinder.key secret.xml
                # virsh secret-list

Configuring Glance for Ceph backend

We have completed the configuration required on the Ceph side. In this recipe, we will configure OpenStack Glance to use Ceph as a storage backend.

How to do it…

This recipe discusses configuring the Glance component of OpenStack to store virtual machine images on Ceph RBD:

  1. Log in to os-node1, which is our Glance node, and edit /etc/glance/glance-api.conf for the following changes:
    1. Under the [DEFAULT] section, make sure that the following lines are present:
                default_store=rbd
                show_image_direct_url=True
    2. Execute the following command to verify the entries:
                # cat /etc/glance/glance-api.conf | 
                egrep -i "default_store|image_direct"
    3. Under the [glance_store] section, make sure that the following lines are present under RBD store options:
              stores = rbd
              rbd_store_ceph_conf=/etc/ceph/ceph.conf
              rbd_store_user=glance
              rbd_store_pool=images
              rbd_store_chunk_size=8
    4. Execute the following command to verify the previous entries:
              # cat /etc/glance/glance-api.conf | 
                egrep -v "#|default" | grep -i rbd
  2. Restart the OpenStack Glance services:
        # service openstack-glance-api restart
  3. Source the keystonerc_admin file for OpenStack and list the Glance images:
        # source /root/keystonerc_admin
        # glance image-list
  4. Download the cirros image from the internet, which will later be stored in Ceph:
      # wget http://download.cirros-cloud.net/0.3.1/cirros-0.3.1-x86_64-disk.img
  5. Add a new Glance image using the following command:
        # glance image-create --name cirros_image --is-public=true \
          --disk-format=qcow2 --container-format=bare \
          < cirros-0.3.1-x86_64-disk.img
  6. List the Glance images using the following command; you will notice there are now two Glance images:
        # glance image-list
  7. You can verify that the new image is stored in Ceph by querying the image ID in the Ceph images pool:
        # rbd -p images ls --id glance
        # rbd info images/<image name> --id glance
  8. Since we have configured Glance to use Ceph for its default storage, all the Glance images will now be stored in Ceph. You can also try creating images from the OpenStack horizon dashboard.

  9. Finally, we will try to launch an instance using the image that we have created earlier:
        # nova boot --flavor 1 --image b1c39f06-5330-4b04-ae0f-0b1d5e901e5b vm1
  10. You can check with the Nova list command:
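For reference, the command is:

        # nova list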
When you add new Glance images or create instances from Glance images stored on Ceph, you can watch the activity by monitoring the IO on the Ceph cluster with the # watch ceph -s command.

Configuring Cinder for Ceph backend

The Cinder program of OpenStack provides block storage to virtual machines. In this recipe, we will configure OpenStack Cinder to use Ceph as a storage backend. OpenStack Cinder requires a driver to interact with the Ceph block device. On the OpenStack node, edit the /etc/cinder/cinder.conf configuration file by adding the code snippets given in the following section.

How to do it...

In the last recipe, we learned how to configure Glance to use Ceph. In this recipe, we will learn how to use Ceph RBD with the Cinder service of OpenStack:

  1. Since in this demonstration we are not using a multi-backend Cinder configuration, comment out the enabled_backends option in the /etc/cinder/cinder.conf file.
  2. Navigate to the options defined in the cinder.volume.drivers.rbd section of the /etc/cinder/cinder.conf file and add the following (replace the secret UUID with your environment's value):
        volume_driver = cinder.volume.drivers.rbd.RBDDriver
        rbd_pool = volumes
        rbd_user = cinder
        rbd_secret_uuid = e279566e-bc97-46d0-bd90-68080a2a0ad8
        rbd_ceph_conf = /etc/ceph/ceph.conf
        rbd_flatten_volume_from_snapshot = false
        rbd_max_clone_depth = 5
        rbd_store_chunk_size = 4
        rados_connect_timeout = -1
        glance_api_version = 2
  3. Execute the following command to verify the previous entries:
        # cat /etc/cinder/cinder.conf | egrep "rbd|rados|version" | 
          grep -v "#"
  4. Comment out the enabled_backends=lvm option in /etc/cinder/cinder.conf:
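A minimal sketch of the resulting line in /etc/cinder/cinder.conf (the lvm value is the RDO default and is an assumption here):

        #enabled_backends=lvm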
  5. Restart the OpenStack Cinder services:
        # service openstack-cinder-volume restart
  6. Source the keystonerc_admin file for OpenStack:
        # source /root/keystonerc_admin
        # cinder list
  7. To test this configuration, create your first Cinder volume of 2 GB, which should now be created on your Ceph cluster:
        # cinder create --display-name ceph-volume01 \
          --display-description "Cinder volume on CEPH storage" 2
  8. Check the volume by listing the Cinder volumes and the Ceph volumes pool:
        # cinder list
        # rbd -p volumes ls --id cinder
        # rbd info volumes/volume-7443e2b6-0674-4950-9371-49094a1702a7 \
          --id cinder
  9. Similarly, try creating another volume using the OpenStack horizon dashboard.

Configuring Nova to boot instances from Ceph RBD

In order to boot all OpenStack instances into Ceph, that is, for the boot-from-volume feature, we should configure an ephemeral backend for Nova. To do this, edit /etc/nova/nova.conf on the OpenStack node and make the following changes.

How to do it…

This recipe deals with configuring Nova to store entire virtual machines on Ceph RBD:

  1. Navigate to the [libvirt] section and add the following:
        inject_partition=-2
        images_type=rbd
        images_rbd_pool=vms
        images_rbd_ceph_conf=/etc/ceph/ceph.conf
        rbd_user=cinder
        rbd_secret_uuid= e279566e-bc97-46d0-bd90-68080a2a0ad8
  2. Verify your changes:
        # cat /etc/nova/nova.conf|egrep "rbd|partition" | grep -v "#"
  3. Restart the OpenStack Nova services:
        # service openstack-nova-compute restart
  4. To boot a virtual machine in Ceph, the Glance image format must be RAW. We will use the same cirros image that we downloaded earlier in this chapter and convert this image from the QCOW to the RAW format (this is important). You can also use any other image, as long as it's in the RAW format:
        # qemu-img convert -f qcow2 -O raw cirros-0.3.1-x86_64-disk.img cirros-0.3.1-x86_64-disk.raw
  5. Create a Glance image using a RAW image:
        # glance image-create --name cirros_raw_image \
          --is-public=true --disk-format=raw \
          --container-format=bare < cirros-0.3.1-x86_64-disk.raw
  6. To test the boot from the Ceph volume feature, create a bootable volume:
        # nova image-list
        # cinder create --image-id 78e1fd35-aa65-447b-954e-1b072e9a17ec \
          --display-name cirros-ceph-boot-volume 1
  7. List the Cinder volumes to check if the bootable field is true:
        # cinder list
  8. Now, we have a bootable volume, which is stored on Ceph, so let's launch an instance with this volume:
    1. We have a known issue with the qemu-kvm package which causes nova boot to fail:
                Log - "libvirtError: internal error: process exited
                while connecting to monitor: ... Unknown protocol"
    2. We have the following packages installed in the os-node1 VM which have this issue:
                qemu-kvm-1.5.3-60.el7_0.11.x86_64
                qemu-kvm-common-1.5.3-60.el7_0.11.x86_64
                qemu-img-1.5.3-60.el7_0.11.x86_64
    3. Please upgrade the qemu-kvm, qemu-kvm-common, and qemu-img packages:
                $ yum update qemu-kvm qemu-img -y
    4. It will install the following packages:
                qemu-kvm-common-1.5.3-141.el7_4.2.x86_64
                qemu-kvm-1.5.3-141.el7_4.2.x86_64
                qemu-img-1.5.3-141.el7_4.2.x86_64


                # nova boot --flavor 1 \
                  --block_device_mapping vda=d3a3eb50-6b3a-4a93-b90c-b17d01e10b64::0 \
                  --image 78e1fd35-aa65-447b-954e-1b072e9a17ec vm2_on_ceph

Here, --block_device_mapping vda takes the Cinder bootable volume ID, and --image takes the Glance image associated with the bootable volume.
  9. Finally, check the instance status:
        # nova list
  10. At this point, we have an instance running from a Ceph volume. Let's do a boot from image:
        # nova boot --flavor 1 \
          --image 78e1fd35-aa65-447b-954e-1b072e9a17ec \
          vm1_on_ceph
  11. Finally, check the instance status:
        # nova list
  12. Check if the instance is stored in Ceph:
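A likely way to check this, reusing the vms pool and the client.cinder user configured earlier in this chapter (the image names listed will differ per environment):

        # rbd -p vms ls --id cinder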

Configuring Nova to attach Ceph RBD

In order to attach Ceph RBD to OpenStack instances, we should configure the Nova component of OpenStack by adding the rbd user and UUID information that it needs to connect to the Ceph cluster. To do this, we need to edit /etc/nova/nova.conf on the OpenStack node and perform the steps given in the next section.

How to do it...

The Cinder service that we configured in the last recipe creates volumes on Ceph; however, to attach these volumes to OpenStack instances, we need to configure Nova:

  1. We have already configured the following options to enable volume attachment:
        rbd_user=cinder
        rbd_secret_uuid= e279566e-bc97-46d0-bd90-68080a2a0ad8
  2. To test this configuration, we will attach the Cinder volume to an OpenStack instance. List the instances and volumes to get the IDs:
        # nova list
        # cinder list
  3. Attach the volume to the instance:
        # nova volume-attach 31000c20-5847-48eb-b2e3-6b681f5df46c 21bb3f31-6f9e-4d26-9afd-5eec3f450034
        # cinder list