
Reading Notes on Ceph Cookbook – Second Edition: Upgrading Your Ceph Cluster from Hammer to Jewel

Upgrading Your Ceph Cluster from Hammer to Jewel

In this chapter, we will cover the following recipe:

  • Upgrading your Ceph cluster from Hammer to Jewel

Introduction

Now that you have read Ceph Cookbook – Second Edition and worked through its recipes to become familiar with Ceph Jewel and the changes introduced in the Jewel release, it is time to upgrade your currently running production cluster from Ceph Hammer to the latest stable Jewel release! In this chapter, we will demonstrate the online rolling upgrade process, taking you from the Hammer release (0.94.10) to Jewel. We will upgrade each node in the cluster in order, moving on to the next node only after the previous one has completed. It is recommended that you update your Ceph cluster nodes in the following order:

  1. Monitor nodes.
  2. OSD nodes.
  3. MDS nodes.

Upgrading your Ceph cluster from Hammer to Jewel

In this recipe, you will upgrade your Ceph cluster from Hammer to Jewel.

How to do it...

We will upgrade CentOS 7 servers running Ceph Hammer 0.94.10 to the latest stable Jewel release, 10.2.9.

Upgrading the Ceph monitor nodes

Upgrading the Ceph monitors is a simple process that should be done one monitor at a time. At a high level, you need to enable the Jewel repositories, update the permissions on the monitor directories, and then restart the monitors to complete the upgrade. Let's look at upgrading the Ceph monitors in further detail:

  1. Update the Ceph yum repositories by editing the ceph.repo file in /etc/yum.repos.d, changing Hammer to Jewel on all of your Ceph nodes and clients:
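For reference, a Jewel-era ceph.repo on CentOS 7 typically looks something like the following sketch. The baseurl shown follows the upstream download.ceph.com layout; your mirror or distribution channel may differ:

```ini
[ceph]
name=Ceph packages for $basearch
baseurl=https://download.ceph.com/rpm-jewel/el7/$basearch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
gpgkey=https://download.ceph.com/keys/release.asc
```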
  2. As root, stop the monitor process on the first monitor you will be upgrading:
        # sudo /etc/init.d/ceph stop mon
  3. As root, once the process is stopped successfully, update the packages:
        # sudo yum update -y
  4. As root, update the owner and group permissions of the monitor directories to the Ceph user. With the Jewel release, Ceph processes are no longer managed by the root user, but instead by the Ceph user:
        # chown -R ceph:ceph /var/lib/ceph/mon
        # chown -R ceph:ceph /var/log/ceph
        # chown -R ceph:ceph /var/run/ceph
        # chown ceph:ceph /etc/ceph/ceph.client.admin.keyring
        # chown ceph:ceph /etc/ceph/ceph.conf
        # chown ceph:ceph /etc/ceph/rbdmap
  5. If using SELinux in enforcing or permissive mode, then you will need to relabel the SELinux context on the next reboot. If SELinux is disabled, this is not required:
        # touch /.autorelabel
  6. We then need to replay device events from the kernel as root:
        # udevadm trigger
  7. As root, we will need to enable the MON process:
        # systemctl enable ceph-mon.target
        # systemctl enable ceph-mon@<hostname>
  8. Finally, reboot the monitor node to complete the upgrade:
        # shutdown -r now
  9. Once the monitor node comes back online, validate the Ceph cluster status and review the newly installed Ceph version; then you can move on to the next monitor node, following the same upgrade steps:
        # ceph status
        # ceph -v
  10. You will notice a new HEALTH_WARN reporting: crush map has legacy tunables (require bobtail, min is firefly). We will resolve this message after upgrading the OSD nodes.

  11. Once all monitor nodes have completed the upgrade, let's verify that they are all running Jewel 10.2.9:
        # ceph tell mon.* version
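Taken together, the monitor steps above can be sketched as one per-node function. This is an illustrative sketch, not from the book: with DRY_RUN=1 (the default here) each command is only printed, so the sequence can be reviewed before touching a live monitor, and the hostname argument is a placeholder.

```shell
# Sketch of the monitor-node upgrade sequence from this recipe.
upgrade_mon_node() {
    local host="$1"
    run() {
        # DRY_RUN=1 (default) prints commands instead of executing them
        if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
    }
    run /etc/init.d/ceph stop mon            # stop the Hammer-era sysvinit monitor
    run yum update -y                        # install the Jewel packages
    run chown -R ceph:ceph /var/lib/ceph/mon /var/log/ceph /var/run/ceph
    run touch /.autorelabel                  # only if SELinux is enforcing/permissive
    run udevadm trigger                      # replay kernel device events
    run systemctl enable ceph-mon.target     # Jewel manages daemons via systemd
    run systemctl enable "ceph-mon@${host}"
    run shutdown -r now                      # reboot to complete the upgrade
}

upgrade_mon_node ceph-mon1
```

Setting DRY_RUN=0 and running it as root on the monitor would execute the same commands in order, but reviewing the printed sequence first is the point of the sketch.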

Upgrading the Ceph OSD nodes

Upgrading the OSD nodes is a similar process to upgrading the monitor nodes and should be done one OSD at a time. At a high level, the upgrade consists of enabling the Jewel repositories, updating the ownership of the OSD directories, and restarting the OSDs. Let's review the detailed upgrade process for the Ceph OSDs in the following steps:

  1. While upgrading the OSD nodes, some placement groups will enter a degraded state because one of the OSDs backing the PGs might be down or restarting. We will set two flags on the cluster prior to upgrading the OSD nodes, which will prevent the cluster from marking an OSD as out and triggering recovery. We will be upgrading a single OSD node at a time and moving to the next node once all PGs are active+clean:
        # ceph osd set noout
        # ceph osd set norebalance
  2. Verify that the Ceph yum repositories have been updated by viewing the ceph.repo file in /etc/yum.repos.d. We updated this from Hammer to Jewel in step 1 of the monitor upgrade.
  3. Stop any running OSD processes on the first OSD node you are updating:
        # sudo /etc/init.d/ceph stop osd.<id>
  4. Once all the OSD processes on the node are stopped, as root, update the packages:
        # yum update -y
  5. As root, update the owner and group permissions of the OSD directories to the Ceph user:
        # chown -R ceph:ceph /var/lib/ceph/osd
        # chown -R ceph:ceph /var/log/ceph
        # chown -R ceph:ceph /var/run/ceph
        # chown -R ceph:ceph /etc/ceph
  6. If using SELinux in enforcing or permissive mode, then you will need to relabel the SELinux context on the next reboot. If SELinux is disabled, this is not required:
        # touch /.autorelabel
  7. We then need to replay device events from the kernel as root:
        # udevadm trigger
  8. As root, we will need to enable the Ceph OSD process for each OSD:
        # systemctl enable ceph-osd.target
        # systemctl enable ceph-osd@<id>
  9. Finally, reboot the OSD node to complete the upgrade:
        # shutdown -r now
  10. Once the OSD node comes back online, validate the Ceph cluster status and review the newly installed Ceph version; then you can move on to the next OSD node, following the same upgrade steps:
        # ceph -s
        # ceph tell osd.* version
  11. Once you have validated that all PGs are in an active+clean state and all OSDs are up/in, we can remove the noout and norebalance flags from the cluster:
        # ceph osd unset noout
        # ceph osd unset norebalance
  12. We will want to set the require_jewel_osds flag and the sortbitwise flag: the first ensures that only OSDs running Jewel or higher can be added to the Ceph cluster, and the second enables the new internal bitwise sort order for objects. When the sortbitwise flag is set, the PGs will need to re-peer, but this should occur quickly with minimal impact:
        # ceph osd set require_jewel_osds
        # ceph osd set sortbitwise
  13. Finally, we can set our CRUSH tunables to the optimal profile on the cluster. Setting this optimal profile may incur some data movement in the cluster. Also, verify that any clients have previously been upgraded to Jewel prior to enabling optimal tunables, and, if using kernel clients, that your client supports Jewel tunables (http://docs.ceph.com/docs/master/rados/operations/crush-map/#tunables):
        # ceph osd crush tunables optimal

  14. Verify that your cluster is in HEALTH_OK and all PGs are in an active+clean state:
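As a small illustrative helper (not from the book), the final check can be scripted. The function below takes `ceph -s` output as a text argument, so its logic can be exercised without a live cluster; the list of transient PG states it greps for is an assumption and is not exhaustive.

```shell
# Returns success only when the status text shows HEALTH_OK and none of
# the listed non-clean PG states. Feed it live output with:
#   pgs_clean "$(ceph -s)"
pgs_clean() {
    local status="$1"
    echo "$status" | grep -q "HEALTH_OK" || return 1
    # any of these common transient PG states means we are not done yet
    ! echo "$status" | grep -Eq "degraded|peering|remapped|backfill|recovery"
}
```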

Congratulations, you have upgraded your Ceph cluster from Hammer to Jewel!

Upgrading the Ceph Metadata Server

When upgrading the Ceph MDS, you need to upgrade one MDS at a time.
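The notes do not spell out the MDS commands, but following the same pattern as the monitor and OSD upgrades above, a per-MDS sequence might look like the hypothetical sketch below. With DRY_RUN=1 (the default here) it only prints the commands; the paths, unit names, and hostname mirror the earlier sections and should be verified against your release before use.

```shell
# Hypothetical per-MDS upgrade sketch, mirroring the mon/OSD recipes above.
upgrade_mds_node() {
    local host="$1"
    run() {
        # DRY_RUN=1 (default) prints commands instead of executing them
        if [ "${DRY_RUN:-1}" = "1" ]; then echo "would run: $*"; else "$@"; fi
    }
    run /etc/init.d/ceph stop mds            # stop the Hammer-era sysvinit MDS
    run yum update -y                        # install the Jewel packages
    run chown -R ceph:ceph /var/lib/ceph/mds /var/log/ceph /var/run/ceph
    run systemctl enable ceph-mds.target     # Jewel manages daemons via systemd
    run systemctl enable "ceph-mds@${host}"
    run shutdown -r now                      # reboot to complete the upgrade
}

upgrade_mds_node ceph-mds1
```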

See also