The Road to K8s in Production with KubeSphere: Persistent Storage with GlusterFS
Hello everyone, I'm Lao Z.
This article continues from the previous one, building out our Kubernetes production environment.
Preliminaries
Ways for Kubernetes to Use GlusterFS Storage
- Manage GlusterFS through Heketi, with Kubernetes calling Heketi's API
- Combine GlusterFS with NFS-Ganesha to provide NFS storage, and have Kubernetes mount it over NFS
- Mount GlusterFS volumes into local directories on the nodes, and consume them from Kubernetes as hostPath volumes
- When initializing a cluster with KubeSphere, GlusterFS can also be used directly as persistent storage; see Install GlusterFS[1] for details
Usage Notes
- This article uses the Heketi integration, which is widely used and has plenty of reference material online, but it also has drawbacks; choose according to your own situation.
- Under the hood it creates a pile of LVM logical volumes (shown later in this article); if there end up being many small volumes, day-2 operations become troublesome and carry some unknown risk.
- Heketi is no longer actively developed and the project is in maintenance mode, which is a real concern; newcomers should think twice.
- I have used the NFS-Ganesha approach before with Ceph, but at the time it was not very stable and the process died frequently; I have not tested it with GlusterFS.
- Mounting as a local volume should, in my view, be the best option for stability and performance, but since it uses local disks its use cases are limited; use it as your requirements allow.
Introduction to the GlusterFS Distributed File System
Overview
GlusterFS is a scalable network file system. Compared with other distributed file systems, it offers high scalability, high availability, high performance, and horizontal scaling, and because it has no metadata server, the service has no single point of failure.
Advantages
1. No metadata-node performance bottleneck
- The architecture is decentralized and fully symmetric, with no dedicated metadata servers, so there is no metadata-server bottleneck.
- Metadata lives in each file's attributes and extended attributes. When a file is accessed, the client uses the DHT algorithm to compute, from the file path and file name, which brick holds the file, and then fetches the data directly from that brick, skipping any round trip to a metadata server (see the small example right after this item).
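As a small illustration of this client-side lookup (a sketch, assuming the attr package is installed and a GlusterFS volume is mounted at the placeholder path /mnt/glusterfs), you can ask the client which brick(s) a file actually lives on via the pathinfo extended attribute:
# Show which brick(s) hold a given file on a GlusterFS client mount (paths are examples)
$ getfattr -n trusted.glusterfs.pathinfo -e text /mnt/glusterfs/somefile.txt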
2. Good scalability
- Replacing a traditional metadata service with an elastic hashing algorithm yields near-linear scalability.
3. High availability
- Redundancy schemes such as replication and erasure coding (EC) ensure that data can still be read from the remaining nodes as long as failures stay within the redundancy limit. The design is weakly consistent: when writing to a replicated file, the client computes which bricks hold the file and sends the data to them over the network; as soon as one brick returns success, the write is considered successful, without waiting for the others. This prevents a write from stalling just because one brick failed to acknowledge it, for example when a node has a network fault or a damaged disk.
- Along with the storage pool, the servers also start a glustershd process that periodically checks data consistency among the bricks of replicated and EC volumes and repairs any divergence.
4. Rich volume types
The volume types include distributed (plain), striped, replicated, striped-replicated, and EC (dispersed), so different levels of redundancy can be chosen to match user needs (a command sketch follows this list).
- A distributed (plain) volume has no redundancy; files are not sliced and are stored whole on a single brick.
- A striped volume has no redundancy; files are sliced (128 KB by default) across different bricks. The slices can be read and written concurrently (at stripe-block granularity), which can noticeably improve throughput. This mode is generally only suitable for very large files and workloads that demand high multi-node performance.
- A replicated volume offers high redundancy with a configurable replica count and protects data safety.
- A striped-replicated volume combines striping and replication.
- An EC (dispersed) volume uses erasure coding to provide redundancy lower than a replicated volume's, i.e. below 100%, for data with more modest safety requirements; for example 2+1 (50% redundancy) or 5+3 (60% redundancy).
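For reference, creating these volume types directly with the gluster CLI looks roughly like the sketch below; the volume names and brick paths are made-up examples (Heketi will create volumes for us automatically later), and striped volumes are omitted because they are considered legacy in recent GlusterFS releases.
# Distributed (plain) volume: no redundancy, whole files placed on individual bricks
$ gluster volume create dist-vol node1:/data/brick-dist node2:/data/brick-dist
# Replicated volume: every file stored on all three bricks
$ gluster volume create rep-vol replica 3 node1:/data/brick-rep node2:/data/brick-rep node3:/data/brick-rep
# Dispersed (EC) volume: 2 data bricks + 1 redundancy brick
$ gluster volume create ec-vol disperse 3 redundancy 1 node1:/data/brick-ec node2:/data/brick-ec node3:/data/brick-ec
$ gluster volume start rep-vol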
5. High performance
- With the weakly consistent design, a write to a replica is considered successful as soon as one brick acknowledges it, without waiting for the others, which is faster than strong consistency.
- It also offers techniques such as I/O concurrency, write-behind, read-ahead, io-cache, and striping to improve read/write performance. Each of them can be enabled or disabled as needed, and parameters such as the I/O concurrency level and cache size are tunable (example below).
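These performance translators can be toggled per volume with gluster volume set; the sketch below uses the example volume name from above and example values.
# Toggle performance translators and tune the cache on an existing volume (values are examples)
$ gluster volume set rep-vol performance.write-behind on
$ gluster volume set rep-vol performance.read-ahead on
$ gluster volume set rep-vol performance.io-cache on
$ gluster volume set rep-vol performance.cache-size 256MB
# Review all current options of the volume
$ gluster volume get rep-vol all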
Introduction to Heketi
Overview
Heketi provides a RESTful management interface for managing the lifecycle of GlusterFS volumes. It enables dynamic storage provisioning on cloud platforms such as OpenStack, Kubernetes, and OpenShift (dynamically selecting bricks within a GlusterFS cluster to build a volume) and supports managing multiple GlusterFS clusters. Heketi's goal is to provide a simple way to create, list, and delete GlusterFS volumes across multiple storage clusters, intelligently managing the allocation, creation, and deletion of disks throughout the cluster. Before it can satisfy any request, Heketi must first learn the topology of the cluster, which means configuring the topology.json file. This JSON file organizes the resources as clusters, the nodes in each cluster, the devices belonging to each node, and the bricks belonging to each device.
The heketi-cli command-line tool feeds Heketi the information about the GlusterFS cluster to be managed. It locates its server through the HEKETI_CLI_SERVER environment variable.
Overall Architecture
Installation and Configuration
Basic System Configuration
- Update the system (including the kernel) and check the OS version
$ yum update
$ cat /etc/redhat-release
CentOS Linux release 7.9.2009 (Core)
- Set the hostnames
- Node 1
$ hostnamectl set-hostname glusterfs-node-0
- Node 2
$ hostnamectl set-hostname glusterfs-node-1
- Node 3
$ hostnamectl set-hostname glusterfs-node-2
- Disable the firewall and SELinux (run on all nodes)
# Disable firewalld at boot
$ systemctl disable firewalld
# Stop firewalld now
$ systemctl stop firewalld
# Permanently disable SELinux
$ sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
# Disable SELinux for the current session
$ setenforce 0
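An optional quick check that both changes took effect on each node:
# SELinux should report Permissive (or Disabled after a reboot), and firewalld should be inactive
$ getenforce
$ systemctl is-active firewalld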
Install and Configure GlusterFS
- Configure the GlusterFS yum repository (run on all nodes)
$ vi /etc/yum.repos.d/glusterfs.repo
[glusterfs]
name=glusterfs
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/7.9.2009/storage/x86_64/gluster-9/
#gpgkey=https://mirrors.tuna.tsinghua.edu.cn/centos/RPM-GPG-KEY-CentOS-7
gpgcheck=0
enabled=1
# Clear the yum cache
$ yum clean all
# Rebuild the yum cache
$ yum makecache
- Install GlusterFS (run on all three nodes)
$ yum install centos-release-gluster glusterfs-server glusterfs-client -y
- Start the GlusterFS service
$ systemctl start glusterd.service && systemctl enable glusterd.service
# Check the glusterd service status
$ systemctl status glusterd.service
# Check the glusterd listening port
$ ss -lntp | grep glusterd
LISTEN 0 128 *:24007 *:* users:(("glusterd",pid=108776,fd=10))
Configure the GlusterFS Cluster Peers
$ gluster peer probe glusterfs-node-1
peer probe: success
$ gluster peer probe glusterfs-node-2
peer probe: success
# Check the cluster peer status
$ gluster peer status
Number of Peers: 2
Hostname: glusterfs-node-1
Uuid: 50a3da37-9f0f-49c7-aae4-e4a475beecc9
State: Peer in Cluster (Connected)
Hostname: glusterfs-node-2
Uuid: da245e40-52a7-4651-b44b-8222c237f192
State: Peer in Cluster (Connected)
Install and Configure Heketi (any node will do; node 1 is used here)
- Install the packages
$ yum install heketi heketi-client -y
- Required configuration for the Heketi service
# Generate an SSH key pair and set up passwordless login to all nodes
$ ssh-keygen -t rsa -q -f /etc/heketi/private_key -N ""
$ ssh-copy-id -i /etc/heketi/private_key.pub glusterfs-node-0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/private_key.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'glusterfs-node-0'"
and check to make sure that only the key(s) you wanted were added.
$ ssh-copy-id -i /etc/heketi/private_key.pub glusterfs-node-1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/private_key.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'glusterfs-node-1'"
and check to make sure that only the key(s) you wanted were added.
$ ssh-copy-id -i /etc/heketi/private_key.pub glusterfs-node-2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/etc/heketi/private_key.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'glusterfs-node-2'"
and check to make sure that only the key(s) you wanted were added.
# Test the passwordless login
$ ssh glusterfs-node-0
Last login: Wed Mar 2 09:12:30 2022
$ exit
$ ssh glusterfs-node-1
Last login: Wed Mar 2 09:12:30 2022
[root@glusterfs-node-1 ~]$ exit
$ ssh glusterfs-node-2
Last login: Wed Mar 2 09:12:30 2022
[root@glusterfs-node-2 ~]$ exit
# Fix ownership so the heketi user can access its files
$ chown heketi:heketi /etc/heketi/ -R
$ chown heketi:heketi /var/lib/heketi -R
$ chown heketi:heketi /etc/heketi/private_key
Write the Heketi Configuration File
# Edit the Heketi configuration file (the ### annotations below are explanations only; remove them from the real file, since JSON does not allow comments)
$ vi /etc/heketi/heketi.json
{
  "_port_comment": "Heketi Server Port Number",
  "port": "48080",    ### the Heketi service port, can be customized
  "_use_auth": "Enable JWT authorization. Please enable for deployment",
  "use_auth": true,    ### enable user authentication
  "_jwt": "Private keys for access",
  "jwt": {
    "_admin": "Admin has access to all APIs",
    "admin": {
      "key": "admin@P@ssW0rd"    ### password of the admin user, who has access to all Heketi APIs
    },
    "_user": "User only has access to /volumes endpoint",
    "user": {
      "key": "user@P@ssW0rd"    ### password of the user account, which only has access to the volume APIs
    }
  },
  "_glusterfs_comment": "GlusterFS Configuration",
  "glusterfs": {
    "_executor_comment": [
      "Execute plugin. Possible choices: mock, ssh",
      "mock: This setting is used for testing and development.",
      "      It will not send commands to any node.",
      "ssh:  This setting will notify Heketi to ssh to the nodes.",
      "      It will need the values in sshexec to be configured.",
      "kubernetes: Communicate with GlusterFS containers over",
      "      Kubernetes exec api."
    ],
    "executor": "ssh",    ### use the ssh executor
    "_sshexec_comment": "SSH username and private key file information",
    "sshexec": {
      "keyfile": "/etc/heketi/private_key",    ### the private key Heketi uses to ssh into the nodes
      "user": "root",    ### ssh as the root user on the nodes
      "port": "22",    ### the ssh port on the nodes
      "fstab": "/etc/fstab"
    },
    "_kubeexec_comment": "Kubernetes configuration",
    "kubeexec": {
      "host": "https://kubernetes.host:8443",
      "cert": "/path/to/crt.file",
      "insecure": false,
      "user": "kubernetes username",
      "password": "password for kubernetes user",
      "namespace": "OpenShift project or Kubernetes namespace",
      "fstab": "Optional: Specify fstab file on node. Default is /etc/fstab"
    },
    "_db_comment": "Database file name",
    "db": "/var/lib/heketi/heketi.db",    ### Heketi's database file
    "_loglevel_comment": [
      "Set log level. Choices are:",
      "  none, critical, error, warning, info, debug",
      "Default is warning"
    ],
    "loglevel": "debug"
  }
}
Start the Service
# Start the Heketi service
$ systemctl enable heketi.service && systemctl start heketi.service
# Check the service status
$ systemctl status heketi
● heketi.service - Heketi Server
Loaded: loaded (/usr/lib/systemd/system/heketi.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2022-03-02 12:02:59 IST; 14min ago
Main PID: 120229 (heketi)
CGroup: /system.slice/heketi.service
└─120229 /usr/bin/heketi --config=/etc/heketi/heketi.json
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: Main PID: 234479 (glusterd)
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: Tasks: 9
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: Memory: 6.0M
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: CGroup: /system.slice/glusterd.service
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: └─234479 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: Mar 02 13:52:19 glusterfs-node-2 systemd[1]: Starting GlusterFS, a clustered file-system server...
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: Mar 02 13:52:19 glusterfs-node-2 systemd[1]: Started GlusterFS, a clustered file-system server.
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: ]: Stderr []
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: [heketi] INFO 2022/03/02 12:17:01 Periodic health check status: node cf3ad692cc008e697eb68f2863843391 up=true
Mar 02 12:17:01 glusterfs-node-0 heketi[120229]: [heketi] INFO 2022/03/02 12:17:01 Cleaned 0 nodes from health cache
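Before going further it is worth confirming that the Heketi REST API is reachable on the custom port; Heketi exposes a simple /hello endpoint for this (the address assumes node 1, as elsewhere in this article).
# Quick reachability check against the Heketi API; expect a short "Hello from Heketi" style response
$ curl http://192.168.10.1:48080/hello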
Configure the Topology
$ vi /etc/heketi/topology.json    # describe the three nodes; /dev/sdb is the data disk used as backend storage
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.10.1"
              ],
              "storage": [
                "192.168.10.1"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.10.2"
              ],
              "storage": [
                "192.168.10.2"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        },
        {
          "node": {
            "hostnames": {
              "manage": [
                "192.168.10.3"
              ],
              "storage": [
                "192.168.10.3"
              ]
            },
            "zone": 1
          },
          "devices": [
            "/dev/sdb"
          ]
        }
      ]
    }
  ]
}
Configure Environment Variables for Easier Management
$ echo "export HEKETI_CLI_SERVER=http://192.168.10.1:48080" >> /etc/profile.d/heketi.sh
# 192.168.10.1 is the Heketi address
# --user is the admin user and --secret its password, both as defined in the Heketi configuration file
$ echo "alias heketi-cli='heketi-cli --server '$HEKETI_CLI_SERVER' --user admin --secret admin@P@ssW0rd'" >> ~/.bashrc
# Load the environment variables
$ source /etc/profile.d/heketi.sh
$ source ~/.bashrc
Create the Cluster from topology.json
# Create the cluster
$ heketi-cli topology load --json=/etc/heketi/topology.json
Found node 192.168.10.1 on cluster b89a07c2c6e8c533322591bf2a4aa613
Adding device /dev/sdb ... OK
Found node 192.168.10.2 on cluster b89a07c2c6e8c533322591bf2a4aa613
Adding device /dev/sdb ... OK
Found node 192.168.10.3 on cluster b89a07c2c6e8c533322591bf2a4aa613
Adding device /dev/sdb ... OK
# List the clusters
$ heketi-cli cluster list
Clusters:
Id:b89a07c2c6e8c533322591bf2a4aa613 [file][block]
# Show cluster details; the id is the one returned by the cluster list command above
$ heketi-cli cluster info b89a07c2c6e8c533322591bf2a4aa613
Cluster id: b89a07c2c6e8c533322591bf2a4aa613
Nodes:
1218c773299856ebaef6f2461051640c
8bffed2d55f652a412c098ee31246559
cf3ad692cc008e697eb68f2863843391
Volumes:
Block: true
File: true
# List the cluster nodes
$ heketi-cli node list
Id:1218c773299856ebaef6f2461051640c Cluster:b89a07c2c6e8c533322591bf2a4aa613
Id:8bffed2d55f652a412c098ee31246559 Cluster:b89a07c2c6e8c533322591bf2a4aa613
Id:cf3ad692cc008e697eb68f2863843391 Cluster:b89a07c2c6e8c533322591bf2a4aa613
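heketi-cli can also dump the full topology or the details of a single node, which is handy for verifying that the devices were registered correctly; the node id below is one of the ids returned above.
# Show the complete topology known to Heketi
$ heketi-cli topology info
# Show the devices and bricks of a single node
$ heketi-cli node info 1218c773299856ebaef6f2461051640c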
Fixing Errors When Creating the Cluster
If the data disk is not brand new and has been used before, loading the topology will fail with errors like the following, which can be resolved as shown below.
$ heketi-cli topology load --json=/etc/heketi/topology.json
Creating cluster ... ID: b89a07c2c6e8c533322591bf2a4aa613
Allowing file volumes on cluster.
Allowing block volumes on cluster.
Creating node 10.4.11.38 ... ID: 8bffed2d55f652a412c098ee31246559
Adding device /dev/sdb ... Unable to add device: Setup of device /dev/sdb failed (already initialized or contains data?): Can't initialize physical volume "/dev/sdb" of volume group "vg_423c25b95cf33cb94fa192aedd325b26" without -ff
/dev/sdb: physical volume not initialized.
Creating node 10.4.11.39 ... ID: 1218c773299856ebaef6f2461051640c
Adding device /dev/sdb ... Unable to add device: Setup of device /dev/sdb failed (already initialized or contains data?): Can't initialize physical volume "/dev/sdb" of volume group "vg_564e3fb62e748f1451f387deb469541e" without -ff
/dev/sdb: physical volume not initialized.
Creating node 10.4.11.40 ... ID: cf3ad692cc008e697eb68f2863843391
Adding device /dev/sdb ... Unable to add device: Setup of device /dev/sdb failed (already initialized or contains data?): Can't initialize physical volume "/dev/sdb" of volume group "vg_5bc298db6e8cc335d545c96eed7150f0" without -ff
# Run on all GlusterFS nodes
$ mkfs.xfs -f /dev/sdb
meta-data=/dev/sdb isize=512 agcount=4, agsize=183105408 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=732421632, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=357627, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
$ pvcreate -ff --metadatasize=128M --dataalignment=256K /dev/sdb
Wiping xfs signature on /dev/sdb.
Physical volume "/dev/sdb" successfully created.
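If the disk still carries stale filesystem or LVM signatures after this, wiping all signatures with wipefs before re-running the topology load is another option worth knowing (it destroys any data on the disk, so double-check the device name first):
# Alternative cleanup: wipe all filesystem/LVM signatures from the data disk
$ wipefs -a /dev/sdb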
Verification
- Create a volume
$ heketi-cli volume create --size=2 --replica=3
Name: vol_aa8a1280b5133a36b32cf552ec9dd3f3
Size: 2
Volume Id: aa8a1280b5133a36b32cf552ec9dd3f3
Cluster Id: b89a07c2c6e8c533322591bf2a4aa613
Mount: 192.168.10.2:vol_aa8a1280b5133a36b32cf552ec9dd3f3
Mount Options: backup-volfile-servers=192.168.10.1,192.168.10.3
Block: false
Free Size: 0
Reserved Size: 0
Block Hosting Restriction: (none)
Block Volumes: []
Durability Type: replicate
Distribute Count: 1
Replica Count: 3
- View and mount the volume
# List volumes
$ heketi-cli volume list
Id:aa8a1280b5133a36b32cf552ec9dd3f3 Cluster:b89a07c2c6e8c533322591bf2a4aa613 Name:vol_aa8a1280b5133a36b32cf552ec9dd3f3
# Mount test: 192.168.10.2:vol_aa8a1280b5133a36b32cf552ec9dd3f3 below is the Mount value returned when the volume was created; it can also be looked up with heketi-cli volume info <volume id>
$ mount -t glusterfs 192.168.10.2:vol_aa8a1280b5133a36b32cf552ec9dd3f3 /mnt
# Check the mount
$ df -h
Filesystem Size Used Avail Use% Mounted on
.............................
192.168.10.2:vol_aa8a1280b5133a36b32cf552ec9dd3f3 2.0G 54M 2.0G 3% /mnt
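As a simple read/write sanity check before unmounting (the file name is arbitrary):
# Write and read back a test file on the GlusterFS mount, then unmount it
$ echo "hello glusterfs" > /mnt/test.txt
$ cat /mnt/test.txt
$ rm -f /mnt/test.txt
$ umount /mnt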
- Inspect node details
This shows how the underlying VGs and LVs were created and reveals the allocation details at the LVM layer.
## Node 1
# Check the PV created on the node
$ pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name vg_10a23554385cca32a01d7fc3373faf2d
PV Size <2.73 TiB / not usable 130.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 715223
Free PE 714705
Allocated PE 518
PV UUID mkDi5V-JHnR-xy2Y-5FcW-2JKg-K9dQ-JeXQCF
# Check the VG created on the node
$ vgdisplay
--- Volume group ---
VG Name vg_10a23554385cca32a01d7fc3373faf2d
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 22
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <2.73 TiB
PE Size 4.00 MiB
Total PE 715223
Alloc PE / Size 518 / 2.02 GiB
Free PE / Size 714705 / <2.73 TiB
VG UUID BzMbsL-FiPz-oVZO-j6o1-Du67-N094-JIbdPB
# Check the LVs created on the node
$ lvdisplay
--- Logical volume ---
LV Name tp_8b932b033db44bf8d6b202ec3cd9b6f3
VG Name vg_10a23554385cca32a01d7fc3373faf2d
LV UUID hesO25-OdId-1eYo-OLVc-Phe8-3Y3L-z0RcsH
LV Write Access read/write (activated read only)
LV Creation host, time glusterfs-node-0, 2022-03-03 07:21:30 +0530
LV Pool metadata tp_8b932b033db44bf8d6b202ec3cd9b6f3_tmeta
LV Pool data tp_8b932b033db44bf8d6b202ec3cd9b6f3_tdata
LV Status available
# open 2
LV Size 2.00 GiB
Allocated pool data 0.70%
Allocated metadata 10.32%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/vg_10a23554385cca32a01d7fc3373faf2d/brick_8b932b033db44bf8d6b202ec3cd9b6f3
LV Name brick_8b932b033db44bf8d6b202ec3cd9b6f3
VG Name vg_10a23554385cca32a01d7fc3373faf2d
LV UUID VpkvpB-cmQo-jYgf-g83G-XA8C-1yxG-gAXksl
LV Write Access read/write
LV Creation host, time glusterfs-node-0, 2022-03-03 07:21:30 +0530
LV Pool name tp_8b932b033db44bf8d6b202ec3cd9b6f3
LV Status available
# open 1
LV Size 2.00 GiB
Mapped size 0.70%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
## Node 2
$ pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name vg_c74562fedbcf8ea46923918cf3ac8958
PV Size <2.73 TiB / not usable 130.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 715223
Free PE 714705
Allocated PE 518
PV UUID cyZ6Ur-HIiQ-tREG-Ueb9-ZKBK-4Wkx-Pgc7FE
$ lvdisplay
--- Logical volume ---
LV Name tp_0524f0ec9f3ab36e11a06320c6a1afa2
VG Name vg_c74562fedbcf8ea46923918cf3ac8958
LV UUID jpG0fi-SvNf-Tufz-IEn8-VSBi-ChTf-t54IEN
LV Write Access read/write (activated read only)
LV Creation host, time glusterfs-node-1, 2022-03-03 07:20:45 +0530
LV Pool metadata tp_0524f0ec9f3ab36e11a06320c6a1afa2_tmeta
LV Pool data tp_0524f0ec9f3ab36e11a06320c6a1afa2_tdata
LV Status available
# open 2
LV Size 2.00 GiB
Allocated pool data 0.70%
Allocated metadata 10.32%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/vg_c74562fedbcf8ea46923918cf3ac8958/brick_7bb72d28b432248bcf9de23346d9ea3d
LV Name brick_7bb72d28b432248bcf9de23346d9ea3d
VG Name vg_c74562fedbcf8ea46923918cf3ac8958
LV UUID gsvc2l-JDHA-0L0e-oeNE-N0hY-p1Zd-aGAXo4
LV Write Access read/write
LV Creation host, time glusterfs-node-1, 2022-03-03 07:20:45 +0530
LV Pool name tp_0524f0ec9f3ab36e11a06320c6a1afa2
LV Status available
# open 1
LV Size 2.00 GiB
Mapped size 0.70%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
$ vgdisplay
--- Volume group ---
VG Name vg_c74562fedbcf8ea46923918cf3ac8958
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 22
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <2.73 TiB
PE Size 4.00 MiB
Total PE 715223
Alloc PE / Size 518 / 2.02 GiB
Free PE / Size 714705 / <2.73 TiB
VG UUID ImSA9l-xtcY-q1ol-ERmA-kDtc-xGs5-6MIgNW
## Node 3
$ pvdisplay
--- Physical volume ---
PV Name /dev/sdb
VG Name vg_96fce8c076ca08f24c256e12e8cbbde0
PV Size <2.73 TiB / not usable 130.00 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 715223
Free PE 714705
Allocated PE 518
PV UUID cA2GW7-1EoO-w00U-8OKT-qZtF-J7xK-dRaqwd
$ vgdisplay
--- Volume group ---
VG Name vg_96fce8c076ca08f24c256e12e8cbbde0
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 22
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 1
Max PV 0
Cur PV 1
Act PV 1
VG Size <2.73 TiB
PE Size 4.00 MiB
Total PE 715223
Alloc PE / Size 518 / 2.02 GiB
Free PE / Size 714705 / <2.73 TiB
VG UUID PbEPjn-8zn6-wQjF-1Uvq-qaTv-aQ1J-BJFtZu
$ lvdisplay
--- Logical volume ---
LV Name tp_be0ace1a68127b0537199b654c521b2b
VG Name vg_96fce8c076ca08f24c256e12e8cbbde0
LV UUID nETAXU-HVYn-JZHq-afwh-2W74-YMZ4-m26xyw
LV Write Access read/write (activated read only)
LV Creation host, time glusterfs-node-2, 2022-03-03 09:51:14 +0800
LV Pool metadata tp_be0ace1a68127b0537199b654c521b2b_tmeta
LV Pool data tp_be0ace1a68127b0537199b654c521b2b_tdata
LV Status available
# open 2
LV Size 2.00 GiB
Allocated pool data 0.70%
Allocated metadata 10.32%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:6
--- Logical volume ---
LV Path /dev/vg_96fce8c076ca08f24c256e12e8cbbde0/brick_be0ace1a68127b0537199b654c521b2b
LV Name brick_be0ace1a68127b0537199b654c521b2b
VG Name vg_96fce8c076ca08f24c256e12e8cbbde0
LV UUID yXhTeX-4r7D-5rcE-rqKw-8NZb-z158-fchMLq
LV Write Access read/write
LV Creation host, time glusterfs-node-2, 2022-03-03 09:51:15 +0800
LV Pool name tp_be0ace1a68127b0537199b654c521b2b
LV Status available
# open 1
LV Size 2.00 GiB
Mapped size 0.70%
Current LE 512
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:8
- Delete a volume
$ heketi-cli volume delete 951523a39e6d926764984279dbd7e4bc
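After deleting, the volume should disappear both from Heketi's view and from the LVM layer on the GlusterFS nodes; a quick way to double-check:
# Confirm the volume is gone from Heketi
$ heketi-cli volume list
# On each GlusterFS node, confirm the corresponding brick LVs were removed
$ lvs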
Consuming GlusterFS from the K8s Cluster (Two Methods)
Manual Configuration on the Command Line (Method 1)
- Install the GlusterFS client on all K8s nodes so that volumes can be mounted properly
$ yum install glusterfs-fuse -y
- Create a Secret holding Heketi's authentication password (or add the secret through the KubeSphere console). The value of key is the password base64-encoded: echo -n "admin@P@ssW0rd" | base64
The username and password here are the ones defined in the Heketi configuration file.
$ vi heketi_secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: heketi-secret
  namespace: kube-system
data:
  key: YWRtaW5AUEBzc1cwcmQ=
type: kubernetes.io/glusterfs
$ kubectl apply -f heketi_secret.yaml
- Create a StorageClass
$ vi heketi-storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs
  namespace: kube-system
parameters:
  resturl: "http://192.168.10.1:48080"            # the Heketi address
  clusterid: "b89a07c2c6e8c533322591bf2a4aa613"   # cluster id returned by heketi-cli cluster list on the Heketi node
  restauthenabled: "true"
  restuser: "admin"                               # the admin user defined in the Heketi configuration file
  secretName: "heketi-secret"                     # must match the Secret created above
  secretNamespace: "kube-system"                  # must match the Secret's namespace
  volumetype: "replicate:3"
provisioner: kubernetes.io/glusterfs
reclaimPolicy: Delete
$ kubectl apply -f heketi-storageclass.yaml
- Create a PVC to test
$ vi heketi-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: heketi-pvc
  annotations:
    volume.beta.kubernetes.io/storage-provisioner: kubernetes.io/glusterfs
spec:
  storageClassName: "glusterfs"
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ kubectl apply -f heketi-pvc.yaml
- Check the SC and PVC
$ kubectl get sc
$ kubectl get pvc
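If the PVC shows Bound, a PV has been provisioned dynamically; describing the PVC and listing the PVs gives more detail:
# Check that the PVC is Bound and inspect the dynamically provisioned PV
$ kubectl describe pvc heketi-pvc
$ kubectl get pv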
- Create a Pod that mounts the PVC
$ vi heketi-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: heketi-pod
spec:
  containers:
    - name: heketi-container
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: heketi-volume
          mountPath: "/pv-data"
          readOnly: false
  volumes:
    - name: heketi-volume
      persistentVolumeClaim:
        claimName: heketi-pvc
$ kubectl apply -f heketi-pod.yaml
- Check the Pod status; if it is Running, the volume was mounted successfully
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
heketi-pod 1/1 Running 0 4s
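To verify that the volume is actually writable from inside the Pod, a quick exec test can be run before cleaning up (the path matches the mountPath above):
# Write and read back a file on the mounted GlusterFS volume inside the Pod
$ kubectl exec heketi-pod -- sh -c 'echo hello > /pv-data/test.txt && cat /pv-data/test.txt'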
# Clean up the test pod
$ kubectl delete -f heketi-pod.yaml
KubeSphere Console Configuration (Method 2)
- Create the secret
  - Platform Management -> Configuration Center -> Secrets -> Create
  - Name: heketi-secret
  - Project: kube-system
  - Type: keep the default
  - Key: key
  - Value: YWRtaW5AUEBzc1cwcmQ=
- Create the storage class
  - Name: glusterfs
  - Storage type: glusterfs
- Fill in the storage class details
  - Allow volume expansion: yes
  - Supported access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany
  - Storage system: kubernetes.io/glusterfs
  - resturl: http://192.168.10.1:48080 (the Heketi address, as in Method 1)
  - clusterid: b89a07c2c6e8c533322591bf2a4aa613 (the cluster id returned by heketi-cli cluster list)
  - restauthenabled: true
  - restuser: admin (the admin user of the Heketi service)
  - secretNamespace: kube-system
  - secretName: heketi-secret (the secret created above)
  - volumetype: replicate:3
- Check that the storage class was created successfully
- Create a volume to test
  - Name: test (customizable)
  - Project: kube-system (customizable)
  - Storage class: glusterfs
  - A status of Ready means it succeeded
References
- GlusterFS[2]
- https://github.com/heketi/heketi
The next article will cover installing and deploying the middleware that the Spring Cloud architecture depends on. Stay tuned.
Reference Links
[1] Install GlusterFS: https://kubesphere.io/zh/docs/installing-on-linux/persistent-storage-configurations/install-glusterfs/
[2] GlusterFS: https://www.gluster.org