
Database Series: MongoDB Cluster Deployment

MongoDB clusters can be deployed in three modes: master-slave replication, replica sets, and sharding. This article walks through the deployment and configuration of each mode and verifies the result with tests.


1. MongoDB Basics

1.1 MongoDB Features

MongoDB is a scalable, high-performance, open-source, schema-free, document-oriented database. It is written in C++ and has the following key features:

  • Collection-oriented storage: well suited to storing objects and JSON-style data

  • Dynamic queries: Mongo supports a rich query language; queries are expressed as JSON-style documents and can easily reach into embedded objects and arrays

  • Full index support: including indexes on embedded objects and arrays. Mongo's query optimizer analyzes query expressions and generates an efficient query plan

  • Query profiling: Mongo includes a monitoring tool for analyzing the performance of database operations

  • Replication and automatic failover: Mongo supports replication between servers, including master-slave replication; the main goals of replication are redundancy and automatic failover

  • Efficient binary storage: supports binary data and large objects such as photos and images

  • Auto-sharding for cloud-scale elasticity: automatic sharding supports horizontally scaled database clusters to which machines can be added dynamically

1.2 MongoDB Cluster Architecture

MongoDB clusters come in three modes: master-slave replication, replica sets, and sharding.

1) Master-slave replication

Master-slave replication can be used for backups, disaster recovery, and read scaling. The simplest setup is one master node plus one or more slave nodes, as shown below:

When configuring master-slave replication, note the following:

  • The cluster must have exactly one clearly identified master server

  • Each slave must know its data source, i.e. which server is its master

  • `--master` designates the master; `--slave` and `--source` configure a slave
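For reference, the same roles can also be expressed with command-line flags instead of the config files used later in this article; a sketch only, reusing this article's paths and hosts (note these legacy master-slave flags were removed in MongoDB 4.0+):

```shell
# Sketch: command-line equivalents of the master/slave config files.
# On the master:
mongod --master --bind_ip 192.168.112.101 --port 27017 \
       --dbpath /usr/local/mongodb/data \
       --logpath /usr/local/mongodb/logs/mongodb.log --fork
# On a slave, pointing --source at the master:
mongod --slave --source 192.168.112.101:27017 \
       --bind_ip 192.168.112.102 --port 27017 \
       --dbpath /usr/local/mongodb/data \
       --logpath /usr/local/mongodb/logs/mongodb.log --fork
```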

2) Replica set

A replica set is similar to master-slave replication, but differs from it in two important ways:

  • The cluster has no fixed master database

  • If the current primary goes down, the cluster elects one of the secondaries to take over as primary, which provides automatic failover

The structure is shown below:

(figure: replica set failover across nodes A, B, and C)

  • In the first diagram, A is active while B and C serve as backups

  • In the second diagram, A has failed, and the cluster elects B as the new active database based on a weighted voting algorithm

  • In the third diagram, A has recovered and automatically rejoins as a backup

3) Sharding

Sharding is similar to table partitioning in relational databases. When the data volume reaches terabyte scale, disk and memory pressure on a single server becomes significant, or a single MongoDB server can no longer keep up with a heavy insert load; MongoDB's sharding feature targets exactly these scenarios. Besides solving the storage-capacity problem, sharding can also dramatically speed up queries.

MongoDB splits a collection and spreads the resulting pieces evenly across several shards, as shown below:

(figure: sharded cluster architecture)

The diagram shows four components: mongos, config server, shard, and replica set:

  • mongos: the entry point for all cluster requests. Every request is coordinated through mongos, so no routing logic is needed in the application; mongos acts as a request dispatcher, forwarding each data request to the appropriate shard server. In production there are usually multiple mongos instances, so that the failure of one does not make the whole cluster unreachable.

  • config server: stores the configuration metadata for the whole cluster (routing and shard information). mongos itself has no persistent storage for shard and routing information; it only caches it in memory, while the config servers hold the authoritative copy. mongos loads this configuration at first start (or after a restart), and when the configuration changes, all mongos instances are notified so they can refresh their state and keep routing correctly. Production deployments run multiple config servers because they hold the sharding metadata. If the config server replica set loses its primary and cannot elect a new one, the cluster metadata becomes read-only: data in the shards can still be read and written, but chunk migrations and chunk splits stop. If all config servers become unreachable, the cluster becomes inoperable.

  • shard: holds the actual data. Sharding spreads the data across machines, relieving the pressure of concentrating everything on one host and reducing disk I/O, network I/O, CPU, and memory bottlenecks. Once the sharding rules are set, operations issued through mongos are automatically forwarded to the right shard. In production, the choice of shard key determines how evenly the data is distributed.

  • replica set: a shard without a replica set is an incomplete architecture; if that shard dies, its portion of the data is lost. A highly available sharded architecture therefore builds a replica set for every shard. A common production setup is two data-bearing members plus one arbiter per shard.
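To make the division of labor concrete, here is a hypothetical sketch of how a shard key rule is declared through a mongos instance (the host placeholder, database, and collection names are invented for illustration):

```shell
# Hypothetical sketch: sharding rules are declared through any mongos.
mongo <mongos-host>:<port> --eval '
sh.enableSharding("testdb");                       // shard this database
sh.shardCollection("testdb.person", { name: 1 });  // range-shard on "name"
'
```

From then on, mongos consults the config servers to route each operation on `testdb.person` to the shard owning the relevant chunk.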

2. Master-Slave Replication

2.1 Configuring the Master and Slave Servers

1) Extract the installation archive to the target directory

[root@tango-centos01 src-install]# tar -xzvf mongodb-linux-x86_64-rhel70-3.6.3.tgz -C /usr/local/mongodb

2) Create the data file paths

Keep the data files and log files under the installation directory, creating a data directory and a logs directory:

[root@tango-centos01 local]# cd mongodb
[root@tango-centos01 mongodb]# ls
mongodb-linux-x86_64-rhel70-3.6.3
[root@tango-centos01 mongodb]# mkdir data
[root@tango-centos01 mongodb]# mkdir logs
[root@tango-centos01 mongodb]# ls
data logs mongodb-linux-x86_64-rhel70-3.6.3

3) Configure the master server

[root@tango-centos01 config]# vi master.conf
#MongoDB config - 2018.05.15

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

#config port
port = 27017
bind_ip = 192.168.112.101

#config fork
fork = true

#master server
master = true

4) Copy the installation from the master to the slave servers

[root@tango-centos01 local]# scp -r mongodb 192.168.112.102:/usr/local/
[root@tango-centos01 local]# scp -r mongodb 192.168.112.103:/usr/local/

5) Configure the slave servers

  • Node 2

[root@tango-centos02 config]# vi slave.conf
#MongoDB config - 2018.05.15

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

#config port
port = 27017
bind_ip = 192.168.112.102

#config fork
fork = true

#master server
source = 192.168.112.101:27017
master = false
  • Node 3

[root@tango-centos03 config]# vi slave.conf
#MongoDB config - 2018.05.15

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

#config port
port = 27017
bind_ip = 192.168.112.103

#config fork
fork = true

#master server
source = 192.168.112.101:27017
master = false
2.2 Starting the Cluster

1) Start the MongoDB processes on the master and slave servers

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]#./bin/mongod -f ./config/master.conf
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongod -f ./config/slave.conf
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongod -f ./config/slave.conf

2) Check the processes

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1217 1 1 14:19 ? 00:00:02 ./bin/mongod -f ./config/master.conf
root 1248 1198 0 14:22 pts/0 00:00:00 grep --color=auto mongo
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1096 1 1 14:20 ? 00:00:01 ./bin/mongod -f ./config/slave.conf
root 1121 1067 0 14:22 pts/0 00:00:00 grep --color=auto mongo
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1097 1 2 14:21 ? 00:00:01 ./bin/mongod -f ./config/slave.conf
root 1122 1064 0 14:23 pts/0 00:00:00 grep --color=auto mongo

3) Check the listening ports

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.101:27017 0.0.0.0:* LISTEN 1217/./bin/mongod
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.102:27017 0.0.0.0:* LISTEN 1096/./bin/mongod
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.103:27017 0.0.0.0:* LISTEN 1097/./bin/mongod

4) Log in to the MongoDB shell on the master

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.101:27017
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.112.101:27017/test
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.

> use test
switched to db test
>

5) Log in to the MongoDB shell on a slave

[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.102:27017
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.112.102:27017/test
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.

> use test
switched to db test
>
2.3 Verifying the Cluster

1) Insert records on the master

> use master_slave
switched to db master_slave
> function add(){
... var i=0;
... for(;i<50;i++) {
... db.person.insert({"name":"wang"+i})
... }
... }
> add()
> db.person.find()
{ "_id" : ObjectId("5ad835c8eecf420e07ad281e"), "name" : "wang0" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad281f"), "name" : "wang1" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2820"), "name" : "wang2" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2821"), "name" : "wang3" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2822"), "name" : "wang4" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2823"), "name" : "wang5" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2824"), "name" : "wang6" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2825"), "name" : "wang7" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2826"), "name" : "wang8" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2827"), "name" : "wang9" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2828"), "name" : "wang10" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2829"), "name" : "wang11" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282a"), "name" : "wang12" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282b"), "name" : "wang13" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282c"), "name" : "wang14" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282d"), "name" : "wang15" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282e"), "name" : "wang16" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282f"), "name" : "wang17" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2830"), "name" : "wang18" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2831"), "name" : "wang19" }
Type "it" for more
>

2) Querying the collection on a slave returns an error:

> db.person.find()
Error: error: {
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk"
}
>

3) Slaves reject reads by default; running rs.slaveOk() on the slave resolves this

> rs.slaveOk();
> db.person.find()
{ "_id" : ObjectId("5ad835c8eecf420e07ad281e"), "name" : "wang0" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad281f"), "name" : "wang1" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2820"), "name" : "wang2" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2821"), "name" : "wang3" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2822"), "name" : "wang4" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2823"), "name" : "wang5" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2824"), "name" : "wang6" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2825"), "name" : "wang7" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2826"), "name" : "wang8" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2827"), "name" : "wang9" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2828"), "name" : "wang10" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2829"), "name" : "wang11" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282a"), "name" : "wang12" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282b"), "name" : "wang13" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282c"), "name" : "wang14" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282d"), "name" : "wang15" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282e"), "name" : "wang16" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad282f"), "name" : "wang17" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2830"), "name" : "wang18" }
{ "_id" : ObjectId("5ad835c8eecf420e07ad2831"), "name" : "wang19" }
Type "it" for more
2.4 Shortcomings of Master-Slave Replication
  1. Can connections fail over automatically when the master dies? Currently the switch must be done manually.

  2. How do we relieve write pressure on the master?

  3. Every slave holds a full copy of the database; won't the slaves be overloaded?

  4. Even with a routing policy for reads on the slaves, can the setup scale automatically?

3. Replica Set

A replica set is a group of mongod instances that maintain the same data set. A replica set contains several data nodes and optionally one arbiter node. Among the data nodes, exactly one is elected primary; the others are secondaries.

  • The primary receives all write operations. A replica set has exactly one primary, which can acknowledge writes with { w: "majority" } write concern

  • The primary records all operations on its data sets in its operation log (oplog)

  • Secondaries replicate the primary's oplog and apply the operations to their own data sets, keeping them in sync with the primary

  • If the primary becomes unavailable, the secondaries hold an election to choose a new primary
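The oplog mentioned above is an ordinary capped collection in the `local` database and can be inspected on any member; a sketch, assuming one of the hosts used in this article:

```shell
# Sketch (host assumed): the replication oplog lives in the "local"
# database and can be queried like any collection.
mongo 192.168.112.101:27017 --eval '
var last = db.getSiblingDB("local").oplog.rs.find()
             .sort({ $natural: -1 }).limit(1);
printjson(last.next());   // most recent replicated operation
'
```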

The figure below shows one primary with two secondaries:

(figure: replica set with one primary and two secondaries)

An additional mongod instance can also be added as an arbiter. An arbiter stores no data; it only answers heartbeats and votes in primary elections.

(figure: replica set with an arbiter)

3.1 Configuring the Servers

1) Configure the primary server

#MongoDB config - 2018.05.15
#Replication Set Configuration

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

directoryperdb = true
logappend = true

replSet = rs-tango-01

#config port
port = 27017
bind_ip = 192.168.112.101

#config fork
fork = true

oplogSize = 2000
noprealloc = true

2) Configure secondary server 1

#MongoDB config - 2018.05.15
#Replication Set Configuration

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

#
directoryperdb = true
logappend = true

#
replSet = rs-tango-01

#config port
port = 27017
bind_ip = 192.168.112.102

#config fork
fork = true

#
oplogSize = 2000
noprealloc = true

3) Configure secondary server 2

#MongoDB config - 2018.05.15
#Replication Set Configuration

#config database path
dbpath = /usr/local/mongodb/data

#config log path
logpath = /usr/local/mongodb/logs/mongodb.log

#
directoryperdb = true
logappend = true

#
replSet = rs-tango-01

#config port
port = 27017
bind_ip = 192.168.112.103

#config fork
fork = true

#
oplogSize = 2000
noprealloc = true

4) Parameter reference

  • dbpath: data directory

  • logpath: log file path

  • pidfilepath: PID file, which makes stopping mongodb easier

  • directoryperdb: store each database in its own directory, named after the database

  • logappend: append to the log file instead of overwriting it

  • replSet: name of the replica set

  • port: port the mongodb process listens on; defaults to 27017

  • oplogSize: maximum size of the operation log, in MB; defaults to 5% of free disk space

  • fork: run the process in the background

  • noprealloc: do not preallocate data files

5) Node types and advanced initialization parameters

  • standard (regular node): votes in elections and can become the active node

  • passive (passive node): votes in elections but cannot become the active node

  • arbiter (arbiter node): only votes; it neither replicates data nor can become the active node

  • priority: 0 to 1000, where 0 marks a passive node and 1 to 1000 a regular node

  • arbiterOnly: true marks an arbiter node
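A sketch of how these member options combine in an rs.initiate() document, reusing this article's hosts; the priority values are purely illustrative:

```shell
# Illustrative only: a replica set config combining the node types above.
mongo 192.168.112.101:27017 --eval '
rs.initiate({
  _id: "rs-tango-01",
  members: [
    { _id: 0, host: "192.168.112.101:27017", priority: 2 },      // regular node
    { _id: 1, host: "192.168.112.102:27017", priority: 0 },      // passive node
    { _id: 2, host: "192.168.112.103:27017", arbiterOnly: true } // arbiter
  ]
})'
```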

3.2 Starting MongoDB

1) Start MongoDB on each of the three servers

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]#./bin/mongod -f ./config/master.conf
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]#./bin/mongod -f ./config/slave.conf
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]#./bin/mongod -f ./config/slave.conf

2) Check the listening ports

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.101:27017 0.0.0.0:* LISTEN 1217/./bin/mongod
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.102:27017 0.0.0.0:* LISTEN 1096/./bin/mongod
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -tlnp | grep mongod
tcp 0 0 192.168.112.103:27017 0.0.0.0:* LISTEN 1097/./bin/mongod
3.3 Configuring the Replica Set Members

1) Connect to MongoDB on any of the three nodes. Configuring the members uses the admin database:

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.101:27017
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.112.101:27017/test
MongoDB server version: 3.6.3
Welcome to the MongoDB shell.

> use admin
switched to db admin
> cfg={_id:"rs-tango-01",members:[{_id:0,host:'192.168.112.101:27017'},{_id:1,host:'192.168.112.102:27017'},{_id:2,host:'192.168.112.103:27017'}]}
{
"_id" : "rs-tango-01",
"members" : [
{
"_id" : 0,
"host" : "192.168.112.101:27017"
},
{
"_id" : 1,
"host" : "192.168.112.102:27017"
},
{
"_id" : 2,
"host" : "192.168.112.103:27017"
}
]
}

2) Initialize the replica set with this configuration

> rs.initiate(cfg)
{
"ok" : 1,
"operationTime" : Timestamp(1526623743, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1526623743, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}

3) Check the replica set status

rs-tango-01:SECONDARY> rs.status()
{
"set" : "rs-tango-01",
"date" : ISODate("2018-05-18T06:09:31.892Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.112.101:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 747,
"optime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-18T06:09:20Z"),
"infoMessage" : "could not find member to sync from",
"electionTime" : Timestamp(1526623754, 1),
"electionDate" : ISODate("2018-05-18T06:09:14Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.112.102:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 28,
"optime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-18T06:09:20Z"),
"optimeDurableDate" : ISODate("2018-05-18T06:09:20Z"),
"lastHeartbeat" : ISODate("2018-05-18T06:09:30.420Z"),
"lastHeartbeatRecv" : ISODate("2018-05-18T06:09:31.335Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.112.101:27017",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.112.103:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 28,
"optime" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1526623760, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-18T06:09:20Z"),
"optimeDurableDate" : ISODate("2018-05-18T06:09:20Z"),
"lastHeartbeat" : ISODate("2018-05-18T06:09:30.421Z"),
"lastHeartbeatRecv" : ISODate("2018-05-18T06:09:31.331Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.112.101:27017",
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1526623760, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1526623760, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs-tango-01:PRIMARY>

As shown above, 192.168.112.101 became the primary and the other two servers became secondaries. After logging in through the mongo shell, the prompt changes to rs-tango-01:PRIMARY on the primary and rs-tango-01:SECONDARY on the secondaries.
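With the replica set up, clients should connect by set name rather than to a single host, so the shell or driver discovers the current primary on its own; a sketch with the hosts used above:

```shell
# Sketch: connecting by replica set name, so the shell/driver finds the
# current primary and follows failover automatically.
mongo "mongodb://192.168.112.101:27017,192.168.112.102:27017,192.168.112.103:27017/test?replicaSet=rs-tango-01"
```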

3.4 Testing the Replica Set
Scenario 1: write data on the primary and check whether the secondaries can see it

1) Write data on the primary

rs-tango-01:PRIMARY> use rs_test
switched to db rs_test
rs-tango-01:PRIMARY> function add(){
... var i=0;
... for(;i<20;i++){
... db.person.insert({"name":"tang"+i})
... }
... }
rs-tango-01:PRIMARY> add()
rs-tango-01:PRIMARY> db.person.find()
{ "_id" : ObjectId("5afe70379a8bdfec900dfd1f"), "name" : "tang0" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd20"), "name" : "tang1" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd21"), "name" : "tang2" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd22"), "name" : "tang3" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd23"), "name" : "tang4" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd24"), "name" : "tang5" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd25"), "name" : "tang6" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd26"), "name" : "tang7" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd27"), "name" : "tang8" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd28"), "name" : "tang9" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd29"), "name" : "tang10" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2a"), "name" : "tang11" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2b"), "name" : "tang12" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2c"), "name" : "tang13" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2d"), "name" : "tang14" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2e"), "name" : "tang15" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2f"), "name" : "tang16" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd30"), "name" : "tang17" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd31"), "name" : "tang18" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd32"), "name" : "tang19" }
rs-tango-01:PRIMARY> show collections
person

2) By default the secondaries are not readable, and the query fails:

rs-tango-01:SECONDARY> show collections
2018-05-18T14:20:50.512+0800 E QUERY [thread1] Error: listCollections failed: {
"operationTime" : Timestamp(1526624436, 1),
"ok" : 0,
"errmsg" : "not master and slaveOk=false",
"code" : 13435,
"codeName" : "NotMasterNoSlaveOk",
"$clusterTime" : {
"clusterTime" : Timestamp(1526624436, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
DB.prototype._getCollectionInfosCommand@src/mongo/shell/db.js:941:1
DB.prototype.getCollectionInfos@src/mongo/shell/db.js:953:19
DB.prototype.getCollectionNames@src/mongo/shell/db.js:964:16
shellHelper.show@src/mongo/shell/utils.js:809:9
shellHelper@src/mongo/shell/utils.js:706:15
@(shellhelp2):1:1
rs-tango-01:SECONDARY>

3) Secondaries reject reads by default; running rs.slaveOk() on the secondary resolves this

rs-tango-01:SECONDARY> rs.slaveOk();
rs-tango-01:SECONDARY> show collections
person
rs-tango-01:SECONDARY> db.person.find()
{ "_id" : ObjectId("5afe70379a8bdfec900dfd1f"), "name" : "tang0" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd21"), "name" : "tang2" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd20"), "name" : "tang1" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd22"), "name" : "tang3" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd23"), "name" : "tang4" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd24"), "name" : "tang5" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd25"), "name" : "tang6" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd26"), "name" : "tang7" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd27"), "name" : "tang8" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd28"), "name" : "tang9" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd29"), "name" : "tang10" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2a"), "name" : "tang11" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2b"), "name" : "tang12" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2c"), "name" : "tang13" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2e"), "name" : "tang15" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2d"), "name" : "tang14" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd2f"), "name" : "tang16" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd30"), "name" : "tang17" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd31"), "name" : "tang18" }
{ "_id" : ObjectId("5afe70389a8bdfec900dfd32"), "name" : "tang19" }
Scenario 2: take the primary down and check the cluster state

1) Force-kill the primary's MongoDB process

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1235 1 0 13:57 ? 00:00:16 /usr/local/mongodb/mongodb-linux-x86_64-rhel70-3.6.3/bin/mongod --config /usr/local/mongodb/mongodb-linux-x86_64-rhel70-3.6.3/config/master.conf
root 1479 1209 0 14:33 pts/0 00:00:00 grep --color=auto mongo
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# kill -9 1235
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1487 1209 0 14:33 pts/0 00:00:00 grep --color=auto mongo

2) Check the service status: secondary 192.168.112.103 has become the new primary

rs-tango-01:SECONDARY> rs.status()
{
"set" : "rs-tango-01",
"date" : ISODate("2018-05-18T06:34:43.512Z"),
"myState" : 2,
"term" : NumberLong(2),
"syncingTo" : "192.168.112.103:27017",
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"appliedOpTime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"durableOpTime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.112.101:27017",
"health" : 0,
"state" : 8,
"stateStr" : "(not reachable/healthy)",
"uptime" : 0,
"optime" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDurable" : {
"ts" : Timestamp(0, 0),
"t" : NumberLong(-1)
},
"optimeDate" : ISODate("1970-01-01T00:00:00Z"),
"optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
"lastHeartbeat" : ISODate("2018-05-18T06:34:42.373Z"),
"lastHeartbeatRecv" : ISODate("2018-05-18T06:33:49.341Z"),
"pingMs" : NumberLong(0),
"lastHeartbeatMessage" : "Connection refused",
"configVersion" : -1
},
{
"_id" : 1,
"name" : "192.168.112.102:27017",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 1579,
"optime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-05-18T06:34:42Z"),
"syncingTo" : "192.168.112.103:27017",
"configVersion" : 1,
"self" : true
},
{
"_id" : 2,
"name" : "192.168.112.103:27017",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 1531,
"optime" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"optimeDurable" : {
"ts" : Timestamp(1526625282, 1),
"t" : NumberLong(2)
},
"optimeDate" : ISODate("2018-05-18T06:34:42Z"),
"optimeDurableDate" : ISODate("2018-05-18T06:34:42Z"),
"lastHeartbeat" : ISODate("2018-05-18T06:34:42.357Z"),
"lastHeartbeatRecv" : ISODate("2018-05-18T06:34:41.908Z"),
"pingMs" : NumberLong(0),
"electionTime" : Timestamp(1526625240, 1),
"electionDate" : ISODate("2018-05-18T06:34:00Z"),
"configVersion" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1526625282, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1526625282, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
rs-tango-01:SECONDARY>

3) Restart MongoDB on the old primary; the log shows it rejoining as a secondary

2018-05-18T14:39:25.030+0800 I REPL     [replexec-10] Member 192.168.112.101:27017 is now in state SECONDARY

The server states can also be checked with rs.status().

4. Sharding

4.1 Sharded Cluster Planning

The shard servers use replica sets so that every data node has backup, automatic failover, and automatic recovery:

  • Config servers: a replica set across 3 servers to protect the metadata

  • Router processes: three mongos routers for load balancing and better client connectivity

  • Three shard processes: Shard11, Shard12, and Shard13 form one replica set (two data members plus an arbiter), serving as shard1

  • Three shard processes: Shard21, Shard22, and Shard23 form another replica set (two data members plus an arbiter), serving as shard2

First, count the components: 3 mongos, 1 config server replica set spanning 3 servers, and, since the data is split into 2 shards, 2 shard replica sets, each consisting of one primary, one secondary, and one arbiter, i.e. 2 × 3 = 6 shard processes; 12 instances in total. These could run on dedicated machines, but since test resources are limited we use only 3 machines, separating the instances on a machine by port. (Note: all arbiters live on one machine, so the other two machines carry all the reads and writes while 192.168.112.103, hosting only arbiters, stays largely idle. A production deployment should balance the load more evenly, letting each machine act as primary, secondary, and arbiter for different shards.) The physical layout is shown below:

(figure: physical deployment of the sharded cluster)

The server configuration is summarized in the table below:

Hostname        IP               Service            Port
tango-centos01  192.168.112.101  Mongos             10041
                                 Config-Server-RS   10031
                                 Shard11-RS         10011
                                 Shard21-RS         10021
tango-centos02  192.168.112.102  Mongos             10042
                                 Config-Server-RS   10032
                                 Shard12            10012
                                 Shard22            10022
tango-centos03  192.168.112.103  Mongos             10043
                                 Config-Server-RS   10033
                                 Shard13-arbiter    10013
                                 Shard23-arbiter    10023
4.2 Server Configuration
4.2.1 Configure the Shards and Replica Sets

1) Configure shard11.conf

dbpath = /usr/local/mongodb/data/shard11
logpath = /usr/local/mongodb/logs/shard11.log
pidfilepath = /usr/local/mongodb/pid/shard11.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10011
bind_ip = 192.168.112.101
fork = true
shardsvr = true
journal = true

2) Configure shard12.conf

dbpath = /usr/local/mongodb/data/shard12
logpath = /usr/local/mongodb/logs/shard12.log
pidfilepath = /usr/local/mongodb/pid/shard12.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10012
bind_ip = 192.168.112.102
fork = true
shardsvr = true
journal = true

3) Configure shard13.conf

dbpath = /usr/local/mongodb/data/shard13
logpath = /usr/local/mongodb/logs/shard13.log
pidfilepath = /usr/local/mongodb/pid/shard13.pid
directoryperdb = true
logappend = true
replSet = shard-rs-01
port = 10013
bind_ip = 192.168.112.103
fork = true
shardsvr = true
journal = true

4) Configure shard21.conf

dbpath = /usr/local/mongodb/data/shard21
logpath = /usr/local/mongodb/logs/shard21.log
pidfilepath = /usr/local/mongodb/pid/shard21.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
port = 10021
bind_ip = 192.168.112.101
fork = true
shardsvr = true
journal = true

5) Configure shard22.conf

dbpath = /usr/local/mongodb/data/shard22
logpath = /usr/local/mongodb/logs/shard22.log
pidfilepath = /usr/local/mongodb/pid/shard22.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
port = 10022
bind_ip = 192.168.112.102
fork = true
shardsvr = true
journal = true

6) Configure shard23.conf

dbpath = /usr/local/mongodb/data/shard23
logpath = /usr/local/mongodb/logs/shard23.log
pidfilepath = /usr/local/mongodb/pid/shard23.pid
directoryperdb = true
logappend = true
replSet = shard-rs-02
bind_ip = 192.168.112.103
port = 10023
fork = true
shardsvr = true
journal = true

7) Configure config1.conf

dbpath = /usr/local/mongodb/data/config1
logpath = /usr/local/mongodb/logs/config1.log
pidfilepath = /usr/local/mongodb/pid/config1.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10031
bind_ip = 192.168.112.101
fork = true
configsvr = true
journal = true

8) Configure config2.conf

dbpath = /usr/local/mongodb/data/config2
logpath = /usr/local/mongodb/logs/config2.log
pidfilepath = /usr/local/mongodb/pid/config2.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10032
bind_ip = 192.168.112.102
fork = true
configsvr = true
journal = true

9) Configure config3.conf

dbpath = /usr/local/mongodb/data/config3
logpath = /usr/local/mongodb/logs/config3.log
pidfilepath = /usr/local/mongodb/pid/config3.pid
directoryperdb = true
logappend = true
replSet = config-server-rs
port = 10033
bind_ip = 192.168.112.103
fork = true
configsvr = true
journal = true

10) Configure route1.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route1.log
pidfilepath = /usr/local/mongodb/pid/route1.pid
logappend = true
port = 10041
bind_ip = 192.168.112.101
fork = true

11) Configure route2.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route2.log
pidfilepath = /usr/local/mongodb/pid/route2.pid
logappend = true
port = 10042
bind_ip = 192.168.112.102
fork = true

12) Configure route3.conf

configdb = config-server-rs/192.168.112.101:10031,192.168.112.102:10032,192.168.112.103:10033
logpath = /usr/local/mongodb/logs/route3.log
pidfilepath = /usr/local/mongodb/pid/route3.pid
logappend = true
port = 10043
bind_ip = 192.168.112.103
fork = true
4.2.2 Create the Directories
  • Node 1

[root@tango-centos01 data]# mkdir shard11
[root@tango-centos01 data]# mkdir shard21
[root@tango-centos01 data]# mkdir config1
[root@tango-centos01 mongodb]# mkdir pid
  • Node 2

[root@tango-centos02 data]# mkdir shard12
[root@tango-centos02 data]# mkdir shard22
[root@tango-centos02 data]# mkdir config2
[root@tango-centos02 mongodb]# mkdir pid
  • Node 3

[root@tango-centos03 data]# mkdir shard13
[root@tango-centos03 data]# mkdir shard23
[root@tango-centos03 data]# mkdir config3
[root@tango-centos03 mongodb]# mkdir pid

4.2.3 Parameter Notes

  • dbpath: data directory

  • logpath: log file path

  • pidfilepath: PID file, which makes stopping mongodb easier

  • directoryperdb: store each database in its own directory, named after the database

  • logappend: append to the log file instead of overwriting it

  • replSet: name of the replica set

  • port: port the mongodb process listens on; defaults to 27017

  • oplogSize: maximum size of the operation log, in MB; defaults to 5% of free disk space

  • fork: run the process in the background

  • noprealloc: do not preallocate data files

  • journal: enable journaling. To start faster and save space, a test environment can instead use nojournal to turn journaling off, since it does not need to initialize such large redo logs

  • smallfiles: add this parameter when disk space runs short

  • shardsvr: run as a shard node

  • configsvr: run as a config server node

  • configdb: points the mongos routers at the config server nodes

4.3 Starting the Cluster

1) Start the MongoDB processes with the following commands

[root@tango-centos01]# ./bin/mongod -f ./config/shard11.conf
[root@tango-centos02]# ./bin/mongod -f ./config/shard12.conf
[root@tango-centos03]# ./bin/mongod -f ./config/shard13.conf
[root@tango-centos01]# ./bin/mongod -f ./config/shard21.conf
[root@tango-centos02]# ./bin/mongod -f ./config/shard22.conf
[root@tango-centos03]# ./bin/mongod -f ./config/shard23.conf
[root@tango-centos01]# ./bin/mongod -f ./config/config1.conf
[root@tango-centos02]# ./bin/mongod -f ./config/config2.conf
[root@tango-centos03]# ./bin/mongod -f ./config/config3.conf
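Note that the route processes use the mongos binary, not mongod; a sketch of starting them with the route*.conf files defined above (the config server replica set should be reachable and initiated first, as done in section 4.4.1):

```shell
# Sketch: the routers are started with mongos (not mongod).
[root@tango-centos01]# ./bin/mongos -f ./config/route1.conf
[root@tango-centos02]# ./bin/mongos -f ./config/route2.conf
[root@tango-centos03]# ./bin/mongos -f ./config/route3.conf
```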

2) Check the services and listening ports

  • Node 1

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1225 1 1 11:14 ? 00:00:02 ./bin/mongod -f ./config/shard11.conf
root 1257 1 2 11:14 ? 00:00:01 ./bin/mongod -f ./config/shard21.conf
root 1288 1 5 11:15 ? 00:00:01 ./bin/mongod -f ./config/config1.conf
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp 0 0 192.168.112.101:10031 0.0.0.0:* LISTEN 1288/./bin/mongod
tcp 0 0 192.168.112.101:10011 0.0.0.0:* LISTEN 1225/./bin/mongod
tcp 0 0 192.168.112.101:10021 0.0.0.0:* LISTEN 1257/./bin/mongod
  • Node 2

[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1101 1 1 11:14 ? 00:00:02 ./bin/mongod -f ./config/shard12.conf
root 1132 1 1 11:15 ? 00:00:02 ./bin/mongod -f ./config/shard22.conf
root 1163 1 1 11:15 ? 00:00:02 ./bin/mongod -f ./config/config2.conf
[root@tango-centos02 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp 0 0 192.168.112.102:10032 0.0.0.0:* LISTEN 1157/./bin/mongod
tcp 0 0 192.168.112.102:10012 0.0.0.0:* LISTEN 1090/./bin/mongod
tcp 0 0 192.168.112.102:10022 0.0.0.0:* LISTEN 1125/./bin/mongod
  • Node 3

[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1093 1 1 11:14 ? 00:00:02 ./bin/mongod -f ./config/shard13.conf
root 1124 1 1 11:15 ? 00:00:02 ./bin/mongod -f ./config/shard23.conf
root 1155 1 1 11:15 ? 00:00:02 ./bin/mongod -f ./config/config3.conf
[root@tango-centos03 mongodb-linux-x86_64-rhel70-3.6.3]# netstat -lntp|grep mongo
tcp 0 0 192.168.112.103:10033 0.0.0.0:* LISTEN 1156/./bin/mongod
tcp 0 0 192.168.112.103:10013 0.0.0.0:* LISTEN 1089/./bin/mongod
tcp 0 0 192.168.112.103:10023 0.0.0.0:* LISTEN 1121/./bin/mongod
4.4 Configuring the Sharded Replica Set Environment

Since MongoDB 3.4, the config servers of a sharded cluster must be deployed as a replica set.

4.4.1 Configure the Config Server Replica Set

1) Log in to the mongo shell on a config server

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10031

2) Initialize the replica set

>cfgsvr={_id:"config-server-rs",configsvr:true,members:[{_id:0,host:"192.168.112.101:10031"},{_id:1,host:"192.168.112.102:10032"},{_id:2,host:"192.168.112.103:10033"}]}
{
"_id" : "config-server-rs",
"configsvr" : true,
"members" : [
{
"_id" : 0,
"host" : "192.168.112.101:10031"
},
{
"_id" : 1,
"host" : "192.168.112.102:10032"
},
{
"_id" : 2,
"host" : "192.168.112.103:10033"
}
]
}
> rs.initiate(cfgsvr)
{
"ok" : 1,
"operationTime" : Timestamp(1527055459, 1),
"$gleStats" : {
"lastOpTime" : Timestamp(1527055459, 1),
"electionId" : ObjectId("000000000000000000000000")
},
"$clusterTime" : {
"clusterTime" : Timestamp(1527055459, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
config-server-rs:SECONDARY>
config-server-rs:PRIMARY>
4.4.2 Configure the Shard Replica Sets

1) Log in to the mongo shell of shard replica set 1

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10011

2) Initialize replica set 1

>cfg1={_id:"shard-rs-01",members:[{_id:0,host:"192.168.112.101:10011"},{_id:1,host:"192.168.112.102:10012"},{_id:2,host:"192.168.112.103:10013",arbiterOnly:true}]}
> rs.initiate(cfg1)
shard-rs-01:PRIMARY> rs.status()
{
"set" : "shard-rs-01",
"date" : ISODate("2018-05-23T06:29:40.794Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.112.101:10011",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2298,
"optime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T06:29:32Z"),
"electionTime" : Timestamp(1527056330, 1),
"electionDate" : ISODate("2018-05-23T06:18:50Z"),
"configVersion" : 3,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.112.102:10012",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 660,
"optime" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1527056972, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T06:29:32Z"),
"optimeDurableDate" : ISODate("2018-05-23T06:29:32Z"),
"lastHeartbeat" : ISODate("2018-05-23T06:29:39.625Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T06:29:39.632Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.112.101:10011",
"configVersion" : 3
},
{
"_id" : 2,
"name" : "192.168.112.103:10013",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 5,
"lastHeartbeat" : ISODate("2018-05-23T06:29:39.624Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T06:29:40.502Z"),
"pingMs" : NumberLong(0),
"configVersion" : 3
}
],
"ok" : 1
}

3) Connect to shard replica set 2 with the mongo shell

[root@tango-centos01 ]# ./bin/mongo 192.168.112.101:10021

4) Initialize replica set 2

>cfg2={_id:"shard-rs-02",members:[{_id:0,host:"192.168.112.101:10021"},{_id:1,host:"192.168.112.102:10022"},{_id:2,host:"192.168.112.103:10023",arbiterOnly:true}]}
{
"_id" : "shard-rs-02",
"members" : [
{
"_id" : 0,
"host" : "192.168.112.101:10021"
},
{
"_id" : 1,
"host" : "192.168.112.102:10022"
},
{
"_id" : 2,
"host" : "192.168.112.103:10023",
"arbiterOnly" : true
}
]
}
> rs.initiate(cfg2)
{ "ok" : 1 }
shard-rs-02:PRIMARY> rs.status()
{
"set" : "shard-rs-02",
"date" : ISODate("2018-05-23T06:39:46.683Z"),
"myState" : 1,
"term" : NumberLong(1),
"heartbeatIntervalMillis" : NumberLong(2000),
"optimes" : {
"lastCommittedOpTime" : {
"ts" : Timestamp(1527057586, 1),
"t" : NumberLong(1)
},
"readConcernMajorityOpTime" : {
"ts" : Timestamp(1527057586, 1),
"t" : NumberLong(1)
},
"appliedOpTime" : {
"ts" : Timestamp(1527057586, 1),
"t" : NumberLong(1)
},
"durableOpTime" : {
"ts" : Timestamp(1527057586, 1),
"t" : NumberLong(1)
}
},
"members" : [
{
"_id" : 0,
"name" : "192.168.112.101:10021",
"health" : 1,
"state" : 1,
"stateStr" : "PRIMARY",
"uptime" : 2862,
"optime" : {
"ts" : Timestamp(1527057586, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T06:39:46Z"),
"electionTime" : Timestamp(1527057425, 1),
"electionDate" : ISODate("2018-05-23T06:37:05Z"),
"configVersion" : 1,
"self" : true
},
{
"_id" : 1,
"name" : "192.168.112.102:10022",
"health" : 1,
"state" : 2,
"stateStr" : "SECONDARY",
"uptime" : 172,
"optime" : {
"ts" : Timestamp(1527057576, 1),
"t" : NumberLong(1)
},
"optimeDurable" : {
"ts" : Timestamp(1527057576, 1),
"t" : NumberLong(1)
},
"optimeDate" : ISODate("2018-05-23T06:39:36Z"),
"optimeDurableDate" : ISODate("2018-05-23T06:39:36Z"),
"lastHeartbeat" : ISODate("2018-05-23T06:39:45.728Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T06:39:46.125Z"),
"pingMs" : NumberLong(0),
"syncingTo" : "192.168.112.101:10021",
"configVersion" : 1
},
{
"_id" : 2,
"name" : "192.168.112.103:10023",
"health" : 1,
"state" : 7,
"stateStr" : "ARBITER",
"uptime" : 172,
"lastHeartbeat" : ISODate("2018-05-23T06:39:45.727Z"),
"lastHeartbeatRecv" : ISODate("2018-05-23T06:39:41.742Z"),
"pingMs" : NumberLong(0),
"configVersion" : 1
}
],
"ok" : 1
}
4.4.3 Configure Sharding

The config servers, router servers, and shard servers are now all running, but an application that connects to a mongos router cannot use sharding yet: the sharding configuration still has to be set up before sharding takes effect.

1) Start the mongos routers

[root@tango-centos01]# ./bin/mongos -f ./config/route1.conf
[root@tango-centos02]# ./bin/mongos -f ./config/route2.conf
[root@tango-centos03]# ./bin/mongos -f ./config/route3.conf

2) Connect to a mongos

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ./bin/mongo 192.168.112.101:10041
MongoDB shell version v3.6.3
connecting to: mongodb://192.168.112.101:10041/test
MongoDB server version: 3.6.3
mongos>

3) Switch to the admin database

mongos> use admin
switched to db admin

4) Link the router to shard replica set 1

mongos> sh.addShard("shard-rs-01/192.168.112.101:10011,192.168.112.102:10012")
{
"shardAdded" : "shard-rs-01",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1527059259, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1527059259, 5)
}

5) Link the router to shard replica set 2

mongos> sh.addShard("shard-rs-02/192.168.112.101:10021,192.168.112.102:10022")
{
"shardAdded" : "shard-rs-02",
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1527059432, 5),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1527059432, 5)
}
mongos>

6) View the shard server configuration

mongos> db.runCommand({listshards:1})
{
"shards" : [
{
"_id" : "shard-rs-01",
"host" : "shard-rs-01/192.168.112.101:10011,192.168.112.102:10012",
"state" : 1
},
{
"_id" : "shard-rs-02",
"host" : "shard-rs-02/192.168.112.101:10021,192.168.112.102:10022",
"state" : 1
}
],
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1527059589, 4),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1527059589, 4)
4.4.4 配置分片的表和片键

MongoDB supports two sharding strategies: hashed sharding and ranged sharding.

  • Hashed sharding computes a hash of the shard key value and distributes data by hash. Even when shard key values are close together, their hashes are unlikely to fall into the same chunk.

  • Ranged sharding stores data by contiguous shard key ranges, assigning each chunk a range of key values. Documents with nearby key values are therefore likely to reside in the same chunk or shard.

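The contrast between the two strategies can be sketched with a toy chunk-assignment simulation. The multiplicative hash and four-chunk layout below are illustrative assumptions only; MongoDB's hashed shard keys actually use an md5-based hash, and real chunk ranges are managed by the balancer:

```javascript
const NUM_CHUNKS = 4;

// Toy 32-bit multiplicative hash (illustrative, not MongoDB's real hash).
function hashedChunk(id) {
  const h = (id * 2654435761) % 4294967296;         // Knuth's multiplicative constant
  return Math.floor(h / (4294967296 / NUM_CHUNKS)); // top two bits pick the chunk
}

// Ranged sharding: contiguous key ranges map to the same chunk.
function rangedChunk(id) {
  return Math.floor(id / 25000);                    // ids 0..99999 split into 4 ranges
}

const nearby = [1000, 1001, 1002, 1003];
console.log(nearby.map(hashedChunk)); // scattered across chunks: [ 0, 2, 1, 3 ]
console.log(nearby.map(rangedChunk)); // clustered in one chunk:  [ 0, 0, 0, 0 ]
```

Hashed keys spread writes for monotonically increasing keys (like the id field used later in this article) more evenly, at the cost of turning range queries into scatter-gather operations across shards.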
The config servers, routers, shards, and replica sets are now all wired together, but the goal is for inserted data to be partitioned automatically. To make that happen, connect to a mongos and enable sharding for the target database and collection.

1) Enable sharding on testDB

mongos> sh.enableSharding("testDB")
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1527060052, 7),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1527060052, 7)
}

2) Specify the collection to shard and its shard key

mongos> sh.shardCollection("testDB.test_tb1",{id:1})
{
"collectionsharded" : "testDB.test_tb1",
"collectionUUID" : UUID("57a23194-2172-4068-ad6f-689f3344590d"),
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1527060322, 15),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1527060322, 15)
}

Here we configure testDB's test_tb1 collection to be sharded, distributing data across shard1 and shard2 by id. This has to be done per collection because not every database and collection in MongoDB needs to be sharded. Sharding a collection uses the sh.shardCollection() function, which takes the collection's full namespace and a document containing the shard key; the database must already have sharding enabled. The choice of shard key strongly affects sharding efficiency. If the collection already contains data, an index on the shard key must be created with db.collection.createIndex() before calling shardCollection; if the collection is empty, shardCollection creates the index automatically.
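As a minimal sketch, sharding an already-populated collection on id would look like the following mongo shell session (shown against this article's testDB, run on a mongos):

```javascript
// In a mongo shell connected to a mongos (not to a shard directly):
use testDB
// Only required when the collection already holds documents;
// on an empty collection, shardCollection creates this index itself.
db.test_tb1.createIndex({ id: 1 })
// Enable sharding on the database, then shard the collection on id.
sh.enableSharding("testDB")
sh.shardCollection("testDB.test_tb1", { id: 1 })
```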

4.5 Sharded Cluster Testing and Verification
4.5.1 Test the Sharding Configuration

Insert records into the test_tb1 collection and check how the data is distributed across the shards.

1) The default chunk size is 64MB; reduce it to 4MB for this test (the chunksize document lives in the config database, so switch to it with use config first)

mongos> db.settings.save({_id:"chunksize",value:4})
WriteResult({ "nMatched" : 0, "nUpserted" : 1, "nModified" : 0, "_id" : "chunksize" })

2) Write data into test_tb1

mongos> use testDB
switched to db testDB
mongos> for (var i=1;i<100000;i++) db.test_tb1.save({id:i,"test2":"testval2"})
WriteResult({ "nInserted" : 1 })

3) Check the sharding of test_tb1; the data is now distributed across both shards

mongos> db.test_tb1.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "testDB.test_tb1",
"count" : 200001,
"size" : 10800054,
"storageSize" : 3493888,
"totalIndexSize" : 4390912,
"indexSizes" : {
"_id_" : 1884160,
"id_1" : 2506752
},
"avgObjSize" : 54,
"nindexes" : 2,
"nchunks" : 5,
"shards" : {
"shard-rs-01" : {
"ns" : "testDB.test_tb1",
"size" : 1740366,
"count" : 32229,
"avgObjSize" : 54,
"storageSize" : 581632,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"shard-rs-02" : {
"ns" : "testDB.test_tb1",
"size" : 9059688,
"count" : 167772,
"avgObjSize" : 54,
"storageSize" : 2912256,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},

4) View chunk information (the chunks collection lives in the config database)

mongos> db.chunks.find()
{ "_id" : "config.system.sessions-_id_MinKey", "ns" : "config.system.sessions", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("5b05133fb00ebe2ebc79980e") }
{ "_id" : "testDB.test_tb1-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 2 }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_2.0", "lastmod" : Timestamp(3, 1), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 2 }, "max" : { "id" : 77674 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_77674.0", "lastmod" : Timestamp(2, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 77674 }, "max" : { "id" : 116510 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_116510.0", "lastmod" : Timestamp(2, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 116510 }, "max" : { "id" : 167772 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_167772.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 167772 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01" }
4.5.2 Verify Cluster High Availability

1) Kill the shard21 process

[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1257 1 5 13:52 ? 00:13:16 ./bin/mongod -f ./config/shard21.conf
root 1288 1 0 13:52 ? 00:01:45 ./bin/mongod -f ./config/config1.conf
root 1789 1 3 14:53 ? 00:05:34 ./bin/mongos -f ./config/route1.conf
root 2094 1195 4 15:41 pts/0 00:05:38 ./bin/mongo 192.168.112.101:10041
root 2339 1 1 16:14 ? 00:01:18 ./bin/mongod -f ./config/shard11.conf
[root@tango-centos01 ~]# kill -9 1257
[root@tango-centos01 mongodb-linux-x86_64-rhel70-3.6.3]# ps -ef|grep mongo
root 1288 1 0 13:52 ? 00:01:45 ./bin/mongod -f ./config/config1.conf
root 1789 1 3 14:53 ? 00:05:34 ./bin/mongos -f ./config/route1.conf
root 2094 1195 4 15:41 pts/0 00:05:38 ./bin/mongo 192.168.112.101:10041
root 2339 1 1 16:14 ? 00:01:18 ./bin/mongod -f ./config/shard11.conf
root 2866 2310 0 17:39 pts/1 00:00:00 grep --color=auto mongo

2) Check the sharding status

3) Test insert operations

mongos> for (var i=1;i<100000;i++) db.test_tb1.save({id:i,"test2":"testval2"})
WriteResult({ "nInserted" : 1 })

4) Check the sharding status

mongos> db.test_tb1.stats()
{
"sharded" : true,
"capped" : false,
"ns" : "testDB.test_tb1",
"count" : 299999,
"size" : 16199946,
"storageSize" : 7774208,
"totalIndexSize" : 9248768,
"indexSizes" : {
"_id_" : 4734976,
"id_1" : 4513792
},
"avgObjSize" : 54,
"nindexes" : 2,
"nchunks" : 7,
"shards" : {
"shard-rs-01" : {
"ns" : "testDB.test_tb1",
"size" : 1740420,
"count" : 32230,
"avgObjSize" : 54,
"storageSize" : 581632,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"shard-rs-02" : {
"ns" : "testDB.test_tb1",
"size" : 14459526,
"count" : 267769,
"avgObjSize" : 54,
"storageSize" : 7192576,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},

5) View chunk information

mongos> db.chunks.find()
{ "_id" : "config.system.sessions-_id_MinKey", "ns" : "config.system.sessions", "min" : { "_id" : { "$minKey" : 1 } }, "max" : { "_id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("5b05133fb00ebe2ebc79980e") }
{ "_id" : "testDB.test_tb1-id_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : { "$minKey" : 1 } }, "max" : { "id" : 2 }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_2.0", "lastmod" : Timestamp(3, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 2 }, "max" : { "id" : 23304 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_77674.0", "lastmod" : Timestamp(2, 2), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 77674 }, "max" : { "id" : 116510 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_116510.0", "lastmod" : Timestamp(2, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 116510 }, "max" : { "id" : 167772 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_167772.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 167772 }, "max" : { "id" : { "$maxKey" : 1 } }, "shard" : "shard-rs-01" }
{ "_id" : "testDB.test_tb1-id_23304.0", "lastmod" : Timestamp(3, 3), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 23304 }, "max" : { "id" : 62141 }, "shard" : "shard-rs-02" }
{ "_id" : "testDB.test_tb1-id_62141.0", "lastmod" : Timestamp(3, 4), "lastmodEpoch" : ObjectId("5b0531fbb00ebe2ebc7a9f7e"), "ns" : "testDB.test_tb1", "min" : { "id" : 62141 }, "max" : { "id" : 77674 }, "shard" : "shard-rs-02" }
mongos>
