Deploying a MongoDB 4.2 Sharding Cluster on Linux
Contents
Introduction
——Sharded Cluster Components
——Shard Keys
——Architecture Diagram
Deploying a Sharded Cluster Without Authentication
——Steps
——Deployment
Deploying a Sharded Cluster With Authentication
——Steps
——Deployment
——Installation Plan
——Notes
References
Introduction
The first article, "", gave a general overview of the sharded cluster; see that article for the architecture. There are three main roles:
1. shard: A shard is a cluster that actually stores a partition of the data; it is a replica set and is administered exactly like an ordinary replica set. As of version 3.6, shards must be deployed as replica sets to guarantee high availability and redundancy.
2. mongos: The router process facing the application. It reads the shard metadata recorded in the config servers, routes and partitions the data accordingly, and balances the load; in practice several mongos instances can be deployed. A mongos can run on the application servers, which cuts the network latency between the application and the data tier, or on dedicated servers, which is easier to manage. For connection limits, the maximum connections configured on the shards must be much larger than the maximum configured on each mongos (see the sketch after this list).
3. config server: The configuration replica set. As of version 3.4, config servers must be deployed as a replica set to guarantee high availability and redundancy.
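As a rough illustration of that connection-limit guidance, the relevant setting is net.maxIncomingConnections; the numbers below are placeholders, not recommendations:

# in each shard mongod's config file: a much higher ceiling
net:
  maxIncomingConnections: 20000
# in each mongos config file: a lower ceiling, since several mongos share the same shards
net:
  maxIncomingConnections: 2000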
Sharded Cluster Components
A MongoDB sharded cluster consists of the following components:
shard: Each shard contains a subset of the sharded data. Each shard can be deployed as a replica set. As of MongoDB 3.6, shards must be deployed as a replica set to provide redundancy and high availability. Users, clients, or applications should only directly connect to a shard to perform local administrative and maintenance operations. (Shards must be deployed as replica sets; to administer a shard you log in to its instances directly.)
mongos: The mongos acts as a query router, providing an interface between client applications and the sharded cluster.
config servers: Config servers store metadata and configuration settings for the cluster. As of MongoDB 3.4, config servers must be deployed as a replica set (CSRS).
MongoDB shards data at the collection level, distributing the collection data across the shards in the cluster.
Deploying multiple mongos routers supports high availability and scalability. A common pattern is to place a mongos on each application server. Deploying one mongos router on each application server reduces network latency between the application and the router. (Putting mongos on the application servers cuts network round-trip latency.)
Alternatively, you can place a mongos router on dedicated hosts. Large deployments benefit from this approach because it decouples the number of client application servers from the number of mongos instances. This gives greater control over the number of connections the mongod instances serve. (Alternatively, run mongos on dedicated machines: this makes it easier to control the number of connections reaching the mongod instances and to manage the mongos routers centrally.)
Shard Keys
Every sharded collection has a shard key. Once chosen, the shard key cannot be changed (as of 4.2), and a sharded collection has exactly one shard key. The shard key must be backed by an index on the key itself or on a prefix of an index. For an empty collection, MongoDB creates the shard key index itself.
To shard a non-empty collection, the collection must have an index that starts with the shard key. For empty collections, MongoDB creates the index if the collection does not already have an appropriate index for the specified shard key. See Shard Key Indexes.
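For example, sharding a non-empty collection requires creating the shard key index first; this is a sketch using the test2.books collection and id key that appear later in this article:

use test2
db.books.createIndex({ id: 1 })               // required only if books already contains data
sh.shardCollection("test2.books", { id: 1 })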
Architecture Diagram
Deploying a Sharded Cluster Without Authentication
Steps
1) Deploy the shards (shard1 and shard2 in this walkthrough), each as a replica set (note clusterRole: shardsvr in the config files)
2) Deploy the config server as a replica set (note clusterRole: configsvr in the config files)
3) Deploy mongos, add the shard replica sets with addShard from within mongos, and enable sharding
Deployment
1. Download the MongoDB binaries; this walkthrough uses the Percona Server for MongoDB 4.2.2-3 tarball
wget https://www.percona.com/downloads/percona-server-mongodb-LATEST/percona-server-mongodb-4.2.2-3/binary/tarball/percona-server-mongodb-4.2.2-3-centos7-x86_64.tar.gz
2. Create the mongod OS user
useradd mongod
3. Installation plan
Host  | IP           | Port assignments
host1 | 10.255.0.96  | mongod shard11:27018 (replica set1), mongod shard12:27019 (replica set1), mongod shard13:27020 (replica set1), mongod config1:20000 (replica config), mongod config2:20001 (replica config), mongod config3:20002 (replica config), mongos1:27017
host2 | 10.255.0.223 | mongod shard21:27018 (replica set2), mongod shard22:27019 (replica set2), mongod shard23:27020 (replica set2), mongos2:27017
4. Create a directory for the MongoDB bin files
tar -xzvf percona-server-mongodb-4.2.2-3-centos7-x86_64.tar.gz
cp -rp percona-server-mongodb-4.2.2-3 /usr/local/src/mongodb
vim ~/.bash_profile   # add /usr/local/src/mongodb/bin to PATH
source ~/.bash_profile
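The line appended to ~/.bash_profile looks like this:

export PATH=/usr/local/src/mongodb/bin:$PATH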
5. Create the sharding data directories
For a test cluster, it is fine to run all three members of one replica set on a single machine.
1. Create the data directories
[root@host1 mongodb]# mkdir -p /home/services/mongodb
[root@host1 mongodb]# cd /home/services/mongodb
[root@host1 mongodb]# mkdir -p data/shard11
[root@host1 mongodb]# mkdir -p data/shard12
[root@host1 mongodb]# mkdir -p data/shard13
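If the instances are later run as the mongod user created in step 2 (the transcript here runs everything as root), give that user ownership of the tree:

chown -R mongod:mongod /home/services/mongodb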
2. Edit the config files and start the members of replica set1
mkdir /home/services/mongodb/conf
vim 27018.conf   # edit the config file; for each member, adjust the port and the file paths accordingly
net:
  port: 27018
systemLog:
  path: /home/services/mongodb/data/shard11.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/shard11.pid
storage:
  dbPath: /home/services/mongodb/data/shard11
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.1
operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard1
mongod --config /home/services/mongodb/conf/27018.conf
mongod --config /home/services/mongodb/conf/27019.conf
mongod --config /home/services/mongodb/conf/27020.conf
3. Initialize replica set1
Connect to one of the members: mongo 10.255.0.96:27018
>config={"_id":"shard1","members" : [{"_id" : 0,"host" : "10.255.0.96:27018"},{"_id" : 1,"host" : "10.255.0.96:27019"},{"_id" : 2,"host" : "10.255.0.96:27020"}]}
{
"_id" : "shard1",
"members" : [
{
"_id" : 0,
"host" : "10.255.0.96:27018"
},
{
"_id" : 1,
"host" : "10.255.0.96:27019"
},
{
"_id" : 2,
"host" : "10.255.0.96:27020"
}
]
}
> rs.initiate(config);
{
"ok" : 1,
"$clusterTime" : {
"clusterTime" : Timestamp(1579754704, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
},
"operationTime" : Timestamp(1579754704, 1)
}
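Optionally, confirm the members have settled into their roles before continuing (a check not in the original transcript):

rs.status().members.forEach(function(m) { print(m.name, m.stateStr) })   // expect one PRIMARY and two SECONDARYs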
6. Deploy shard2 on host2 using the same method
# on host2; change the ports and data directories in the config files accordingly
mkdir /home/services/mongodb/data/shard21
mkdir /home/services/mongodb/data/shard22
mkdir /home/services/mongodb/data/shard23
# also change the replica set name in the config files, from
replication:
  replSetName: shard1
# to
replication:
  replSetName: shard2
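Then initialize replica set2 the same way as in step 5.3 (hosts taken from the installation plan):

mongo 10.255.0.223:27018
> config={"_id":"shard2","members" : [{"_id" : 0,"host" : "10.255.0.223:27018"},{"_id" : 1,"host" : "10.255.0.223:27019"},{"_id" : 2,"host" : "10.255.0.223:27020"}]}
> rs.initiate(config);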
7. Deploy the config server replica set using the same method
mkdir /home/services/mongodb/data/config11
mkdir /home/services/mongodb/data/config12
mkdir /home/services/mongodb/data/config13
# the sharding section differs from that of a shard node; change it to
sharding:
  clusterRole: configsvr
# and change the replication section to
replication:
  replSetName: config
# full config file contents (20002.conf shown; adjust the port and paths for the other members)
net:
  port: 20002
systemLog:
  path: /home/services/mongodb/data/config13.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/config13.pid
storage:
  dbPath: /home/services/mongodb/data/config13
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.1
operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp
sharding:
  clusterRole: configsvr
replication:
  replSetName: config
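Initialize the config server replica set the same way; note that an explicit replica set config for a config server must also carry "configsvr" : true:

mongo 10.255.0.96:20000
> config={"_id":"config","configsvr":true,"members" : [{"_id" : 0,"host" : "10.255.0.96:20000"},{"_id" : 1,"host" : "10.255.0.96:20001"},{"_id" : 2,"host" : "10.255.0.96:20002"}]}
> rs.initiate(config);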
8. Start the two mongos processes
Note that configDB in the config file points to the config server replica set deployed above.
Config file: /home/services/mongodb/conf/27017.conf
net:
  port: 27017
systemLog:
  path: /home/services/mongodb/data/mongos.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/mongos.pid
sharding:
  configDB: config/10.255.0.96:20000,10.255.0.96:20001,10.255.0.96:20002
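Start a router on each host against this file; note the binary is mongos, not mongod:

mongos --config /home/services/mongodb/conf/27017.conf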
9. Add the shards
Connect to one of the mongos processes
mongo 10.255.0.96:27017
use admin
# add the two shard replica sets to this sharded cluster
db.runCommand( { addshard:"shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020",name:"s1"});
db.runCommand( { addshard:"shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020",name:"s2"});
10. View the sharding information
mongos> db.runCommand( { listshards : 1 } )
{
"shards" : [
{
"_id" : "s2",
"host" : "shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020",
"state" : 1
},
{
"_id" : "s1",
"host" : "shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020",
"state" : 1
}
],
"ok" : 1,
"operationTime" : Timestamp(1579761779, 2),
"$clusterTime" : {
"clusterTime" : Timestamp(1579761779, 2),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
11. Enable sharding
mongos> db.runCommand({enablesharding:"test2"})
Once sharding is enabled, the database's data can be placed on different shards, but each collection still lives entirely on a single shard; to shard an individual collection, a separate command must be run against it (shown below).
{
"ok" : 1,
"operationTime" : Timestamp(1579761925, 6),
"$clusterTime" : {
"clusterTime" : Timestamp(1579761925, 6),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> db.runCommand( { shardcollection : "test2.books", key : { id : 1 } } );
{
"collectionsharded" : "test2.books",
"collectionUUID" : UUID("a9a8138b-be7c-45f4-ba7d-390d556a0b1f"),
"ok" : 1,
"operationTime" : Timestamp(1579761989, 13),
"$clusterTime" : {
"clusterTime" : Timestamp(1579761989, 13),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
12. View the sharding information of the newly created database
mongos> show dbs
admin 0.000GB
config 0.002GB
test2 0.000GB
# This report includes which shard is primary for the database and the chunk distribution across the shards.
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5e2936bef9b3c7536b84a710")
}
shards:
{ "_id" : "s1", "host" : "shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020", "state" : 1 }
{ "_id" : "s2", "host" : "shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020", "state" : 1 }
active mongoses:
"4.2.2-3" : 2
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
s2 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : s2 Timestamp(1, 0)
{ "_id" : "test2", "primary" : "s1", "partitioned" : true, "version" : { "uuid" : UUID("df933288-ad00-49d3-83ed-37f5543915ae"), "lastMod" : 1 } }
test2.books
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
s1 1
{ "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : s1 Timestamp(1, 0)
mongos> db.stats()
{
"raw" : {
"shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020" : {
"db" : "tests",
"collections" : 0,
"views" : 0,
"objects" : 0,
"avgObjSize" : 0,
"dataSize" : 0,
"storageSize" : 0,
"numExtents" : 0,
"indexes" : 0,
"indexSize" : 0,
"scaleFactor" : 1,
"fileSize" : 0,
"fsUsedSize" : 0,
"fsTotalSize" : 0,
"ok" : 1
},
"shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020" : {
"db" : "tests",
"collections" : 0,
"views" : 0,
"objects" : 0,
"avgObjSize" : 0,
"dataSize" : 0,
"storageSize" : 0,
"numExtents" : 0,
"indexes" : 0,
"indexSize" : 0,
"scaleFactor" : 1,
"fileSize" : 0,
"fsUsedSize" : 0,
"fsTotalSize" : 0,
"ok" : 1
}
},
"objects" : 0,
"avgObjSize" : 0,
"dataSize" : 0,
"storageSize" : 0,
"numExtents" : 0,
"indexes" : 0,
"indexSize" : 0,
"scaleFactor" : 1,
"fileSize" : 0,
"ok" : 1,
"operationTime" : Timestamp(1579767088, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1579767092, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> db.books.stats();
{
"sharded" : true,
"capped" : false,
"wiredTiger" : {
"metadata" : {
"formatVersion" : 1
},
"creationString":.......
.......
.......
.......
},
"ok" : 1,
"operationTime" : Timestamp(1579767183, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1579767183, 1),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
13. Insert test data
mongos> for (var i = 1; i <= 20000; i++) db.books.save({id:i,name:"12345678",sex:"male",age:27,value:"test"});
WriteResult({ "nInserted" : 1 })
mongos> for (var i = 20000; i <= 40000; i++) db.books.save({id:i,name:"12345678",sex:"male",age:27,value:"test"});
WriteResult({ "nInserted" : 1 })
mongos> for(var i=1;i<=40;i++) { sh.splitAt('test2.books',{id:i*1000}) }
{
"ok" : 1,
"operationTime" : Timestamp(1579866076, 1),
"$clusterTime" : {
"clusterTime" : Timestamp(1579866083, 95),
"signature" : {
"hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
"keyId" : NumberLong(0)
}
}
}
mongos> sh.status({"verbose":1})
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5e2936bef9b3c7536b84a710")
}
shards:
{ "_id" : "s1", "host" : "shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020", "state" : 1 }
{ "_id" : "s2", "host" : "shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020", "state" : 1 }
active mongoses:
{ "_id" : "host2:27017", "advisoryHostFQDNs" : [ ], "mongoVersion" : "4.2.2-3", "ping" : ISODate("2020-01-24T11:43:13.278Z"), "up" : NumberLong(105251), "waiting" : true }
{ "_id" : "host1:27017", "advisoryHostFQDNs" : [ ], "mongoVersion" : "4.2.2-3", "ping" : ISODate("2020-01-24T11:43:13.018Z"), "up" : NumberLong(105377), "waiting" : true }
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
20 : Success
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
config.system.sessions
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
s2 1
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : s2 Timestamp(1, 0)
{ "_id" : "test2", "primary" : "s1", "partitioned" : true, "version" : { "uuid" : UUID("df933288-ad00-49d3-83ed-37f5543915ae"), "lastMod" : 1 } }
test2.books
shard key: { "id" : 1 }
unique: false
balancing: true
chunks:
s1 21
s2 20
{ "id" : { "$minKey" : 1 } } -->> { "id" : 1000 } on : s2 Timestamp(42, 0)
{ "id" : 1000 } -->> { "id" : 2000 } on : s2 Timestamp(43, 0)
{ "id" : 2000 } -->> { "id" : 3000 } on : s2 Timestamp(44, 0)
{ "id" : 3000 } -->> { "id" : 4000 } on : s2 Timestamp(45, 0)
{ "id" : 4000 } -->> { "id" : 5000 } on : s2 Timestamp(46, 0)
{ "id" : 5000 } -->> { "id" : 6000 } on : s2 Timestamp(47, 0)
{ "id" : 6000 } -->> { "id" : 7000 } on : s2 Timestamp(48, 0)
{ "id" : 7000 } -->> { "id" : 8000 } on : s2 Timestamp(49, 0)
{ "id" : 8000 } -->> { "id" : 9000 } on : s2 Timestamp(50, 0)
{ "id" : 9000 } -->> { "id" : 10000 } on : s2 Timestamp(51, 0)
{ "id" : 10000 } -->> { "id" : 11000 } on : s2 Timestamp(52, 0)
{ "id" : 11000 } -->> { "id" : 12000 } on : s2 Timestamp(53, 0)
{ "id" : 12000 } -->> { "id" : 13000 } on : s2 Timestamp(54, 0)
{ "id" : 13000 } -->> { "id" : 14000 } on : s2 Timestamp(55, 0)
{ "id" : 14000 } -->> { "id" : 15000 } on : s2 Timestamp(56, 0)
{ "id" : 15000 } -->> { "id" : 16000 } on : s2 Timestamp(57, 0)
{ "id" : 16000 } -->> { "id" : 17000 } on : s2 Timestamp(58, 0)
{ "id" : 17000 } -->> { "id" : 18000 } on : s2 Timestamp(59, 0)
{ "id" : 18000 } -->> { "id" : 19000 } on : s2 Timestamp(60, 0)
{ "id" : 19000 } -->> { "id" : 20000 } on : s2 Timestamp(61, 0)
{ "id" : 20000 } -->> { "id" : 21000 } on : s1 Timestamp(61, 1)
{ "id" : 21000 } -->> { "id" : 22000 } on : s1 Timestamp(23, 1)
{ "id" : 22000 } -->> { "id" : 23000 } on : s1 Timestamp(24, 1)
{ "id" : 23000 } -->> { "id" : 24000 } on : s1 Timestamp(25, 1)
{ "id" : 24000 } -->> { "id" : 25000 } on : s1 Timestamp(26, 1)
{ "id" : 25000 } -->> { "id" : 26000 } on : s1 Timestamp(27, 1)
{ "id" : 26000 } -->> { "id" : 27000 } on : s1 Timestamp(28, 1)
{ "id" : 27000 } -->> { "id" : 28000 } on : s1 Timestamp(29, 1)
{ "id" : 28000 } -->> { "id" : 29000 } on : s1 Timestamp(30, 1)
{ "id" : 29000 } -->> { "id" : 30000 } on : s1 Timestamp(31, 1)
{ "id" : 30000 } -->> { "id" : 31000 } on : s1 Timestamp(32, 1)
{ "id" : 31000 } -->> { "id" : 32000 } on : s1 Timestamp(33, 1)
{ "id" : 32000 } -->> { "id" : 33000 } on : s1 Timestamp(34, 1)
{ "id" : 33000 } -->> { "id" : 34000 } on : s1 Timestamp(35, 1)
{ "id" : 34000 } -->> { "id" : 35000 } on : s1 Timestamp(36, 1)
{ "id" : 35000 } -->> { "id" : 36000 } on : s1 Timestamp(37, 1)
{ "id" : 36000 } -->> { "id" : 37000 } on : s1 Timestamp(38, 1)
{ "id" : 37000 } -->> { "id" : 38000 } on : s1 Timestamp(39, 1)
{ "id" : 38000 } -->> { "id" : 39000 } on : s1 Timestamp(40, 1)
{ "id" : 39000 } -->> { "id" : 40000 } on : s1 Timestamp(41, 1)
{ "id" : 40000 } -->> { "id" : { "$maxKey" : 1 } } on : s1 Timestamp(41, 2)
Deploying a Sharded Cluster With Authentication
Steps
1. Create the mongod system account, the directories, etc.
2. Edit the config files, start the instances, and initialize the replica sets; administrator and monitoring accounts must be created on both the config server and the shard servers, while application accounts can be created through mongos alone
# administrator account (in the admin database)
use admin
db.createUser({user:"dbaAdmin", pwd: "wbN92MZIMLucrGZ05A6E1kvtOma", roles: [ "root" ] });
# authenticate with the administrator account
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
# application account and monitoring account: db.createUser({user:"cluster_user", pwd:"cluster_password", roles: [ {role:"readWrite",db:"cluster"}]})
db.createUser({user:"monitor_user", pwd:"monitor_password", roles: [ {role:"read",db:"local"},{role:"clusterMonitor",db:"admin"}]})
3. Configure and start mongos, connect to it, authenticate, then add the shards
# mongos authorization is actually handled by the config server, so the accounts created while deploying the config server are reused here and nothing is created on mongos itself
# authenticate with the administrator account
use admin
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
# add the shards
db.runCommand( { addshard:"shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020",name:"s1"});
db.runCommand( { addshard:"shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020",name:"s2"});
# enable sharding and check the status
sh.enableSharding("zhujzhuo")
sh.status()
Deployment
Installation Plan
Host  | IP           | Port assignments
host1 | 10.255.0.96  | mongod shard11:27018 (replica set1), mongod shard12:27019 (replica set1), mongod shard13:27020 (replica set1), mongod config1:20000 (replica config), mongod config2:20001 (replica config), mongod config3:20002 (replica config), mongos1:27017
host2 | 10.255.0.223 | mongod shard21:27018 (replica set2), mongod shard22:27019 (replica set2), mongod shard23:27020 (replica set2), mongos2:27017
Note: deployment is the same as without authentication (create the user and the directories, edit the config files); the only difference is in the config files, which now enable authentication.
1. Create the directories and edit the config files [note: the port, directory paths, replSetName, and clusterRole must be adjusted per instance]
# on host1: config server
mkdir -p /home/services/mongodb/data/config11
mkdir -p /home/services/mongodb/data/config12
mkdir -p /home/services/mongodb/data/config13
# on host1: shard server1
mkdir -p /home/services/mongodb/data/shard11
mkdir -p /home/services/mongodb/data/shard12
mkdir -p /home/services/mongodb/data/shard13
# on host2: shard server2
mkdir -p /home/services/mongodb/data/shard21
mkdir -p /home/services/mongodb/data/shard22
mkdir -p /home/services/mongodb/data/shard23
There is one config server replica set and two shard server replica sets here.
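The security.keyFile referenced by every config file below must exist on each host (with identical contents everywhere) before the instances start; a minimal sketch:

mkdir -p /data/mongodbdata/keys
openssl rand -base64 756 > /data/mongodbdata/keys/keyfile
chmod 400 /data/mongodbdata/keys/keyfile
chown -R mongod:mongod /data/mongodbdata/keys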
config server config files [20000.conf 20001.conf 20002.conf]:
net:
  port: 20000
  bindIpAll: true
systemLog:
  path: /home/services/mongodb/data/config11.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/config11.pid
storage:
  dbPath: /home/services/mongodb/data/config11
  directoryPerDB: true
  journal:
    enabled: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.01
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp
sharding:
  clusterRole: configsvr
replication:
  replSetName: config
security:
  keyFile: /data/mongodbdata/keys/keyfile
  clusterAuthMode: keyFile
  authorization: enabled
shard server config files [27018.conf 27019.conf 27020.conf]:
net:
  port: 27018
  bindIpAll: true
systemLog:
  path: /home/services/mongodb/data/shard11.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/shard11.pid
storage:
  dbPath: /home/services/mongodb/data/shard11
  directoryPerDB: true
  journal:
    enabled: true
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.01
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy
    indexConfig:
      prefixCompression: true
operationProfiling:
  slowOpThresholdMs: 50
  mode: slowOp
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shard1
security:
  keyFile: /data/mongodbdata/keys/keyfile
  clusterAuthMode: keyFile
  authorization: enabled
2. Start the config server, shard server1, and shard server2
# on host1: config server
mongod -f /home/services/mongodb/conf/20000.conf
mongod -f /home/services/mongodb/conf/20001.conf
mongod -f /home/services/mongodb/conf/20002.conf
# on host1: shard server1
mongod -f /home/services/mongodb/conf/27018.conf
mongod -f /home/services/mongodb/conf/27019.conf
mongod -f /home/services/mongodb/conf/27020.conf
# on host2: shard server2
mongod -f /home/services/mongodb/conf/27018.conf
mongod -f /home/services/mongodb/conf/27019.conf
mongod -f /home/services/mongodb/conf/27020.conf
3. Connect to one instance in each replica set, initialize the set, and create the administrator, monitoring, and application accounts
# config server
mongo localhost:20000
# note: an explicit config server replica set config must include "configsvr" : true
config={"_id":"config","configsvr":true,"members" : [{"_id" : 0,"host" : "10.255.0.96:20000"},{"_id" : 1,"host" : "10.255.0.96:20001"},{"_id" : 2,"host" : "10.255.0.96:20002"}]}
rs.initiate(config);
# accounts may be skipped on the config server and created later through mongos instead; create them in one place or the other, not both
# administrator account (in the admin database)
use admin
db.createUser({user:"dbaAdmin", pwd: "wbN92MZIMLucrGZ05A6E1kvtOma", roles: [ "root" ] });
# authenticate with the administrator account
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
# application account and monitoring account: db.createUser({user:"cluster_user", pwd:"cluster_password", roles: [ {role:"readWrite",db:"cluster"}]})
db.createUser({user:"monitor_user", pwd:"monitor_password", roles: [ {role:"read",db:"local"},{role:"clusterMonitor",db:"admin"}]})
# shard server1 (on host1)
mongo localhost:27018
config={"_id":"shard1","members" : [{"_id" : 0,"host" : "10.255.0.96:27018"},{"_id" : 1,"host" : "10.255.0.96:27019"},{"_id" : 2,"host" : "10.255.0.96:27020"}]}
rs.initiate(config);
# administrator account (in the admin database)
use admin
db.createUser({user:"dbaAdmin", pwd: "wbN92MZIMLucrGZ05A6E1kvtOma", roles: [ "root" ] });
# authenticate with the administrator account
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
# application account and monitoring account: db.createUser({user:"cluster_user", pwd:"cluster_password", roles: [ {role:"readWrite",db:"cluster"}]})
db.createUser({user:"monitor_user", pwd:"monitor_password", roles: [ {role:"read",db:"local"},{role:"clusterMonitor",db:"admin"}]})
# shard server2 (on host2)
mongo localhost:27018
config={"_id":"shard2","members" : [{"_id" : 0,"host" : "10.255.0.223:27018"},{"_id" : 1,"host" : "10.255.0.223:27019"},{"_id" : 2,"host" : "10.255.0.223:27020"}]}
rs.initiate(config);
# administrator account (in the admin database)
use admin
db.createUser({user:"dbaAdmin", pwd: "wbN92MZIMLucrGZ05A6E1kvtOma", roles: [ "root" ] });
# authenticate with the administrator account
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
# application account and monitoring account: db.createUser({user:"cluster_user", pwd:"cluster_password", roles: [ {role:"readWrite",db:"cluster"}]})
db.createUser({user:"monitor_user", pwd:"monitor_password", roles: [ {role:"read",db:"local"},{role:"clusterMonitor",db:"admin"}]})
4. Configure and start mongos
net:
  port: 27017
  bindIpAll: true
systemLog:
  path: /home/services/mongodb/data/mongos.log
  logRotate: rename
  destination: file
processManagement:
  fork: true
  pidFilePath: /home/services/mongodb/data/mongos.pid
sharding:
  configDB: config/10.255.0.96:20000,10.255.0.96:20001,10.255.0.96:20002
security:
  keyFile: /data/mongodbdata/keys/keyfile
  clusterAuthMode: keyFile
# authorization: enabled   <- deliberately absent: mongos does not take this setting
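Start the router with this file:

mongos -f /home/services/mongodb/conf/27017.conf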
5. Connect to mongos, authenticate, and add the shards (the admin, application, and monitoring accounts were already created in step 3 via the config server, so they are not created again here)
mongo localhost:27017
use admin
db.auth('dbaAdmin','wbN92MZIMLucrGZ05A6E1kvtOma')
db.runCommand( { addshard:"shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020",name:"s1"});
db.runCommand( { addshard:"shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020",name:"s2"});
sh.enableSharding("zhujzhuo")
sh.status()
mongos> sh.enableSharding ("zhujzhuo")
{
"ok" : 1,
"operationTime" : Timestamp(1582115909, 5),
"$clusterTime" : {
"clusterTime" : Timestamp(1582115909, 5),
"signature" : {
"hash" : BinData(0,"ybIOUIxzqejr6+VBgboGSFXqBwU="),
"keyId" : NumberLong("6795133351742144516")
}
}
}
mongos> db.runCommand( { shardcollection : "zhujzhuo.books", key : { id : 1 } } );
mongos> for (var i = 1; i <= 20000; i++) db.books.save({id:i,name:"12345678",sex:"male",age:27,value:"test"});
WriteResult({ "nInserted" : 1 })
mongos> sh.status()
--- Sharding Status ---
sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("5e4d29c8d20d6b25e2a363cf")
}
shards:
{ "_id" : "s1", "host" : "shard1/10.255.0.96:27018,10.255.0.96:27019,10.255.0.96:27020", "state" : 1 }
{ "_id" : "s2", "host" : "shard2/10.255.0.223:27018,10.255.0.223:27019,10.255.0.223:27020", "state" : 1 }
active mongoses:
"4.2.2-3" : 1
autosplit:
Currently enabled: yes
balancer:
Currently enabled: yes
Currently running: no
Failed balancer rounds in last 5 attempts: 0
Migration Results for the last 24 hours:
No recent migrations
databases:
{ "_id" : "config", "primary" : "config", "partitioned" : true }
{ "_id" : "zhujzhuo", "primary" : "s1", "partitioned" : true, "version" : { "uuid" : UUID("7e67fc8f-6ee1-4240-af08-7f058b5e9b87"), "lastMod" : 1 } }
Notes
Apart from a small difference in the config files, deploying the config server and the shard servers is identical:
# shard server
sharding:
  clusterRole: shardsvr
replication:
  replSetName: shardname
# config server
sharding:
  clusterRole: configsvr
replication:
  replSetName: configname
Since mongos authorization is actually handled by the config server, the accounts can equally be created right when the config server is deployed, skipping account creation on mongos.
In principle, the shard servers only need administrative and monitoring accounts; application accounts are granted through mongos, which is what serves and authenticates external clients.
References
https://docs.mongodb.com/manual/sharding/
https://docs.mongodb.com/manual/reference/configuration-options/#configuration-file