
MongoDB + Keepalived for Cluster High Availability


Architecture:
MongoDB shards the data, HAProxy acts as the proxy layer, and Keepalived handles VIP failover.

IP allocation (reconstructed from the addresses used below):

192.168.100.67  HAProxy + mongos (Keepalived master)
192.168.100.66  HAProxy + mongos (Keepalived backup)
192.168.100.63  mongos
192.168.100.68  virtual IP (VIP)

MongoDB Installation

See: the MongoDB installation guide.

HAProxy Installation

#Run on 192.168.100.67 and 192.168.100.66

1. Download the package
haproxy-1.4.24

2. Install

#Install on both proxy servers

2.1 Add a user

useradd -M -s /sbin/nologin haproxy

2.2 Install gcc

yum install gcc -y

2.3 Extract the archive

tar zxvf haproxy-1.4.24.tar.gz

2.4 Compile and install

cd ./haproxy-1.4.24
make TARGET=linux26 PREFIX=/main/server/haproxy
make install PREFIX=/main/server/haproxy

2.5 Configure

mkdir -p /main/server/haproxy/conf
cd /main/server/haproxy/conf
vi haproxy.cfg
##Enter the following content
global
chroot /main/server/haproxy/share/
log 127.0.0.1 local3 info
daemon
user haproxy
group haproxy
pidfile /var/run/haproxy.pid
nbproc 1
stats socket /tmp/haproxy level admin
stats maxconn 20
node master_loadbalance1
description lb1
maxconn 65536
nosplice
spread-checks 3

defaults
log global
mode tcp
option abortonclose
option allbackups
option tcpka
option redispatch
retries 3
timeout check 60s
timeout connect 600s
timeout queue 600s
timeout server 600s
timeout tarpit 60s
timeout client 600s

frontend mongos_pool 0.0.0.0:20001
mode tcp
maxconn 32768
no option dontlognull
option tcplog
log global
option log-separate-errors
default_backend mongos_pool

backend mongos_pool
mode tcp
balance source
default-server inter 2s fastinter 1s downinter 5s slowstart 60s rise 2 fall 5 weight 30

server mongos1 192.168.100.67:20000 check maxconn 2000
server mongos2 192.168.100.66:20000 check maxconn 2000
server mongos3 192.168.100.63:20000 check maxconn 2000
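The global section above sends logs to syslog facility local3 on 127.0.0.1, so the local syslog daemon has to accept UDP messages before any haproxy log line lands anywhere. A minimal rsyslog fragment for the two proxy hosts (the target log file path is only an example, not part of the original setup):

```
# /etc/rsyslog.conf -- accept UDP syslog and route haproxy's local3 facility
$ModLoad imudp
$UDPServerRun 514
local3.*    /var/log/haproxy.log
```

Restart rsyslog afterwards (service rsyslog restart) so the change takes effect.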

2.6 Start

/main/server/haproxy/sbin/haproxy -f /main/server/haproxy/conf/haproxy.cfg

2.7 Other commands

#Restart (the pid path must match the pidfile directive in haproxy.cfg)
/main/server/haproxy/sbin/haproxy -f /main/server/haproxy/conf/haproxy.cfg -st `cat /var/run/haproxy.pid`
#Stop
killall haproxy

2.8 Test

#Connect through port 20001
/main/mongodbtest/mongodb-linux-x86_64-3.0.2/bin/mongo 192.168.100.67:20001
>use testdb;
#Check the sharding status of a collection
>db.table1.stats();

3. Keepalived Installation
See: the keepalived installation guide (the Nginx part is not needed here).

3.1 Configure keepalived

#Master configuration

!Configuration File for keepalived

global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id HAPROXY1_DEVEL
}
vrrp_script chk_haproxy {
script "/main/server/haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state MASTER
interface eth0
virtual_router_id 31 #two instances on the same LAN must not share the same virtual_router_id
mcast_src_ip 192.168.100.67
priority 150
advert_int 1
authentication {
auth_type PASS
auth_pass fds#FSAF897
}
virtual_ipaddress {
192.168.100.68
}
track_script {
chk_haproxy
}
}

#Backup configuration

!Configuration File for keepalived

global_defs {
notification_email {
[email protected]
}
notification_email_from [email protected]
smtp_server 127.0.0.1
smtp_connect_timeout 30
router_id HAPROXY2_DEVEL
}
vrrp_script chk_haproxy {
script "/main/server/haproxy.sh"
interval 2
weight 2
}
vrrp_instance VI_1 {
state BACKUP
nopreempt
interface eth0
virtual_router_id 31 #two instances on the same LAN must not share the same virtual_router_id
mcast_src_ip 192.168.100.66
priority 100
advert_int 1
authentication {
auth_type PASS
auth_pass fds#FSAF897
}
virtual_ipaddress {
192.168.100.68
}
track_script {
chk_haproxy
}
}

4. Test

#Keepalived test

Stop keepalived on 192.168.100.67 and check whether 192.168.100.68 fails over to 192.168.100.66:
service keepalived stop
Check the log on 192.168.100.66:


Start keepalived on 192.168.100.67 again and check whether 192.168.100.68 moves back to 192.168.100.67:
service keepalived start
Check the log on 192.168.100.67:

Check the log on 192.168.100.66:

#HAProxy test

Notes on the monitoring script haproxy.sh:
If the number of haproxy processes is 0, the script runs the haproxy start command and then checks the process count again; if it is still 0, it stops the keepalived service so that the backup takes over the requests.
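The behavior just described can be sketched as a small shell script. This is a reconstruction, not the author's original /main/server/haproxy.sh; the binary and config paths assume the install prefix used earlier in this guide.

```shell
#!/bin/sh
# Reconstruction of the check script installed as /main/server/haproxy.sh.
# If no haproxy process is found, try to start haproxy once; if it is still
# not running afterwards, stop keepalived so the VIP fails over to the backup.
HAPROXY_BIN=/main/server/haproxy/sbin/haproxy
HAPROXY_CFG=/main/server/haproxy/conf/haproxy.cfg

# Count running haproxy processes by exact name.
count=$(pgrep -x haproxy | wc -l)

if [ "$count" -eq 0 ] && [ -x "$HAPROXY_BIN" ]; then
    "$HAPROXY_BIN" -f "$HAPROXY_CFG"   # first, try to restart haproxy
    sleep 2
    count=$(pgrep -x haproxy | wc -l)
    if [ "$count" -eq 0 ]; then
        killall keepalived             # restart failed: hand over the VIP
    fi
fi
```

keepalived runs this check every 2 seconds through the vrrp_script block configured above (interval 2).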
The tests below start from a state where all services are running normally.
1. Kill the haproxy process: the monitoring script run by keepalived starts haproxy again. The haproxy process recovers and keepalived does not fail over to the backup.
Connect to mongos and query the data status: everything is normal.
2. Move the haproxy config file elsewhere, then kill the haproxy process: haproxy can no longer be started, so the monitoring script runs its killall keepalived command and keepalived fails over to the backup.
Connect to mongos and query the data status: everything is normal.
3. Restore the haproxy config file and start keepalived again: haproxy is started via keepalived, and the backup hands the VIP back to the master.
Connect to mongos and query the data status: everything is back to normal.
