
Hands-On Docker Deployment (20,000 Words): A Complete Front End and Back End with Master-Slave Hot-Standby High Availability

Problems This Setup Solves

1. Not enough physical machines.
2. Physical machine resources are under-utilized.
3. The system needs high availability.
4. Updates must happen without downtime.

Deployment Preparation



System Deployment Design Diagram

Related Concepts

1 LVS

2 What Keepalived does

3 Keepalived and how it works

4 The VRRP protocol (Virtual Router Redundancy Protocol)

5 How VRRP works

6 Docker

7 Nginx

Starting the Deployment

Installing Docker

1 Remove old Docker versions (skip if Docker has never been installed)

sudo apt-get remove docker docker-engine docker.io containerd runc

2 Update the package index

sudo apt-get update

3 Allow apt to install packages from a repository over HTTPS

sudo apt-get -y install apt-transport-https ca-certificates curl software-properties-common

4 Add the GPG key

sudo curl -fsSL https://mirrors.aliyun.com/docker-ce/linux/ubuntu/gpg | sudo apt-key add -

5 Verify the key fingerprint

sudo apt-key fingerprint 0EBFCD88

6 Add the stable repository and update the index

sudo add-apt-repository "deb [arch=amd64] https://mirrors.aliyun.com/docker-ce/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update

7 List the available Docker versions

apt-cache madison docker-ce

8 Install a specific Docker version

sudo apt-get install -y docker-ce=17.12.1~ce-0~ubuntu

9 Verify that Docker installed successfully

docker --version

10 Add your user to the docker group so that docker can be run without sudo

sudo gpasswd -a 用户名 docker  # replace 用户名 with your own login name

11 Restart the Docker service and refresh your group membership; the installation is now complete

sudo service docker restart
newgrp docker
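To confirm the group change took effect (a quick check of my own, not from the original article), docker should now run without sudo:

docker ps    # should list containers without a permission-denied error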

Creating a Custom Docker Network

docker network create --subnet=172.18.0.0/24 mynet

Use ifconfig to check the network we just created.
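Alternatively (my addition, not in the original), the network can be checked with Docker itself:

docker network ls                # mynet should appear as a bridge network
docker network inspect mynet     # shows the 172.18.0.0/24 subnet and any attached containers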


Installing Keepalived on the Host

1 Install the build dependencies

sudo apt-get install -y gcc
sudo apt-get install -y g++
sudo apt-get install -y libssl-dev
sudo apt-get install -y daemon
sudo apt-get install -y make
sudo apt-get install -y sysv-rc-conf

2 Download and build Keepalived

cd /usr/local/
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
tar zxvf keepalived-1.2.18.tar.gz
cd keepalived-1.2.18
./configure --prefix=/usr/local/keepalived
make && make install

3 Register Keepalived as a system service

mkdir /etc/keepalived
mkdir /etc/sysconfig
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
ln -s /usr/local/keepalived/sbin/keepalived /sbin/

4 Edit the Keepalived init script

#!/bin/sh
#
# Startup script for the Keepalived daemon
#
# processname: keepalived
# pidfile: /var/run/keepalived.pid
# config: /etc/keepalived/keepalived.conf
# chkconfig: - 21 79
# description: Start and stop Keepalived

# Source function library
#. /etc/rc.d/init.d/functions
. /lib/lsb/init-functions

# Source configuration file (we set KEEPALIVED_OPTIONS there)
. /etc/sysconfig/keepalived

RETVAL=0
prog="keepalived"

start() {
    echo -n $"Starting $prog: "
    daemon keepalived start
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch /var/lock/subsys/$prog
}

stop() {
    echo -n $"Stopping $prog: "
    killproc keepalived
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f /var/lock/subsys/$prog
}

reload() {
    echo -n $"Reloading $prog: "
    killproc keepalived -1
    RETVAL=$?
    echo
}

# See how we were called.
case "$1" in
    start)
        start
        ;;
    stop)
        stop
        ;;
    reload)
        reload
        ;;
    restart)
        stop
        start
        ;;
    condrestart)
        if [ -f /var/lock/subsys/$prog ]; then
            stop
            start
        fi
        ;;
    status)
        status keepalived
        RETVAL=$?
        ;;
    *)
        echo "Usage: $0 {start|stop|reload|restart|condrestart|status}"
        RETVAL=1
esac

exit $RETVAL

5 Edit the Keepalived configuration file

cd /etc/keepalived
cp keepalived.conf keepalived.conf.back
rm keepalived.conf
vim keepalived.conf

Add the following content:

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.227.88
        192.168.227.99
    }
}
virtual_server 192.168.227.88 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.210 80 {
        weight 1
    }
}
virtual_server 192.168.227.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind NAT
    persistence_timeout 50
    protocol TCP
    real_server 172.18.0.220 80 {
        weight 1
    }
}

Note: interface must be set to your server's actual NIC name, otherwise the VIP cannot be mapped.
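If you are not sure of the NIC name, it can be read from the system first (this check is my addition):

ip addr    # look for the interface that carries the 192.168.227.x address, e.g. ens33 or eth0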

6 Start Keepalived

systemctl daemon-reload
systemctl enable keepalived.service
systemctl start keepalived.service

After every change to the configuration file you must run systemctl daemon-reload, otherwise the new configuration will not take effect.

7 Check that the Keepalived process is running

ps -ef|grep keepalived


8 Check the Keepalived service status

systemctl status keepalived.service


9 Check that the virtual IPs have been bound

ip addr


10 Ping both virtual IPs


Both IPs respond, so the host Keepalived installation is complete.

Front-End Master-Slave Hot Standby with Docker Containers


Correction to the diagram: the IP being accessed should be 172.18.0.210, the virtual IP created inside the container network.

Next, we set up the master-slave portion of the front-end servers.

1 Pull the CentOS 7 image

docker pull centos:7

2 Create a container

docker run -itd --name centos1 centos:7

3 Enter the centos1 container

docker exec -it centos1 bash

4 Install common tools

yum update -y
yum install -y vim
yum install -y wget
yum install -y gcc-c++
yum install -y pcre pcre-devel
yum install -y zlib zlib-devel
yum install -y openssl-devel
yum install -y popt-devel
yum install -y initscripts
yum install -y net-tools

5 Commit the container as a new image, so future containers can be created directly from it

docker commit -a 'cfh' -m 'centos with common tools' centos1 centos_base

6 Remove the centos1 container, then create a new container from the base image and install Keepalived and Nginx in it

docker rm -f centos1
# systemctl is needed inside the container, so start it with /usr/sbin/init
docker run -it --name centos_temp -d --privileged centos_base /usr/sbin/init
docker exec -it centos_temp bash

7 Install Nginx

# Installing nginx with yum requires the Nginx repository; add it first
rpm -Uvh http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm
# Install nginx
yum install -y nginx
# Start nginx
systemctl start nginx.service

8 Install Keepalived

# 1. Download keepalived
wget http://www.keepalived.org/software/keepalived-1.2.18.tar.gz
# 2. Extract it
tar -zxvf keepalived-1.2.18.tar.gz -C /usr/local/
# 3. Install the openssl packages it needs
yum install -y openssl openssl-devel
# 4. Configure the build
cd /usr/local/keepalived-1.2.18/ && ./configure --prefix=/usr/local/keepalived
# 5. Build and install
make && make install

9 Register Keepalived as a system service

mkdir /etc/keepalived
cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/sbin/keepalived /usr/sbin/
# Optionally enable start on boot: chkconfig keepalived on
# Installation is complete at this point.
# If startup fails, run the following:
cd /usr/sbin/
rm -f keepalived
cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
# Start keepalived
systemctl daemon-reload              # reload unit files
systemctl enable keepalived.service  # enable start on boot
systemctl start keepalived.service   # start the service
systemctl status keepalived.service  # check the service status

10 Edit /etc/keepalived/keepalived.conf

# Back up the configuration file
cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.backup
rm -f keepalived.conf
vim keepalived.conf

# The configuration file is as follows
vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 121
    mcast_src_ip 172.18.0.201
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.210
    }
}

11 Edit the Nginx configuration file

vim /etc/nginx/conf.d/default.conf

upstream tomcat {
    server 172.18.0.11:80;
    server 172.18.0.12:80;
    server 172.18.0.13:80;
}
server {
    listen       80;
    server_name  172.18.0.210;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

12 Add the heartbeat check script

vim nginx_check.sh
# Script contents:
#!/bin/bash
A=`ps -C nginx --no-header | wc -l`
if [ $A -eq 0 ];then
    /usr/sbin/nginx            # path of the yum-installed nginx binary
    sleep 2
    if [ `ps -C nginx --no-header | wc -l` -eq 0 ];then
        killall keepalived
    fi
fi

13 Make the script executable

chmod +x nginx_check.sh

14 Enable start on boot

systemctl enable keepalived.service
# start keepalived
systemctl daemon-reload
systemctl start keepalived.service

15 Check that the virtual IP responds

ping 172.18.0.210

16 Commit the centos_temp container as a new image

docker commit -a 'cfh' -m 'centos with keepalived nginx' centos_temp centos_kn

17 Remove all containers

docker rm -f `docker ps -a -q`

18 Create new containers from the image committed earlier

Name them centos_web_master and centos_web_slave.

docker run --privileged -tid \
--name centos_web_master --restart=always \
--net mynet --ip 172.18.0.201 \
centos_kn /usr/sbin/init

docker run --privileged -tid \
--name centos_web_slave --restart=always \
--net mynet --ip 172.18.0.202 \
centos_kn /usr/sbin/init

19 Edit the Nginx and Keepalived configuration inside centos_web_slave

The Keepalived changes are as follows:

state BACKUP                 # make this node the backup (the article calls it the slave)
mcast_src_ip 172.18.0.202    # change to this container's own IP
priority 80                  # lower priority than the master

The Nginx configuration is as follows:

upstream tomcat {
    server 172.18.0.14:80;
    server 172.18.0.15:80;
    server 172.18.0.16:80;
}
server {
    listen       80;
    server_name  172.18.0.210;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Restart Keepalived and Nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

20 Launch six front-end Nginx servers

docker pull nginx

nginx_web_1='/home/root123/cfh/nginx1'
nginx_web_2='/home/root123/cfh/nginx2'
nginx_web_3='/home/root123/cfh/nginx3'
nginx_web_4='/home/root123/cfh/nginx4'
nginx_web_5='/home/root123/cfh/nginx5'
nginx_web_6='/home/root123/cfh/nginx6'

mkdir -p ${nginx_web_1}/conf ${nginx_web_1}/conf.d ${nginx_web_1}/html ${nginx_web_1}/logs
mkdir -p ${nginx_web_2}/conf ${nginx_web_2}/conf.d ${nginx_web_2}/html ${nginx_web_2}/logs
mkdir -p ${nginx_web_3}/conf ${nginx_web_3}/conf.d ${nginx_web_3}/html ${nginx_web_3}/logs
mkdir -p ${nginx_web_4}/conf ${nginx_web_4}/conf.d ${nginx_web_4}/html ${nginx_web_4}/logs
mkdir -p ${nginx_web_5}/conf ${nginx_web_5}/conf.d ${nginx_web_5}/html ${nginx_web_5}/logs
mkdir -p ${nginx_web_6}/conf ${nginx_web_6}/conf.d ${nginx_web_6}/html ${nginx_web_6}/logs

docker run -it --name temp_nginx -d nginx
docker ps
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_1}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_1}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_2}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_2}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_3}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_3}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_4}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_4}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_5}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_5}/conf.d/default.conf
docker cp temp_nginx:/etc/nginx/nginx.conf ${nginx_web_6}/conf
docker cp temp_nginx:/etc/nginx/conf.d/default.conf ${nginx_web_6}/conf.d/default.conf
docker rm -f temp_nginx

docker run -d --name nginx_web_1 \
--network=mynet --ip 172.18.0.11 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_1}/html/:/usr/share/nginx/html \
-v ${nginx_web_1}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_1}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_1}/logs/:/var/log/nginx --privileged --restart=always nginx

docker run -d --name nginx_web_2 \
--network=mynet --ip 172.18.0.12 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_2}/html/:/usr/share/nginx/html \
-v ${nginx_web_2}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_2}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_2}/logs/:/var/log/nginx --privileged --restart=always nginx

docker run -d --name nginx_web_3 \
--network=mynet --ip 172.18.0.13 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_3}/html/:/usr/share/nginx/html \
-v ${nginx_web_3}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_3}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_3}/logs/:/var/log/nginx --privileged --restart=always nginx

docker run -d --name nginx_web_4 \
--network=mynet --ip 172.18.0.14 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_4}/html/:/usr/share/nginx/html \
-v ${nginx_web_4}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_4}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_4}/logs/:/var/log/nginx --privileged --restart=always nginx

docker run -d --name nginx_web_5 \
--network=mynet --ip 172.18.0.15 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_5}/html/:/usr/share/nginx/html \
-v ${nginx_web_5}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_5}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_5}/logs/:/var/log/nginx --privileged --restart=always nginx

docker run -d --name nginx_web_6 \
--network=mynet --ip 172.18.0.16 \
-v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
-v ${nginx_web_6}/html/:/usr/share/nginx/html \
-v ${nginx_web_6}/conf/nginx.conf:/etc/nginx/nginx.conf \
-v ${nginx_web_6}/conf.d/default.conf:/etc/nginx/conf.d/default.conf \
-v ${nginx_web_6}/logs/:/var/log/nginx --privileged --restart=always nginx

cd ${nginx_web_1}/html
cp /home/server/envconf/index.html ${nginx_web_1}/html/index.html
cd ${nginx_web_2}/html
cp /home/server/envconf/index.html ${nginx_web_2}/html/index.html
cd ${nginx_web_3}/html
cp /home/server/envconf/index.html ${nginx_web_3}/html/index.html
cd ${nginx_web_4}/html
cp /home/server/envconf/index.html ${nginx_web_4}/html/index.html
cd ${nginx_web_5}/html
cp /home/server/envconf/index.html ${nginx_web_5}/html/index.html
cd ${nginx_web_6}/html
cp /home/server/envconf/index.html ${nginx_web_6}/html/index.html

/home/server/envconf/ is just where I keep my own files; readers can create any directory they like. The contents of index.html are below.

<!DOCTYPE html>
<html lang="en" xmlns:v-on="http://www.w3.org/1999/xhtml">
<head>
    <meta charset="UTF-8">
    <title>主从测试</title>
</head>
<script src="https://cdn.jsdelivr.net/npm/vue"></script>
<script src="https://cdn.staticfile.org/vue-resource/1.5.1/vue-resource.min.js"></script>
<body>
<div id="app" style="height: 300px;width: 600px">
    <h1 style="color: red">我是前端工程 WEB 页面</h1>
    <br>
    showMsg:{{message}}
    <br>
    <br>
    <br>
    <button v-on:click="getMsg">获取后台数据</button>
</div>
</body>
</html>
<script>
    var app = new Vue({
        el: '#app',
        data: {
            message: 'Hello Vue!'
        },
        methods: {
            getMsg: function () {
                var ip="http://192.168.227.99"
                var that=this;
                // send a GET request
                that.$http.get(ip+'/api/test').then(function(res){
                    that.message=res.data;
                },function(){
                    console.log('请求失败处理');
                });
            }
        }
    })
</script>

21 Open 192.168.227.88 in a browser; the index.html page should be displayed.

22 Test
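The original article shows this test only as screenshots; a minimal command-line failover check, based on the setup above, could look like this:

# Request the page through the host VIP; it is served by nginx_web_1..3 behind the master.
curl http://192.168.227.88/

# Simulate a master failure and confirm the slave takes over the 172.18.0.210 VIP.
docker exec centos_web_master systemctl stop keepalived.service
curl http://192.168.227.88/    # should still return index.html, now via centos_web_slave

# Restore the master afterwards.
docker exec centos_web_master systemctl start keepalived.service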

Back-End Master-Slave Hot Standby with Docker Containers

1 Create a Dockerfile

FROM openjdk:10
MAINTAINER cfh
WORKDIR /home/soft
CMD ["nohup","java","-jar","docker_server.jar"]

2 Build the image

docker build -t myopenjdk .

3 Create six back-end servers from the image

docker volume create S1
docker volume inspect S1
docker volume create S2
docker volume inspect S2
docker volume create S3
docker volume inspect S3
docker volume create S4
docker volume inspect S4
docker volume create S5
docker volume inspect S5
docker volume create S6
docker volume inspect S6
cd /var/lib/docker/volumes/S1/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S1/_data/docker_server.jar
cd /var/lib/docker/volumes/S2/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S2/_data/docker_server.jar
cd /var/lib/docker/volumes/S3/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S3/_data/docker_server.jar
cd /var/lib/docker/volumes/S4/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S4/_data/docker_server.jar
cd /var/lib/docker/volumes/S5/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S5/_data/docker_server.jar
cd /var/lib/docker/volumes/S6/_data
cp /home/server/envconf/docker_server.jar /var/lib/docker/volumes/S6/_data/docker_server.jar
docker run -it -d --name server_1 -v S1:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.101 --restart=always myopenjdk
docker run -it -d --name server_2 -v S2:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.102 --restart=always myopenjdk
docker run -it -d --name server_3 -v S3:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.103 --restart=always myopenjdk
docker run -it -d --name server_4 -v S4:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.104 --restart=always myopenjdk
docker run -it -d --name server_5 -v S5:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.105 --restart=always myopenjdk
docker run -it -d --name server_6 -v S6:/home/soft -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai --net mynet --ip 172.18.0.106 --restart=always myopenjdk

docker_server.jar is a test program; its main code is shown below.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import javax.servlet.http.HttpServletResponse;
import java.util.LinkedHashMap;
import java.util.Map;

@RestController
@RequestMapping("api")
@CrossOrigin("*")
public class TestController {

    @Value("${server.port}")
    public int port;

    @RequestMapping(value = "/test", method = RequestMethod.GET)
    public Map<String, Object> test(HttpServletResponse response) {
        response.setHeader("Access-Control-Allow-Origin", "*");
        response.setHeader("Access-Control-Allow-Methods", "GET");
        response.setHeader("Access-Control-Allow-Headers", "token");
        Map<String, Object> objectMap = new LinkedHashMap<>();
        objectMap.put("code", 10000);
        objectMap.put("msg", "ok");
        objectMap.put("server_port", "服务器端口:" + port);
        return objectMap;
    }
}
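Note that the Dockerfile's CMD starts the jar without naming a port, while the Nginx upstream blocks below expect the six back ends to listen on 6001-6006. One way to reconcile this (my own assumption; the article does not say how each instance gets its port) is to override Spring Boot's server.port per container, for example:

# Hypothetical variant of the server_1 command: pass the port at run time
# instead of baking a different configuration into each volume.
docker run -it -d --name server_1 -v S1:/home/soft \
  -v /etc/localtime:/etc/localtime -e TZ=Asia/Shanghai \
  --net mynet --ip 172.18.0.101 --restart=always \
  myopenjdk java -jar docker_server.jar --server.port=6001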

4 Create the back-end master and slave containers

Master server

docker run --privileged -tid --name centos_server_master --restart=always --net mynet --ip 172.18.0.203 centos_kn /usr/sbin/init

Master server Keepalived configuration

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.203
    priority 100
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.220
    }
}

Master server Nginx configuration

upstream tomcat {
    server 172.18.0.101:6001;
    server 172.18.0.102:6002;
    server 172.18.0.103:6003;
}
server {
    listen       80;
    server_name  172.18.0.220;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Restart Keepalived and Nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

Slave server

docker run --privileged -tid --name centos_server_slave --restart=always --net mynet --ip 172.18.0.204 centos_kn /usr/sbin/init

Slave server Keepalived configuration

vrrp_script chk_nginx {
    script "/etc/keepalived/nginx_check.sh"
    interval 2
    weight -20
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 110
    mcast_src_ip 172.18.0.204
    priority 80
    nopreempt
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    track_script {
        chk_nginx
    }
    virtual_ipaddress {
        172.18.0.220
    }
}

Slave server Nginx configuration

upstream tomcat {
    server 172.18.0.104:6004;
    server 172.18.0.105:6005;
    server 172.18.0.106:6006;
}
server {
    listen       80;
    server_name  172.18.0.220;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        proxy_pass http://tomcat;
        index index.html index.htm;
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

Restart Keepalived and Nginx

systemctl daemon-reload
systemctl restart keepalived.service
systemctl restart nginx.service

Command-line verification
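The verification screenshots are not reproduced here; the same check can be done with curl against the back-end virtual IPs (a sketch based on the configuration above):

curl http://172.18.0.220/api/test      # back-end VIP inside the Docker network
curl http://192.168.227.99/api/test    # host VIP that LVS forwards to 172.18.0.220
# Repeated calls should return JSON from the back ends, with server_port reporting 6001, 6002 or 6003.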


Browser verification


Installing Portainer

Portainer is a container-management UI that shows the running state of each container.

docker search portainer
docker pull portainer/portainer
docker run -d -p 9000:9000 \
    --restart=always \
    -v /var/run/docker.sock:/var/run/docker.sock \
    --name prtainer-eureka \
    portainer/portainer
# Then open http://192.168.227.171:9000 in a browser

On first login you are asked to set a password; the default account is admin. After the password is created, the page moves to the next screen: choose to manage the local containers (Local) and click Confirm.


Conclusion:

Finally, make sure you understand Docker's three core elements: images/containers, data volumes, and network management.
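As a quick reference (my addition), each of the three can be inspected from the CLI:

docker images       # images (and docker ps -a for the containers created from them)
docker volume ls    # data volumes such as S1..S6
docker network ls   # networks such as mynet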

Permanent link to this article:

https://tech.souyunku.com/?p=43138
