Original article; please credit the source when reposting: reposted from 慢慢的回味
Article link: lvs+keepalived实现高可用集群 (LVS + keepalived high-availability cluster)
LVS (Linux Virtual Server) is open-source software started in May 1998 by Dr. Zhang Wensong, a graduate of the National University of Defense Technology (a project of Chinese origin). It provides simple load balancing on the Linux platform. This article shows how to combine LVS with keepalived to build a highly available Linux cluster.
LVS has three working modes: NAT (Network Address Translation), IP Tunneling, and Direct Routing (DR).
NAT mode: when a user request reaches the director, the director rewrites the packet's destination address (the virtual IP) to the address of the selected real server, and rewrites the destination port to the corresponding port on that real server, then forwards the request to it. When the real server sends data back to the user, the reply must pass through the director again, which rewrites the source address and source port back to the virtual IP and its port before delivering the data to the user, completing the load-balancing cycle. NAT is the least efficient of the modes, but it works with any operating system and in any environment, and it needs only a single public IP.
IP Tunneling mode: the director distributes connections to the back-end real servers, which process the requests and reply to the users directly, greatly improving the director's scheduling efficiency. The real servers are not restricted by physical location or topology and may sit on a LAN or a WAN, but they must support the IP tunneling protocol.
DR (Direct Routing) mode: DR is the most efficient mode. Like IP Tunneling, the director hands the request to a back-end real server, but it does so by rewriting the MAC address of the request packet; the real server then processes the request and replies to the user directly. Compared with IP Tunneling, DR avoids the overhead of IP encapsulation, but because it works at the link layer, the director and the real servers must be on the same physical network segment, with no router in between (in other words, connected through a single switch).
LVS supports eight scheduling algorithms: round robin (rr), weighted round robin (wrr), least connection (lc), weighted least connection (wlc), locality-based least connection (lblc), locality-based least connection with replication (lblcr), destination hashing (dh), and source hashing (sh).
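To make the modes and algorithms concrete, here is a minimal ipvsadm sketch of how they map onto command-line flags. The VIP, real-server IPs and port 5222 match the test environment described below; this is only an illustration, since later on keepalived (or the lvsdr script) will create these rules for us:

# create a TCP virtual service on the VIP, scheduled with weighted least connection (wlc)
ipvsadm -A -t 192.168.110.50:5222 -s wlc
# attach real servers in DR mode (-g); -m would select NAT, -i would select IP tunneling
ipvsadm -a -t 192.168.110.50:5222 -r 192.168.110.61:5222 -g -w 3
ipvsadm -a -t 192.168.110.50:5222 -r 192.168.110.62:5222 -g -w 3
ipvsadm -Ln        # list the resulting table with numeric addresses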
The following sections describe how to install and configure lvs + keepalived.
Environment used in this article:
Operating system: CentOS 6.4 64-bit
Master director: 192.168.110.51/24
Backup director: 192.168.110.52/24
VIP (virtual IP): 192.168.110.50/24
Back-end real servers: 192.168.110.61/24 | 192.168.110.62/24
LVS is supported by default in 2.6 kernels, so there is nothing to install for LVS itself; we only need the user-space configuration tool ipvsadm:
yum -y install ipvsadm    # install ipvsadm on both the master and backup directors
Check that the kernel supports LVS:
lsmod | grep ip_vs
ip_vs_wlc               1241  2
ip_vs                 115643  4 ip_vs_wlc
libcrc32c               1246  1 ip_vs
ipv6                  321422  78 ip_vs,ip6t_REJECT,nf_conntrack_ipv6,nf_defrag_ipv6

modprobe -l | grep ip_vs
kernel/net/netfilter/ipvs/ip_vs.ko
kernel/net/netfilter/ipvs/ip_vs_rr.ko
kernel/net/netfilter/ipvs/ip_vs_wrr.ko
kernel/net/netfilter/ipvs/ip_vs_lc.ko
kernel/net/netfilter/ipvs/ip_vs_wlc.ko
kernel/net/netfilter/ipvs/ip_vs_lblc.ko
kernel/net/netfilter/ipvs/ip_vs_lblcr.ko
kernel/net/netfilter/ipvs/ip_vs_dh.ko
kernel/net/netfilter/ipvs/ip_vs_sh.ko
kernel/net/netfilter/ipvs/ip_vs_sed.ko
kernel/net/netfilter/ipvs/ip_vs_nq.ko
kernel/net/netfilter/ipvs/ip_vs_ftp.ko
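If lsmod prints nothing, the ip_vs module simply has not been loaded yet; running ipvsadm loads it automatically, or it can be loaded by hand as a quick check:

modprobe ip_vs           # load the core IPVS module
lsmod | grep ip_vs       # it should now be listed
cat /proc/net/ip_vs      # the (empty) IPVS table confirms the module is active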
This article uses the LVS DR mode. First, deploy keepalived. Installing keepalived has already been covered earlier on this blog, so only the steps are listed briefly here:
Deploy keepalived on both the master and backup servers (ipvsadm was already installed from the rpm package above, so it does not need to be installed again):
vi /etc/sysctl.conf
net.ipv4.ip_forward = 1    # set this to 1; required for NAT mode
sysctl -p                  # apply the change
Install the dependencies, then download and build keepalived:
yum -y install openssl-devel
# download and install keepalived
wget http://www.keepalived.org/software/keepalived-1.1.19.tar.gz
tar -zxvf keepalived-1.1.19.tar.gz
cd keepalived-1.1.19
# --prefix=/ installs to the default locations (config file, binaries, init script)
# --with-kernel-dir points at the kernel headers; without them the LVS support
#   cannot be built (download matching headers from www.kernel.org if missing)
./configure --prefix=/ --with-kernel-dir=/usr/src/kernels/2.6.32/
make && make install
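With --prefix=/ the init script should end up in /etc/init.d/keepalived; assuming it did, it can be registered with the CentOS 6 init system so it starts at boot (an optional convenience, not required by the rest of this article):

chkconfig --add keepalived    # register the SysV init script
chkconfig keepalived on       # start keepalived automatically at boot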
After keepalived is installed on both the master and backup LVS machines, configure keepalived on the master first.
Edit the configuration file /etc/keepalived/keepalived.conf:
! Configuration File for keepalived

global_defs {
   notification_email {
      xxxx@163.com                                # mailbox to notify when a failure occurs
   }
   notification_email_from DR1@192.168.110.51     # mailbox used to send the notification
   smtp_server 192.168.110.51
   smtp_connect_timeout 30
   router_id LVS_DEVEL
}

vrrp_instance VI_1 {
    state MASTER                  # mark this node as the master LVS
    interface eth0                # interface used for HA (VRRP) traffic
    virtual_router_id 51          # must be identical on master and backup
    priority 100                  # priority; the backup must be lower than the master
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.110.50            # additional VIPs can be added on new lines
    }
}

virtual_server 192.168.110.50 5222 {
    delay_loop 6                  # check real-server state every 6 seconds
    lb_algo wlc                   # scheduling algorithm: weighted least connection
    lb_kind DR                    # LVS working mode: DR (direct routing)
    nat_mask 255.255.255.0
    persistence_timeout 50        # persistence (seconds): if a connection sends no traffic
                                  # for 50s, the next connection is rescheduled; used to
                                  # keep a client's session on the same real server
    protocol TCP                  # use TCP to monitor the real servers

    real_server 192.168.110.61 5222 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5222
        }
    }
    real_server 192.168.110.62 5222 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 5222
        }
    }
}

virtual_server 192.168.110.50 9090 {
    delay_loop 6
    lb_algo wlc
    lb_kind DR
    nat_mask 255.255.255.0
    persistence_timeout 50
    protocol TCP

    real_server 192.168.110.61 9090 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 9090
        }
    }
    real_server 192.168.110.62 9090 {
        weight 3
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
            connect_port 9090
        }
    }
}
To configure keepalived on the backup LVS, simply change state MASTER to state BACKUP and use a lower priority than the master's 100:
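In other words, only two lines inside vrrp_instance VI_1 differ on the backup; the rest of keepalived.conf is identical to the master's. The value 90 below is just an example, any value lower than the master's 100 works:

    state BACKUP                  # backup node instead of MASTER
    priority 90                   # lower than the master's priority 100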
Because keepalived is used, no script is needed to configure the LVS director; one is shown here anyway for reference, although we will not use it. LVS is built into the kernel and is driven by the ipvsadm command, so LVS operations are performed through ipvsadm. The script below sets up LVS DR mode by hand:
Write the script lvsdr:
Edit /etc/init.d/lvsdr (vi /etc/init.d/lvsdr) and add the following content:
#!/bin/sh
# virtual IP
VIP=192.168.110.50
# real servers, separated by commas
RIPS=192.168.110.61,192.168.110.62
# service ports
SERVICES=5222,9090

# source the standard init.d function library
. /etc/rc.d/init.d/functions

case $1 in
start)
        echo "Start LVS of DR Mode"
        # LVS DR mode does not need IP forwarding, but keepalived does
        #echo "0" > /proc/sys/net/ipv4/ip_forward
        # enable ICMP redirects
        echo "1" > /proc/sys/net/ipv4/conf/all/send_redirects
        echo "1" > /proc/sys/net/ipv4/conf/default/send_redirects
        echo "1" > /proc/sys/net/ipv4/conf/eth0/send_redirects
        # bind the virtual IP
        ifconfig eth0:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        route add -host $VIP dev eth0:0
        # flush existing LVS rules
        ipvsadm -C
        # add one virtual server entry per service port
        # -p keeps the same client on the same real server for a while, which
        # preserves sessions; drop it when testing or when another session
        # solution is in place
        for SERVICE in `echo $SERVICES | sed -e 's/,/\n/g'`
        do
                ipvsadm -A -t $VIP:$SERVICE -s wlc -p
        done
        # add the real server entries
        for RIP in `echo $RIPS | sed -e 's/,/\n/g'`
        do
                for SERVICE in `echo $SERVICES | sed -e 's/,/\n/g'`
                do
                        ipvsadm -a -t $VIP:$SERVICE -r $RIP:$SERVICE -g -w 3
                done
        done
        # set the tcp, tcpfin and udp connection timeouts
        ipvsadm --set 30 120 300
        ipvsadm
        ;;
stop)
        echo "Stop LVS DR"
        ifconfig eth0:0 down
        ipvsadm -C
        ;;
*)
        echo "Usage: $0 {start | stop}"
        exit 1
esac
Save and exit, then make the script executable; LVS DR mode can then be started with the service command:
chmod +x /etc/init.d/lvsdr
service lvsdr start
Place this script in /etc/init.d/ on both the master and backup LVS and grant it execute permission.
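After the script has been started on a director, the rules it installed can be inspected at any time; this is purely a verification step:

ipvsadm -Ln           # list virtual services and real servers with numeric addresses
ipvsadm -Ln --stats   # per-service packet and byte counters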
What we really need, however, is the real-server script. Write it next, also under /etc/init.d/, and edit the rs script:
vi /etc/init.d/lvsrs

#!/bin/sh
VIP=192.168.110.50

. /etc/rc.d/init.d/functions

case $1 in
start)
        echo "lo:0 port starting"
        # bind the VIP on the local lo interface so the real server can
        # respond to packets forwarded by the LVS director
        ifconfig lo:0 $VIP broadcast $VIP netmask 255.255.255.255 up
        # restrict ARP replies/announcements for the VIP
        echo "1" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "1" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "2" > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
stop)
        echo "lo:0 port closing"
        ifconfig lo:0 down
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/lo/arp_announce
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_ignore
        echo "0" > /proc/sys/net/ipv4/conf/all/arp_announce
        ;;
*)
        echo "Usage: $0 {start | stop}"
        exit 1
esac
1. Grant the script execute permission and start the service:
chmod +x /etc/init.d/lvsrs
service lvsrs start
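On each real server it is worth confirming that the VIP is bound on lo:0 and that the ARP restrictions took effect; a quick sanity check:

ip addr show lo                          # 192.168.110.50/32 should appear, labelled lo:0
sysctl net.ipv4.conf.all.arp_ignore      # expected value: 1
sysctl net.ipv4.conf.all.arp_announce    # expected value: 2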
2. Start keepalived on the master director:
service keepalived start
Check the log file (keepalived logs via syslog, /var/log/messages on CentOS):
May 19 17:25:26 CentOSVS1 Keepalived: Starting Keepalived v1.1.19 (05/16,2014)
May 19 17:25:26 CentOSVS1 Keepalived: Starting Healthcheck child process, pid=16605
May 19 17:25:26 CentOSVS1 Keepalived: Starting VRRP child process, pid=16606
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Registering Kernel netlink reflector
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Registering Kernel netlink command channel
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Registering gratutious ARP shared channel
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Registering Kernel netlink reflector
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Registering Kernel netlink command channel
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Configuration is using : 62941 Bytes
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Configuration is using : 19106 Bytes
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(9,10)]
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.61:5222]
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.62:5222]
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.61:9090]
May 19 17:25:26 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.62:9090]
May 19 17:25:26 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
May 19 17:25:27 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE
May 19 17:25:27 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
May 19 17:25:27 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.110.50
May 19 17:25:32 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.110.50
3. Then start keepalived on the backup director and check its log:
May 19 18:46:49 CentOSVS1 Keepalived: Starting Keepalived v1.1.19 (05/19,2014)
May 19 18:46:49 CentOSVS1 Keepalived: Starting Healthcheck child process, pid=13034
May 19 18:46:49 CentOSVS1 Keepalived: Starting VRRP child process, pid=13035
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Netlink reflector reports IP 192.168.110.52 added
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Netlink reflector reports IP 192.168.110.100 added
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Registering Kernel netlink reflector
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Registering Kernel netlink command channel
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Registering gratutious ARP shared channel
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.110.52 added
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.110.100 added
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Registering Kernel netlink reflector
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Registering Kernel netlink command channel
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Opening file '/etc/keepalived/keepalived.conf'.
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Configuration is using : 19104 Bytes
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Using LinkWatch kernel netlink reflector...
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Opening file '/etc/keepalived/keepalived.conf'.
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Configuration is using : 63031 Bytes
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: Using LinkWatch kernel netlink reflector...
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.61:5222]
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.62:5222]
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.61:9090]
May 19 18:46:49 CentOSVS1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.110.62:9090]
May 19 18:46:49 CentOSVS1 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(10,11)]
Then use the ipvsadm command to check the state (first make sure the back-end real servers have started the services being balanced, on ports 5222 and 9090):
[root@CentOSVS1 ~]# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.110.50:xmpp-client wlc persistent 50
  -> 192.168.110.61:xmpp-client   Route   3      0          0
  -> 192.168.110.62:xmpp-client   Route   3      1          0
TCP  192.168.110.50:websm wlc persistent 50
  -> 192.168.110.61:websm         Route   3      0          0
  -> 192.168.110.62:websm         Route   3      6          0

[root@CentOSVS1 ~]# ipvsadm -lcn
IPVS connection entries
pro expire state       source               virtual              destination
TCP 14:53  ESTABLISHED 192.168.110.1:57303  192.168.110.50:9090  192.168.110.62:9090
TCP 14:43  ESTABLISHED 192.168.110.1:55528  192.168.110.50:5222  192.168.110.62:5222
TCP 00:42  NONE        192.168.110.1:0      192.168.110.50:9090  192.168.110.62:9090
TCP 00:43  NONE        192.168.110.1:0      192.168.110.50:5222  192.168.110.62:5222
TCP 14:52  ESTABLISHED 192.168.110.1:57307  192.168.110.50:9090  192.168.110.62:9090
TCP 14:52  ESTABLISHED 192.168.110.1:57306  192.168.110.50:9090  192.168.110.62:9090
TCP 14:53  ESTABLISHED 192.168.110.1:57302  192.168.110.50:9090  192.168.110.62:9090
TCP 14:53  ESTABLISHED 192.168.110.1:57304  192.168.110.50:9090  192.168.110.62:9090
TCP 14:52  ESTABLISHED 192.168.110.1:57305  192.168.110.50:9090  192.168.110.62:9090
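Finally, the failover itself can be exercised with a simple test; the commands below are standard, though the exact log lines and timing will vary:

# on the master director: stop keepalived to simulate a failure
service keepalived stop

# on the backup director: within a few seconds the VIP should be taken over
ip addr show eth0 | grep 192.168.110.50
tail /var/log/messages      # expect: VRRP_Instance(VI_1) Transition to MASTER STATE

# on the master director: start keepalived again; with the higher priority (100)
# it preempts the backup and takes the VIP back
service keepalived start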