LVS/DR + keepalived application test implementation document

Introduction:

LVS/DR + keepalived application test implementation document

Original: http://beacon.blog.51cto.com/442731/94723


     Note: this test document is aimed at readers who have never used LVS + keepalived; corrections to any errors in it are welcome from anyone who reads it. The document is long because it covers many details, and some points that hardly needed explaining are explained at length anyway. For those reasons, experienced readers may wish to skip it. To be continued...


I. Test name

Testing keepalived in LVS/DR mode

II. Test purpose

At present a single LVS/DR director forwards user requests to the back-end web servers, with the forwarding controlled by a script. Two problems need to be solved:

⑴ when a back-end web server fails and can no longer serve web requests, the LVS/DR director keeps forwarding requests to it;

⑵ when the LVS/DR director itself fails, the web service is interrupted entirely. We therefore need software that automatically removes and re-adds failed web nodes, and that runs the LVS/DR directors as an active/standby pair.

三、测试环境

1、本测试中使用4台服务器,包括2台lvs/dr和2台web/server。具体情况见下表,

服务器名称 服务器网络配置

真实IP地址(RIP)
lvs-dr1 (dr1) 192.168.1.210
lvs-dr2 (dr2) 192.168.1.211
lvs-web1 (web1) 192.168.1.216
lvs-web2 (web2) 192.168.1.217

虚拟IP地址(VIP)
web VIP1 192.168.1.215
web VIP2 192.168.1.220

2. The current situation is shown in the diagram below.


⑴ User requests to the site first reach lvs-dr1, where a script decides how each request is forwarded.

⑵ The forwarded requests are answered by lvs-web1 and lvs-web2 and returned directly to the client. This arrangement can run into the following problems:

① When lvs-dr1 fails, it can no longer accept user requests or forward them to the real web servers (even if those servers are healthy), so the whole web service goes down; in other words, the LVS director is a single point of failure.

② When lvs-dr1 is healthy but a real web server such as lvs-web1 fails, lvs-dr1 has no idea whether the real server is still serving, so it keeps forwarding user requests to the failed lvs-web1. Those requests go unanswered: some users can reach the site while others cannot.

Note: "server failure" here covers a crashed server, a stopped web service, a loose network cable, and so on.

 3. Given the problems above, we need health monitoring of both the LVS directors and the web servers, so that faults can be cleared without interrupting service: add a second LVS director in an active/standby pair to remove the single point of failure, and automatically delete failed web nodes and re-add them to the cluster once they recover. The target state is shown in the diagram below.


The diagram above implies the following behavior:

⑴ lvs-dr1 and lvs-dr2 run in a master/backup configuration; the backup uses heartbeats to detect whether the master is alive, and when the master disappears it takes over the virtual IPs and forwards user requests to the web servers.

⑵ The LVS director monitors the web servers; when a server fails (crash, stopped service, loose cable) its node is removed automatically, and when the server comes back online and serves again it is automatically re-added to the cluster.
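The master/backup behavior in ⑴ boils down to a priority election: each keepalived VRRP instance advertises a priority, and the director with the highest priority becomes MASTER and holds the VIPs. The shell sketch below is purely illustrative (not part of keepalived); the priorities 200/100 match this test plan:

```shell
# Illustrative only: pick the MASTER the way VRRP does, by highest priority.
# Arguments are "name:priority" pairs; prints the winning director's name.
elect_master() {
    printf '%s\n' "$@" | sort -t: -k2 -rn | head -1 | cut -d: -f1
}

elect_master lvs-dr1:200 lvs-dr2:100   # prints: lvs-dr1
```

In real VRRP, ties are broken by IP address and a returning higher-priority director preempts the current MASTER, which is exactly what the logs in step 6 later show.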

4. Based on this analysis of the current and target states, the following software was prepared for the test.

Server name   Installed software
lvs-dr1       ipvsadm-1.24, keepalived-1.1.15
lvs-dr2       ipvsadm-1.24, keepalived-1.1.15
lvs-web1      cronolog-1.6.2, httpd-2.2.9
lvs-web2      cronolog-1.6.2, httpd-2.2.9

 Notes:

⑴ This whole test was carried out outside production, so VMware Workstation 5.5.1 was used as the platform for all of the nodes above.

⑵ Every node in the test environment runs CentOS 5.1 / CentOS release 5 (Final) with kernel 2.6.18-53.el5. How to check the OS and kernel version?

① /etc/issue


② uname -a


③ /proc/version


⑶ The two key components of an LVS cluster are the ipvs kernel module and the ipvsadm toolkit. The ipvs module ships with this kernel version but is not loaded by default; it can be loaded by hand, or it will be loaded after ipvsadm is installed. Use the modprobe command to load the ipvs module by hand and check whether it is loaded.
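As a sketch, that manual load-and-verify step might look like the function below. It assumes a root shell and a kernel that ships the ip_vs module, as on this CentOS 5.1 system:

```shell
# Sketch: load the ip_vs module if it is not already in the kernel,
# then confirm that lsmod lists it. Requires root; ip_vs is the module
# name used by 2.6.18-era kernels.
ensure_ipvs() {
    lsmod | grep -q '^ip_vs' && return 0   # already loaded
    modprobe ip_vs || return 1             # pull it into the kernel
    lsmod | grep -q '^ip_vs'               # verify it now shows up
}
```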


 ipvsadm download: http://www.linuxvirtualserver.org/software/kernel-2.6/ipvsadm-1.24.tar.gz

⑷ keepalived is a piece of software that monitors the state of the LVS directors and the web servers. Below is the official site's description of keepalived; the official site is http://www.keepalived.org

What is Keepalived ?
The main goal of the keepalived project is to add a strong & robust keepalive facility to the Linux Virtual Server project. This project is written in C with multilayer TCP/IP stack checks. Keepalived implements a framework based on three family checks : Layer3, Layer4 & Layer5/7. This framework gives the daemon the ability of checking a LVS server pool states. When one of the server of the LVS server pool is down, keepalived informs the linux kernel via a setsockopt call to remove this server entrie from the LVS topology. In addition keepalived implements an independent VRRPv2 stack to handle director failover. So in short keepalived is a userspace daemon for LVS cluster nodes healthchecks and LVS directors failover.

Why using Keepalived ?
If your are using a LVS director to loadbalance a server pool in a production environnement, you may want to have a robust solution for healthcheck & failover.

keepalived download: http://www.keepalived.org/software/keepalived-1.1.15.tar.gz

⑸ The web servers need Apache 2; cronolog is optional software that rotates and formats log files, making Apache logs easier to manage and analyze.

Apache 2 download: http://apache.mirror.phpchina.com/httpd/httpd-2.2.9.tar.gz

cronolog download: http://cronolog.org/download/cronolog-1.6.2.tar.gz

 IV. Test procedure

1. Configure each node's hostname and IP address (procedure omitted). The configuration is:

Server name       IP address
lvs-dr1 (dr1)     192.168.1.210
lvs-dr2 (dr2)     192.168.1.211
lvs-web1 (web1)   192.168.1.216
lvs-web2 (web2)   192.168.1.217

 Note: changing the hostname requires editing both /etc/sysconfig/network and /etc/hosts, then rebooting for the change to take effect.
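For reference, the two files for the node named lvs-dr1 would look roughly like this (a sketch using this test's values; adjust the name and address per node):

```shell
# /etc/sysconfig/network  (sets the hostname at boot)
#   NETWORKING=yes
#   HOSTNAME=lvs-dr1
#
# /etc/hosts  (lets local tools resolve the name)
#   127.0.0.1      localhost.localdomain localhost
#   192.168.1.210  lvs-dr1
#
# Reboot afterwards, or apply immediately with:
#   hostname lvs-dr1
```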

2. Install ipvsadm and keepalived on lvs-dr1 and lvs-dr2, as follows:

⑴ Install ipvsadm. After downloading ipvsadm-1.24.tar.gz from the official site:

[root@lvs-dr1 ~]# tar zxvf ipvsadm-1.24.tar.gz  # unpack the ipvsadm tarball #

[root@lvs-dr1 ~]# ln -s /usr/src/kernels/2.6.18-53.el5-i686 /usr/src/linux # create a symlink to the kernel source directory #

[root@lvs-dr1 ~]# ls -l /usr/src/  # check the symlink #

total 16
drwxr-xr-x 3 root root 4096 Jun 24 17:28 kernels
lrwxrwxrwx 1 root root   35 Aug  3 18:35 linux -> /usr/src/kernels/2.6.18-53.el5-i686
drwxr-xr-x 7 root root 4096 Jun 24 17:27 redhat

[root@lvs-dr1 ~]# cd ipvsadm-1.24

[root@lvs-dr1 ipvsadm-1.24]# make && make install  # build and install #

[root@lvs-dr1 ipvsadm-1.24]# find / -name ipvsadm  # locate ipvsadm #

/etc/rc.d/init.d/ipvsadm
/root/ipvsadm-1.24/ipvsadm
/sbin/ipvsadm

 ⑵ Install keepalived. After downloading keepalived-1.1.15.tar.gz from the official site:

[root@lvs-dr1 ~]# tar zxvf keepalived-1.1.15.tar.gz  # unpack the keepalived tarball #

[root@lvs-dr1 ~]# cd keepalived-1.1.15

[root@lvs-dr1 keepalived-1.1.15]# ./configure  # best to keep the default options and not set a prefix, which can cause problems #

............... output omitted ...............

Keepalived configuration
------------------------
Keepalived version       : 1.1.15
Compiler                 : gcc
Compiler flags           : -g -O2
Extra Lib                : -lpopt -lssl -lcrypto
Use IPVS Framework       : Yes
IPVS sync daemon support : Yes
Use VRRP Framework       : Yes
Use LinkWatch            : No
Use Debug flags          : No

............... end ...............

[root@lvs-dr1 keepalived-1.1.15]# make && make install  # build and install #

[root@lvs-dr1 keepalived-1.1.15]# find / -name keepalived  # locate keepalived #

/usr/local/etc/sysconfig/keepalived

/usr/local/etc/keepalived

/usr/local/etc/rc.d/init.d/keepalived

/usr/local/sbin/keepalived

/root/keepalived-1.1.15/bin/keepalived

/root/keepalived-1.1.15/keepalived

/root/keepalived-1.1.15/keepalived/etc/keepalived

 3. Install and configure apache and cronolog on lvs-web1 and lvs-web2.

⑴ Install cronolog. After downloading cronolog-1.6.2.tar.gz from the official site:

[root@lvs-web1 ~]# tar zxvf cronolog-1.6.2.tar.gz

[root@lvs-web1 ~]# cd cronolog-1.6.2

[root@lvs-web1 cronolog-1.6.2]# ./configure

[root@lvs-web1 cronolog-1.6.2]# make

[root@lvs-web1 cronolog-1.6.2]# make install

[root@lvs-web1 cronolog-1.6.2]# find / -name cronolog

/usr/local/sbin/cronolog

⑵ Install apache. After downloading httpd-2.2.9.tar.gz from the official site:

① Add the user and group that the httpd service will run as

[root@lvs-web1 ~]# groupadd -g 500 hjw

[root@lvs-web1 ~]# useradd -g 500 -u 500 hjw

[root@lvs-web1 ~]# grep hjw /etc/passwd  # check the passwd file #

hjw:x:500:500::/home/hjw:/bin/bash

# The shell was not set to /sbin/nologin, but since the account has no password nobody can log in with it; hjw can only be used as a service account #

[root@lvs-web1 ~]# grep hjw /etc/group  # check the group file #

hjw:x:500:

② Install apache:

[root@lvs-web1 ~]# tar zxvf httpd-2.2.9.tar.gz

[root@lvs-web1 ~]# cd httpd-2.2.9

[root@lvs-web1 httpd-2.2.9]# ./configure --prefix=/usr/local/apache2 --disable-option-checking --enable-cache --enable-disk-cache --enable-mem-cache --enable-rewrite=shared

# A quick explanation of the options:

# --prefix=/usr/local/apache2  install apache under this path

# --disable-option-checking    do not fail on unrecognized options

# --enable-cache               enable caching

# --enable-disk-cache          enable the disk cache

# --enable-mem-cache           enable the memory cache

# --enable-rewrite=shared      rewrite URL requests on the fly (built as a DSO)

[root@lvs-web1 httpd-2.2.9]# make && make install

[root@lvs-web1 ~]# apachectl -t -D DUMP_MODULES  # list the modules apache has loaded #

Loaded Modules:

............... output omitted ...............

 cache_module (static)       # static: compiled into the httpd binary rather than loaded dynamically #
 disk_cache_module (static)
 mem_cache_module (static)

............... output omitted ...............

 so_module (static)          # lets Apache load DSO modules at start/restart without recompiling; Apache 2.2.9 compiles the so module into httpd by default #
 rewrite_module (shared)     # shared: a DSO (Dynamic Shared Object) loaded dynamically; for more on DSO see http://beacon.blog.51cto.com/442731/94711 #

Syntax OK

............... end ...............

[root@lvs-web1 ~]# vi /usr/local/apache2/conf/httpd.conf  # edit the configuration file #

............... omitted ...............

User daemon   # change to hjw
Group daemon  # change to hjw

............... omitted ...............

#ServerName www.example.com:80

ServerName 127.0.0.1  # add this line

............... omitted ...............

<Directory />

    Options FollowSymLinks

    AllowOverride None

    Order deny,allow

    Deny from all    # change to: Allow from all

</Directory>

............... omitted ...............

# Virtual hosts
#Include conf/extra/httpd-vhosts.conf  # remove the leading comment mark from this line

............... end ...............

 [root@lvs-web1 ~]# vi /usr/local/apache2/conf/extra/httpd-vhosts.conf  # edit the virtual host file #

............... omitted ...............

# Add virtual hosts as follows

<VirtualHost *:80>

#    ServerAdmin webmaster@dummy-host2.example.com

    DocumentRoot "/usr/local/apache2/webapps"

    DirectoryIndex index.html

#    ServerName dummy-host2.example.com

#    ErrorLog "logs/dummy-host2.example.com-error_log"

#    CustomLog "logs/dummy-host2.example.com-access_log" common

</VirtualHost>

 <VirtualHost *:8090>

#    ServerAdmin webmaster@dummy-host2.example.com

    DocumentRoot "/usr/local/apache2/webapps"

    DirectoryIndex index1.html

#    ServerName dummy-host2.example.com

#    ErrorLog "logs/dummy-host2.example.com-error_log"

#    CustomLog "logs/dummy-host2.example.com-access_log" common

</VirtualHost>

............... end ...............

From a client, browse directly to http://192.168.1.216


and to http://192.168.1.216:8090



Add the following to /etc/rc.local so apache starts at boot:

[root@lvs-web1 extra]# more /etc/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local
/usr/local/apache2/bin/apachectl start
# shell end

 4. Use an ipvs script so that a single LVS director forwards requests to the two web servers.

⑴ lvs-dr1 / the LVS director

① The ipvs script

[root@lvs-dr1 bin]# pwd
/usr/local/bin  # path of the ipvs script on the director #

[root@lvs-dr1 bin]# ll lvsdr

-rwxr-xr-x 1 root root 1163 Aug 15 22:37 lvsdr
# the script must be executable; grant execute permission with chmod 755 lvsdr #

[root@lvs-dr1 bin]# more lvsdr

#!/bin/bash

# 2008-08-19 by hjw

RIP1=192.168.1.216
RIP2=192.168.1.217

VIP1=192.168.1.215
VIP2=192.168.1.220

. /etc/rc.d/init.d/functions

case "$1" in

start)

echo "start LVS of DirectorServer"

# set the Virtual IP Address and sysctl parameter
/sbin/ifconfig eth0:0 $VIP1 broadcast $VIP1 netmask 255.255.255.255 up
/sbin/ifconfig eth0:1 $VIP2 broadcast $VIP2 netmask 255.255.255.255 up
/sbin/route add -host $VIP1 dev eth0:0
/sbin/route add -host $VIP2 dev eth0:1
echo "1" >/proc/sys/net/ipv4/ip_forward

# Clear the IPVS table
/sbin/ipvsadm -C

# set LVS
# Web Apache
/sbin/ipvsadm -A -t $VIP1:80 -s wlc -p 800
/sbin/ipvsadm -a -t $VIP1:80 -r $RIP1:80 -g
/sbin/ipvsadm -a -t $VIP1:80 -r $RIP2:80 -w 3 -g

/sbin/ipvsadm -A -t $VIP2:8090 -s wlc -p 1800
/sbin/ipvsadm -a -t $VIP2:8090 -r $RIP1:8090 -g
/sbin/ipvsadm -a -t $VIP2:8090 -r $RIP2:8090 -w 3 -g

# Run LVS
/sbin/ipvsadm

;;

stop)

echo "close LVS Directorserver"
echo "0" >/proc/sys/net/ipv4/ip_forward
/sbin/ipvsadm -C
/sbin/ifconfig eth0:0 down
/sbin/ifconfig eth0:1 down

;;

*)

echo "Usage: $0 {start|stop}"

exit 1

;;

esac

# shell end

② lvs-dr1's network configuration

[root@lvs-dr1 bin]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:F4:5B:1C

            inet addr:192.168.1.210  Bcast:192.168.1.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fef4:5b1c/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:3148 errors:0 dropped:0 overruns:0 frame:0

          TX packets:430 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:257945 (251.8 KiB)  TX bytes:59872 (58.4 KiB)

          Interrupt:177 Base address:0x1080  

eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:F4:5B:1C

             inet addr:192.168.1.215  Bcast:192.168.1.215  Mask:255.255.255.255

             UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

             Interrupt:177 Base address:0x1080  

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:F4:5B:1C

            inet addr:192.168.1.220  Bcast:192.168.1.220  Mask:255.255.255.255

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          Interrupt:177 Base address:0x1080  

lo        Link encap:Local Loopback

            inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:48 errors:0 dropped:0 overruns:0 frame:0

          TX packets:48 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:4000 (3.9 KiB)  TX bytes:4000 (3.9 KiB)


 ③ Add the following to /etc/rc.local so the script runs at boot

[root@lvs-dr1 ~]# more /etc/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/usr/local/bin/lvsdr start

# shell end

 ⑵ The web servers / lvs-web1 and lvs-web2

① The ipvs script

[root@lvs-web1 bin]# pwd
/usr/local/bin  # path of the ipvs script on the web servers #

[root@lvs-web1 bin]# ll lvs

-rwxr-xr-x 1 root root 874 Aug 15 05:18 lvs

# the script must be executable; grant execute permission with chmod 755 lvs #

[root@lvs-web1 bin]# more lvs

#!/bin/bash

# description: start realserver
# chkconfig: 235 26 26

# 2008-08-19 by hjw

VIP1=192.168.1.215
VIP2=192.168.1.220

. /etc/rc.d/init.d/functions

case "$1" in

start)

echo "start LVS of REALServer"

/sbin/ifconfig lo:0 $VIP1 broadcast $VIP1 netmask 255.255.255.255 up
/sbin/ifconfig lo:1 $VIP2 broadcast $VIP2 netmask 255.255.255.255 up

echo "1" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "1" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "2" >/proc/sys/net/ipv4/conf/all/arp_announce

;;

stop)

/sbin/ifconfig lo:0 down
/sbin/ifconfig lo:1 down

echo "close LVS REALServer"
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/lo/arp_announce
echo "0" >/proc/sys/net/ipv4/conf/all/arp_ignore
echo "0" >/proc/sys/net/ipv4/conf/all/arp_announce

;;

*)

echo "Usage: $0 {start|stop}"

exit 1

;;

esac

# shell end
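The four echo lines in the start branch are what keep the real servers from answering ARP for the VIPs configured on lo, so only the director responds for the VIP. The same settings can be made persistent across reboots via /etc/sysctl.conf (a sketch of an equivalent, not something the original setup used):

```shell
# Equivalent persistent form: append to /etc/sysctl.conf and load with
#   sysctl -p
#
#   net.ipv4.conf.lo.arp_ignore = 1
#   net.ipv4.conf.lo.arp_announce = 2
#   net.ipv4.conf.all.arp_ignore = 1
#   net.ipv4.conf.all.arp_announce = 2
```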

② lvs-web1's network configuration (lvs-web2 omitted)

[root@lvs-web1 bin]# ifconfig

eth0      Link encap:Ethernet  HWaddr 00:0C:29:8C:52:7E

            inet addr:192.168.1.216  Bcast:192.168.1.255  Mask:255.255.255.0

          inet6 addr: fe80::20c:29ff:fe8c:527e/64 Scope:Link

          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1

          RX packets:3671 errors:0 dropped:0 overruns:0 frame:0

          TX packets:475 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:1000

          RX bytes:301398 (294.3 KiB)  TX bytes:65170 (63.6 KiB)

          Interrupt:177 Base address:0x1080  

lo        Link encap:Local Loopback

            inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:16436  Metric:1

          RX packets:12 errors:0 dropped:0 overruns:0 frame:0

          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0

          RX bytes:904 (904.0 b)  TX bytes:904 (904.0 b)

lo:0    Link encap:Local Loopback

          inet addr:192.168.1.215  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:16436  Metric:1 

lo:1      Link encap:Local Loopback

            inet addr:192.168.1.220  Mask:255.255.255.255

          UP LOOPBACK RUNNING  MTU:16436  Metric:1 

③ Add the following to /etc/rc.local so the script runs at boot

[root@lvs-web1 ~]# more /etc/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

/usr/local/apache2/bin/apachectl start

/usr/local/bin/lvs start &

# shell end

 ⑶ Test forwarding

① From a client browser, visit http://192.168.1.215 and watch the director's forwarding state with ipvsadm -l:

[root@lvs-dr1 bin]# watch ipvsadm -l

Check lvs-web1's state:

[root@lvs-web1 bin]# netstat -an|grep 80

[root@lvs-web1 bin]# netstat -an|grep 80|wc -l
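Note that `grep 80` also matches any address, port, or counter that merely contains "80". A stricter count of established port-80 connections anchors on the local-address column instead (a sketch; the field positions assume the usual `netstat -an` TCP layout):

```shell
# Count ESTABLISHED connections whose local port is 80.
# Reads `netstat -an` output on stdin; field 4 is the local address,
# field 6 the TCP state.
count_http() {
    awk '$4 ~ /:80$/ && $6 == "ESTABLISHED" { n++ } END { print n+0 }'
}

# Usage: netstat -an | count_http
```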


Check lvs-web2's state:


The page returned to the client:



Because lvs-web2 has the larger weight, the client received the requested page from lvs-web2, which shows the whole system is working.
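The weight effect seen here comes from the wlc (weighted least-connection) scheduler: each new connection goes to the real server with the smallest connections-to-weight ratio, so the weight-3 server takes roughly three times the traffic. A toy sketch of that decision (integer-scaled to avoid floating point; not LVS's actual kernel code):

```shell
# Pick a server the way wlc does: lowest active_conns/weight ratio wins.
# Arguments are "name:active_conns:weight" triples; prints the chosen name.
pick_wlc() {
    best=""; best_ratio=0
    for s in "$@"; do
        name=${s%%:*}; rest=${s#*:}
        conns=${rest%%:*}; weight=${rest#*:}
        ratio=$(( conns * 1000 / weight ))   # scale to keep integer math
        if [ -z "$best" ] || [ "$ratio" -lt "$best_ratio" ]; then
            best=$name; best_ratio=$ratio
        fi
    done
    echo "$best"
}

# With equal connection counts, the weight-3 server wins:
pick_wlc lvs-web1:10:1 lvs-web2:10:3   # prints: lvs-web2
```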

 ② Simulate a web server failure by disconnecting lvs-web2's network. ipvsadm -l now shows 4 inactive connections waiting to be forwarded.


In the end, because lvs-web2's web service is unavailable and cannot respond to client requests, the browser shows a "page cannot be displayed" error.

5. Use keepalived so that a single LVS director forwards requests to the two web servers, while the director monitors the web servers' state and adds/removes web nodes automatically.

⑴ After installing keepalived on the LVS director, stop the original lvsdr script and adjust the keepalived configuration file as needed; the web servers' script and network configuration need no changes at all.

① The keepalived configuration

[root@lvs-dr1 keepalived]# pwd
/usr/local/etc/keepalived # path of the keepalived configuration file #

[root@lvs-dr1 keepalived]# ll keepalived.conf

-rw-r--r-- 1 root root 1627 Aug 16 09:13 keepalived.conf  # keepalived configuration file name #

[root@lvs-dr1 keepalived]# more keepalived.conf

! Configuration File for keepalived

global_defs {

  # notification_email {

  #   acassen@firewall.loc

  #   failover@firewall.loc

  #   sysadmin@firewall.loc

  # }

  # notification_email_from Alexandre.Cassen@firewall.loc

  # smtp_server 192.168.200.1

  # smtp_connect_timeout 30

   router_id LVS_DEVEL
}

# 2008-08-19 by hjw
# VIP1
vrrp_instance VI_1 {

    state MASTER

    interface eth0

    virtual_router_id 51

    priority 200

    advert_int 1

    authentication {

        auth_type PASS

        auth_pass 1111
    }

    virtual_ipaddress {

        192.168.1.215

        192.168.1.220

    }
}

 virtual_server 192.168.1.215 80 {

    delay_loop 6

    lb_algo wlc

    lb_kind DR

    persistence_timeout 50

    protocol TCP

     real_server 192.168.1.216 80 {

        weight 1

        TCP_CHECK {

        connect_timeout 3

        nb_get_retry 3

        delay_before_retry 3

        connect_port 80

        }
    }

    real_server 192.168.1.217 80 {

        weight 3

        TCP_CHECK {

        connect_timeout 3

        nb_get_retry 3

        delay_before_retry 3

        connect_port 80

        }

    }
}

 virtual_server 192.168.1.220 8090 {

    delay_loop 6

    lb_algo wlc

    lb_kind DR

    persistence_timeout 50

    protocol TCP

     real_server 192.168.1.216 8090 {

        weight 1

        TCP_CHECK {

        connect_timeout 3

        nb_get_retry 3

        delay_before_retry 3

        connect_port 8090

        }
    }

    real_server 192.168.1.217 8090 {

        weight 3

        TCP_CHECK {

        connect_timeout 3

        nb_get_retry 3

        delay_before_retry 3

        connect_port 8090

        }

    }
}

# shell end

 ② Add the following to /etc/rc.local so keepalived starts at boot

[root@lvs-dr1 ~]# more /etc/rc.local

#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.

touch /var/lock/subsys/local

#/usr/local/bin/lvsdr start

/usr/local/sbin/keepalived -D -f /usr/local/etc/keepalived/keepalived.conf

# shell end

 ⑵ Request-forwarding test: forwarding was already tested in step 4, so it is omitted here.

 ⑶ Automatic node removal/addition test: this requires simulating web server failures, including stopping the service by hand and cutting the network (equivalent to a crash). lvs-web2 is used as the example.

① Stop the httpd service on lvs-web2 by hand

Watch the forwarding state on director lvs-dr1. Roughly 2-3 seconds after the service stops on lvs-web2, keepalived removes the node from the forwarding table.


From then on, user requests are forwarded only to lvs-web1.

Within 5 seconds of restarting httpd, lvs-web2 is added back to the forwarding table; requests can again go to both lvs-web1 and lvs-web2, but since web2 has the larger weight, the next request to http://192.168.1.215 is forwarded to lvs-web2.

 ② Cut the network


The director's health checks to lvs-web2 now fail (the configured TCP_CHECK connection attempts go unanswered), so it concludes that lvs-web2 is gone and removes it from the forwarding table. lvs-dr1 keeps probing the real_server entries as configured, and once lvs-web2 is back online and serving web requests again, dr1 re-adds web2 to the table. The removal and re-addition here are exactly the same as in the stopped-httpd case above, so they are not repeated. All of this happens purely on the server side and is completely transparent to users, who never notice what the servers are going through.
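Both failure modes are caught the same way because keepalived's TCP_CHECK simply tries to open a TCP connection to the real server's port and gives up after connect_timeout. A rough shell equivalent of one probe (a sketch using nc; keepalived does not actually shell out to anything):

```shell
# One TCP_CHECK-style probe: succeed iff a TCP connection to host:port
# can be opened within the timeout (seconds). Assumes an nc that
# supports -z (scan) and -w (timeout).
tcp_check() {
    host=$1; port=$2; timeout=${3:-3}
    nc -z -w "$timeout" "$host" "$port" >/dev/null 2>&1
}

# Usage: tcp_check 192.168.1.217 80 3 && echo up || echo down
```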


 6. To avoid a single point of failure at the LVS director, add lvs-dr2 to the cluster and use keepalived to run the directors as master/backup.

⑴ The keepalived configuration file

[root@lvs-dr1 keepalived]# pwd
/usr/local/etc/keepalived  # location of the keepalived configuration file #

[root@lvs-dr1 keepalived]# ll keepalived.conf

-rw-r--r-- 1 root root 1624 Aug 16 23:19 keepalived.conf

[root@lvs-dr1 keepalived]# more keepalived.conf

The file is the same as the one shown in step 5 except for two lines in vrrp_instance VI_1, which differ between the two directors:

    state MASTER      # BACKUP on lvs-dr2 #
    priority 200      # 100 on lvs-dr2 #

⑵ lvs-dr1 booting, lvs-dr2 powered off

Server     State        Log
lvs-dr1    booting
lvs-dr2    powered off

lvs-dr1's log after booting:

............... omitted ...............
Aug 17 00:53:40 lvs-dr1 Keepalived: Starting Keepalived v1.1.15 (08/04,2008)
Aug 17 00:53:40 lvs-dr1 Keepalived: Starting Healthcheck child process, pid=1671  # start the Healthcheck process #
Aug 17 00:53:40 lvs-dr1 Keepalived: Starting VRRP child process, pid=1681  # start the VRRP child process #
Aug 17 00:53:40 lvs-dr1 Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Aug 17 00:53:40 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.210 added
Aug 17 00:53:40 lvs-dr1 Keepalived_vrrp: Registering Kernel netlink reflector
Aug 17 00:53:40 lvs-dr1 Keepalived_vrrp: Registering Kernel netlink command channel
Aug 17 00:53:40 lvs-dr1 Keepalived_vrrp: Registering gratutious ARP shared channel
Aug 17 00:53:41 lvs-dr1 kernel: IPVS: Registered protocols (TCP, UDP, AH, ESP)
Aug 17 00:53:41 lvs-dr1 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Aug 17 00:53:41 lvs-dr1 kernel: IPVS: ipvs loaded.
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.210 added
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Registering Kernel netlink reflector
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Registering Kernel netlink command channel
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 00:53:41 lvs-dr1 Keepalived_vrrp: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Configuration is using : 16607 Bytes
Aug 17 00:53:41 lvs-dr1 Keepalived_vrrp: Configuration is using : 35534 Bytes
Aug 17 00:53:41 lvs-dr1 kernel: IPVS: [wlc] scheduler registered.
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:80]
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:80]
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:8090]
Aug 17 00:53:41 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:8090]
Aug 17 00:53:41 lvs-dr1 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(9,10)]
Aug 17 00:53:42 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE  # dr1 becomes MASTER #
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 17 00:53:43 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.215 added
Aug 17 00:53:43 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.220 added
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215  # announce 192.168.1.215 on eth0 via ARP #
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.215 added
Aug 17 00:53:43 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.220 added
Aug 17 00:53:48 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215
Aug 17 00:53:48 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
............... end ...............

⑶ lvs-dr1 running, lvs-dr2 booting — lvs-dr2's log

Server     State      Log
lvs-dr1    running
lvs-dr2    booting

lvs-dr2's boot log:

............... omitted ...............
Aug 17 03:49:21 lvs-dr2 Keepalived: Starting Keepalived v1.1.15 (08/04,2008)
Aug 17 03:49:21 lvs-dr2 Keepalived: Starting Healthcheck child process, pid=1656
Aug 17 03:49:21 lvs-dr2 Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Aug 17 03:49:21 lvs-dr2 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.211 added
Aug 17 03:49:21 lvs-dr2 Keepalived_vrrp: Registering Kernel netlink reflector
Aug 17 03:49:21 lvs-dr2 Keepalived_vrrp: Registering Kernel netlink command channel
Aug 17 03:49:21 lvs-dr2 Keepalived_vrrp: Registering gratutious ARP shared channel
Aug 17 03:49:21 lvs-dr2 Keepalived: Starting VRRP child process, pid=1658
Aug 17 03:49:22 lvs-dr2 kernel: IPVS: Registered protocols (TCP, UDP, AH, ESP)
Aug 17 03:49:22 lvs-dr2 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Aug 17 03:49:22 lvs-dr2 kernel: IPVS: ipvs loaded.
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.211 added
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Registering Kernel netlink reflector
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Registering Kernel netlink command channel
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 03:49:22 lvs-dr2 Keepalived_vrrp: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 03:49:22 lvs-dr2 Keepalived_healthcheckers: Configuration is using : 16607 Bytes
Aug 17 03:49:22 lvs-dr2 Keepalived_vrrp: Configuration is using : 35534 Bytes
Aug 17 03:49:23 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE  # dr2 becomes BACKUP #
Aug 17 03:49:23 lvs-dr2 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(9,10)]
Aug 17 03:49:23 lvs-dr2 kernel: IPVS: [wlc] scheduler registered.
Aug 17 03:49:23 lvs-dr2 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:80]
Aug 17 03:49:23 lvs-dr2 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:80]
Aug 17 03:49:23 lvs-dr2 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:8090]
Aug 17 03:49:23 lvs-dr2 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:8090]
............... end ...............

⑷ lvs-dr1 shut down, lvs-dr2 running — lvs-dr2's log

Server     State      Log
lvs-dr1    shut down
lvs-dr2    running

lvs-dr2's log:

............... omitted ...............
Aug 17 03:52:11 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE  # dr2 becomes MASTER #
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.215 added
Aug 17 03:52:12 lvs-dr2 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.220 added
Aug 17 03:52:12 lvs-dr2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.215 added
Aug 17 03:52:12 lvs-dr2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.220 added
Aug 17 03:52:17 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215
Aug 17 03:52:17 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
............... end ...............

⑸ lvs-dr1 booting again, lvs-dr2 running — logs from lvs-dr1 and lvs-dr2

Server     State      Log
lvs-dr1    booting
lvs-dr2    running

lvs-dr1's boot log:

............... omitted ...............
Aug 17 01:01:32 lvs-dr1 Keepalived: Starting Keepalived v1.1.15 (08/04,2008)
Aug 17 01:01:32 lvs-dr1 Keepalived: Starting Healthcheck child process, pid=1688
Aug 17 01:01:32 lvs-dr1 Keepalived: Starting VRRP child process, pid=1690
Aug 17 01:01:32 lvs-dr1 Keepalived_vrrp: Using MII-BMSR NIC polling thread...
Aug 17 01:01:32 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.210 added
Aug 17 01:01:32 lvs-dr1 Keepalived_vrrp: Registering Kernel netlink reflector
Aug 17 01:01:32 lvs-dr1 Keepalived_vrrp: Registering Kernel netlink command channel
Aug 17 01:01:32 lvs-dr1 Keepalived_vrrp: Registering gratutious ARP shared channel
Aug 17 01:01:33 lvs-dr1 kernel: IPVS: Registered protocols (TCP, UDP, AH, ESP)
Aug 17 01:01:33 lvs-dr1 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes)
Aug 17 01:01:33 lvs-dr1 kernel: IPVS: ipvs loaded.
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Using MII-BMSR NIC polling thread...
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.210 added
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Registering Kernel netlink reflector
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Registering Kernel netlink command channel
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 01:01:33 lvs-dr1 Keepalived_vrrp: Opening file '/usr/local/etc/keepalived/keepalived.conf'.
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Configuration is using : 16607 Bytes
Aug 17 01:01:33 lvs-dr1 Keepalived_vrrp: Configuration is using : 35534 Bytes
Aug 17 01:01:33 lvs-dr1 kernel: IPVS: [wlc] scheduler registered.
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:80]
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:80]
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.216:8090]
Aug 17 01:01:33 lvs-dr1 Keepalived_healthcheckers: Activating healtchecker for service [192.168.1.217:8090]
Aug 17 01:01:33 lvs-dr1 Keepalived_vrrp: VRRP sockpool: [ifindex(2), proto(112), fd(9,10)]
Aug 17 01:01:34 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Transition to MASTER STATE
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Entering MASTER STATE  # dr1 becomes MASTER again #
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) setting protocol VIPs.
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.215 added
Aug 17 01:01:35 lvs-dr1 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.220 added
Aug 17 01:01:35 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.215 added
Aug 17 01:01:35 lvs-dr1 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.220 added
Aug 17 01:01:40 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.215
Aug 17 01:01:40 lvs-dr1 Keepalived_vrrp: VRRP_Instance(VI_1) Sending gratuitous ARPs on eth0 for 192.168.1.220
............... end ...............

lvs-dr2's log:

............... omitted ...............
Aug 17 03:54:11 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Received higher prio advert  # received an advertisement with higher priority #
Aug 17 03:54:11 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) Entering BACKUP STATE  # dr2 goes back to BACKUP #
Aug 17 03:54:11 lvs-dr2 Keepalived_vrrp: VRRP_Instance(VI_1) removing protocol VIPs.  # unbind the VIPs #
Aug 17 03:54:11 lvs-dr2 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.215 removed
Aug 17 03:54:11 lvs-dr2 Keepalived_vrrp: Netlink reflector reports IP 192.168.1.220 removed
Aug 17 03:54:11 lvs-dr2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.215 removed
Aug 17 03:54:11 lvs-dr2 Keepalived_healthcheckers: Netlink reflector reports IP 192.168.1.220 removed
............... end ...............

Discussion, conclusions, and references will be filled in after the full rollout. To be continued...

V. Test discussion

VI. Test conclusions

VII. References








      This article was reposted from rshare's 51CTO blog; original link: http://blog.51cto.com/1364952/1965468. Please contact the original author before republishing.


