MySQL Read/Write Splitting with Amoeba


温酒斩华佗 2017-09-21 13:25:00

First, a note on how Amoeba differs from MySQL Proxy when used for read/write splitting:

 

With MySQL Proxy 6.0, setting up read/write splitting over read and write clusters with many machines takes a considerable amount of work: there are currently no ready-made Lua scripts for it, and MySQL Proxy has no configuration file at all; Lua scripts are its entire configuration. Lua itself is quite convenient, but a complex setup still means writing a large amount of script. Amoeba, by contrast, only requires the relevant configuration to be filled in.


Suppose the following scenario, with three database nodes named Master, Slave1, and Slave2:

 

Amoeba: Amoeba <192.168.14.129>

Master: Master <192.168.14.131> (read/write)

Slaves: Slave1 <192.168.14.132>, Slave2 <192.168.14.133> (two equivalent databases; read-only, load-balanced)

 

For replication between the master and slave databases, you still need the database's own replication mechanism; Amoeba does not provide replication.

1. Enable master-slave replication on the databases.

 

a. Modify the configuration files

 

master.cnf

server-id = 1  # marks the master

# add a log file, used to verify read/write splitting
log = /home/mysql/mysql/log/mysql.log

slave1.cnf

server-id = 2

# add a log file, used to verify read/write splitting
log = /home/mysql/mysql/log/mysql.log

slave2.cnf

server-id = 3

# add a log file, used to verify read/write splitting
log = /home/mysql/mysql/log/mysql.log

b. On the Master, create two users with replication privileges only. Both are named repl_user with password copy, granted to Slave1 and Slave2 respectively:

mysql> grant replication slave on *.* to repl_user@192.168.14.132 identified by 'copy';

mysql> grant replication slave on *.* to repl_user@192.168.14.133 identified by 'copy';
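The grants can be double-checked on the Master before moving on (a quick sketch; only the REPLICATION SLAVE privilege should appear):

```sql
mysql> show grants for repl_user@192.168.14.132;
mysql> show grants for repl_user@192.168.14.133;
```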

c. Check the Master's status

mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000017 |     2009 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

d. On Slave1 and Slave2, start replication from the Master. Run the following on each:

mysql> stop slave;
Query OK, 0 rows affected (0.02 sec)

mysql> change master to
    -> master_host='192.168.14.131',
    -> master_user='repl_user',
    -> master_password='copy',
    -> master_log_file='mysql-bin.000017',
    -> master_log_pos=2009;
Query OK, 0 rows affected (0.03 sec)

mysql> start slave;
Query OK, 0 rows affected (0.00 sec)
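To confirm that replication is actually running on each slave, check the slave status (a quick sketch):

```sql
mysql> show slave status\G
```

In the output, both `Slave_IO_Running` and `Slave_SQL_Running` should read `Yes`, and `Seconds_Behind_Master` should be a number rather than NULL; otherwise the `Last_Error` field usually explains what went wrong.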

2. Configuring read/write splitting in Amoeba

 

a. Grant Amoeba access on Master, Slave1, and Slave2. Run the following on each of the three:

mysql> grant all on test.* to test_user@192.168.14.129 identified by '1234';

Amoeba accesses all three databases with the same account and password.
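Before wiring up the configuration, it is worth confirming from the Amoeba host (192.168.14.129) that this grant works against each node; a quick sketch using the stock mysql client:

```shell
# run on the Amoeba host; repeat for 192.168.14.132 and 192.168.14.133
mysql -h 192.168.14.131 -u test_user -p1234 test -e 'select 1'
```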

b. Modify Amoeba's configuration files. For a detailed description of the configuration, see the official documentation: http://docs.hexnova.com/amoeba/rw-splitting.html

dbServers.xml

<?xml version="1.0" encoding="gbk"?>  
  
<!DOCTYPE amoeba:dbServers SYSTEM "dbserver.dtd">  
<amoeba:dbServers xmlns:amoeba="http://amoeba.meidusa.com/">  
  
        <!--   
            Each dbServer needs to be configured into a Pool,  
            If you need to configure multiple dbServer with load balancing that can be simplified by the following configuration:  
             add attribute with name virtual = "true" in dbServer, but the configuration does not allow the element with name factoryConfig  
             such as 'multiPool' dbServer     
        -->  
  
    <!-- common part of the database connection configuration -->  
    <dbServer name="abstractServer" abstractive="true">  
        <factoryConfig class="com.meidusa.amoeba.mysql.net.MysqlServerConnectionFactory">  
            <property name="manager">${defaultManager}</property>  
            <property name="sendBufferSize">64</property>  
            <property name="receiveBufferSize">128</property>  
                  
            <!-- mysql port -->  
            <property name="port">3306</property>  
              
            <!-- mysql schema -->  
            <property name="schema">test</property>  
              
            <!-- mysql user -->  
            <property name="user">test_user</property>  
              
            <!--  mysql password -->  
            <property name="password">1234</property>  
              
        </factoryConfig>  
  
        <poolConfig class="com.meidusa.amoeba.net.poolable.PoolableObjectPool">  
            <property name="maxActive">500</property>  
            <property name="maxIdle">500</property>  
            <property name="minIdle">10</property>  
            <property name="minEvictableIdleTimeMillis">600000</property>  
            <property name="timeBetweenEvictionRunsMillis">600000</property>  
            <property name="testOnBorrow">true</property>  
            <property name="testWhileIdle">true</property>  
        </poolConfig>  
    </dbServer>  
  
    <!-- per-server parts for Master, Slave1 and Slave2: only the IP address differs -->  
    <dbServer name="master"  parent="abstractServer">  
        <factoryConfig>  
            <!-- mysql ip -->  
            <property name="ipAddress">192.168.14.131</property>  
        </factoryConfig>  
    </dbServer>  
  
    <dbServer name="slave1"  parent="abstractServer">  
        <factoryConfig>  
            <!-- mysql ip -->  
            <property name="ipAddress">192.168.14.132</property>  
        </factoryConfig>  
    </dbServer>  
  
    <dbServer name="slave2"  parent="abstractServer">  
        <factoryConfig>  
            <!-- mysql ip -->  
            <property name="ipAddress">192.168.14.133</property>  
        </factoryConfig>  
    </dbServer>     
      
    <!-- database pool (virtual server) providing load balancing for reads -->  
    <dbServer name="slaves" virtual="true">  
        <poolConfig class="com.meidusa.amoeba.server.MultipleServerPool">  
            <!-- Load balancing strategy: 1=ROUNDROBIN , 2=WEIGHTBASED , 3=HA-->  
            <property name="loadbalance">1</property>  
              
            <!-- Separated by commas,such as: server1,server2,server1 -->  
            <property name="poolNames">slave1,slave2</property>  
        </poolConfig>  
    </dbServer>  
          
</amoeba:dbServers>  

amoeba.xml

<?xml version="1.0" encoding="gbk"?>  
  
<!DOCTYPE amoeba:configuration SYSTEM "amoeba.dtd">  
<amoeba:configuration xmlns:amoeba="http://amoeba.meidusa.com/">  
  
    <proxy>  
      
        <!-- service class must implements com.meidusa.amoeba.service.Service -->  
        <service name="Amoeba for Mysql" class="com.meidusa.amoeba.net.ServerableConnectionManager">  
            <!-- Amoeba port -->  
            <property name="port">8066</property>  
              
            <!-- bind ipAddress -->  
            <!--  
            <property name="ipAddress">127.0.0.1</property> 
             -->  
              
            <property name="manager">${clientConnectioneManager}</property>  
              
            <property name="connectionFactory">  
                <bean class="com.meidusa.amoeba.mysql.net.MysqlClientConnectionFactory">  
                    <property name="sendBufferSize">128</property>  
                    <property name="receiveBufferSize">64</property>  
                </bean>  
            </property>  
              
            <property name="authenticator">  
                <bean class="com.meidusa.amoeba.mysql.server.MysqlClientAuthenticator">  
                    <!-- Amoeba account and password -->  
                    <property name="user">root</property>  
                      
                    <property name="password">root</property>  
                      
                    <property name="filter">  
                        <bean class="com.meidusa.amoeba.server.IPAccessController">  
                            <property name="ipFile">${amoeba.home}/conf/access_list.conf</property>  
                        </bean>  
                    </property>  
                </bean>  
            </property>  
              
        </service>  
          
        <!-- server class must implements com.meidusa.amoeba.service.Service -->  
        <service name="Amoeba Monitor Server" class="com.meidusa.amoeba.monitor.MonitorServer">  
            <!-- port -->  
            <!--  default value: random number  
            <property name="port">9066</property>  
            -->  
            <!-- bind ipAddress -->  
            <property name="ipAddress">127.0.0.1</property>  
            <property name="daemon">true</property>  
            <property name="manager">${clientConnectioneManager}</property>  
            <property name="connectionFactory">  
                <bean class="com.meidusa.amoeba.monitor.net.MonitorClientConnectionFactory"></bean>  
            </property>  
              
        </service>  
          
        <runtime class="com.meidusa.amoeba.mysql.context.MysqlRuntimeContext">  
            <!-- proxy server net IO Read thread size -->  
            <property name="readThreadPoolSize">20</property>  
              
            <!-- proxy server client process thread size -->  
            <property name="clientSideThreadPoolSize">30</property>  
              
            <!-- mysql server data packet process thread size -->  
            <property name="serverSideThreadPoolSize">30</property>  
              
            <!-- per connection cache prepared statement size  -->  
            <property name="statementCacheSize">500</property>  
              
            <!-- query timeout( default: 60 second , TimeUnit:second) -->  
            <property name="queryTimeout">60</property>  
        </runtime>  
          
    </proxy>  
      
    <!--   
        Each ConnectionManager will start as thread  
        manager responsible for the Connection IO read , Death Detection  
    -->  
    <connectionManagerList>  
        <connectionManager name="clientConnectioneManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">  
            <property name="subManagerClassName">com.meidusa.amoeba.net.ConnectionManager</property>  
            <!--   
              default value is available Processors   
            <property name="processors">5</property>  
             -->  
        </connectionManager>  
        <connectionManager name="defaultManager" class="com.meidusa.amoeba.net.MultiConnectionManagerWrapper">  
            <property name="subManagerClassName">com.meidusa.amoeba.net.AuthingableConnectionManager</property>  
              
            <!--   
              default value is available Processors   
            <property name="processors">5</property>  
             -->  
        </connectionManager>  
    </connectionManagerList>  
      
        <!-- default using file loader -->  
    <dbServerLoader class="com.meidusa.amoeba.context.DBServerConfigFileLoader">  
        <property name="configFile">${amoeba.home}/conf/dbServers.xml</property>  
    </dbServerLoader>  
      
    <queryRouter class="com.meidusa.amoeba.mysql.parser.MysqlQueryRouter">  
          
        <property name="ruleLoader">  
            <bean class="com.meidusa.amoeba.route.TableRuleFileLoader">  
                <property name="ruleFile">${amoeba.home}/conf/rule.xml</property>  
                <property name="functionFile">${amoeba.home}/conf/ruleFunctionMap.xml</property>  
            </bean>  
        </property>  
        <property name="sqlFunctionFile">${amoeba.home}/conf/functionMap.xml</property>  
          
        <property name="LRUMapSize">1500</property>  
        <!-- default pool: the master database -->  
        <property name="defaultPool">master</property>  
          
        <!-- write pool -->  
        <property name="writePool">master</property>  
        <!-- read pool: the virtual server (database pool) configured in dbServers.xml -->  
        <property name="readPool">slaves</property>  
          
        <property name="needParse">true</property>  
    </queryRouter>  
</amoeba:configuration>  

rule.xml

<?xml version="1.0" encoding="gbk"?>  
<!DOCTYPE amoeba:rule SYSTEM "rule.dtd">  
  
<amoeba:rule xmlns:amoeba="http://amoeba.meidusa.com/">  
    <tableRule name="message" schema="test" defaultPools="server1">  
    </tableRule>  
</amoeba:rule> 

When database sharding is not needed, no real rules have to be configured, but the tableRule element must still be present or Amoeba reports an error; an arbitrary empty placeholder rule like the one above is enough.

3. Testing read/write splitting

a. On Master, Slave1, and Slave2, watch the log file mysql.log:

tail -f ./log/mysql.log  

b. Start Amoeba and connect to it with MySQL GUI Tools.
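If MySQL GUI Tools is not at hand, the stock mysql client works just as well; a sketch assuming the port (8066) and account (root/root) configured in amoeba.xml above:

```shell
# connect to Amoeba, not to any MySQL node directly
mysql -h 192.168.14.129 -P 8066 -u root -proot test
```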



Run a few insert and select statements through the Amoeba connection, then check the log contents on each node.
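The exact statements used in the original test are not preserved, but judging from the log output below they were along these lines:

```sql
insert into t_message values(1, 'c1');
insert into t_user values(8, 'n8', 'p8');
select * from t_message;
select * from t_user;
```

The inserts should be routed to the Master (and then reach both slaves through replication), while the selects should land on one of the slaves.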

 

Master  mysql.log

[mysql@prx1 mysql]$ tail -f log/mysql.log  
                  370 Query     SET SESSION sql_mode=''  
                  370 Query     SET NAMES utf8  
                  370 Query     SHOW FULL TABLES  
                  370 Query     SHOW COLUMNS FROM `t_message`  
                  370 Query     SHOW COLUMNS FROM `t_user`  
                  370 Query     SHOW PROCEDURE STATUS  
                  370 Query     SHOW FUNCTION STATUS  
110813 15:21:11   370 Query     SHOW VARIABLES LIKE 'character_set_server'  
                  370 Query     SHOW FULL COLUMNS FROM `test`.`t_message`  
110813 15:21:12   370 Query     SHOW CREATE TABLE `test`.`t_message`  
110813 15:22:40   374 Connect   test_user@192.168.14.129 on test  
                  375 Connect   test_user@192.168.14.129 on test  
                  376 Connect   test_user@192.168.14.129 on test  
110813 15:23:40   370 Query     insert into t_message values(1, 'c1')  
110813 15:24:07   377 Connect   test_user@192.168.14.129 on test  
                  378 Connect   test_user@192.168.14.129 on test  
                  379 Connect   test_user@192.168.14.129 on test  
110813 15:24:15   370 Query     insert into t_user values(8, 'n8', 'p8')  
110813 15:24:24   370 Query     SHOW FULL COLUMNS FROM `test`.`t_user`  
                  370 Query     SHOW CREATE TABLE `test`.`t_user`  
110813 15:24:35   370 Query     SHOW FULL COLUMNS FROM `test`.`t_message`  
                  370 Query     SHOW CREATE TABLE `test`.`t_message`  

 Slave1  mysql.log

[mysql@prx2 mysql]$ tail -f log/mysql.log  
                  317 Connect   test_user@192.168.14.129 on test  
                  318 Connect   test_user@192.168.14.129 on test  
110813 15:35:30   315 Query     SELECT @@sql_mode  
110813 15:35:32   315 Query     SELECT @@sql_mode  
110813 15:35:44   315 Query     SELECT @@SQL_MODE  
110813 15:35:46   315 Query     SELECT @@SQL_MODE  
110813 15:37:18   319 Connect   test_user@192.168.14.129 on test  
                  320 Connect   test_user@192.168.14.129 on test  
110813 15:37:19   321 Connect   test_user@192.168.14.129 on test  
110813 15:37:26   246 Quit  
110813 15:38:21   315 Query     SELECT @@SQL_MODE  
110813 15:38:22    42 Query     BEGIN  
                   42 Query     insert into t_message values(1, 'c1')  
                   42 Query     COMMIT /* implicit, from Xid_log_event */  
110813 15:38:50   322 Connect   test_user@192.168.14.129 on test  
                  323 Connect   test_user@192.168.14.129 on test  
                  324 Connect   test_user@192.168.14.129 on test  
110813 15:38:58    42 Query     BEGIN  
                   42 Query     insert into t_user values(8, 'n8', 'p8')  
                   42 Query     COMMIT /* implicit, from Xid_log_event */  
110813 15:39:08   315 Query     SELECT @@SQL_MODE  
110813 15:39:19   315 Query     SELECT @@SQL_MODE  
110813 15:44:08   325 Connect   test_user@192.168.14.129 on test  
                  326 Connect   test_user@192.168.14.129 on test  
                  327 Connect   test_user@192.168.14.129 on test  

Slave2  mysql.log

[mysql@prx3 mysql]$ tail -f log/mysql.log  
110813 15:35:08   305 Connect   test_user@192.168.14.129 on test  
                  306 Connect   test_user@192.168.14.129 on test  
                  307 Connect   test_user@192.168.14.129 on test  
110813 15:35:31   304 Query     SELECT @@sql_mode  
                  304 Query     SELECT @@sql_mode  
110813 15:35:44   304 Query     SELECT @@SQL_MODE  
110813 15:35:46   304 Query     SELECT * FROM t_message t  
110813 15:37:18   308 Connect   test_user@192.168.14.129 on test  
                  309 Connect   test_user@192.168.14.129 on test  
                  310 Connect   test_user@192.168.14.129 on test  
110813 15:38:21     8 Query     BEGIN  
                    8 Query     insert into t_message values(1, 'c1')  
                    8 Query     COMMIT /* implicit, from Xid_log_event */  
110813 15:38:50   311 Connect   test_user@192.168.14.129 on test  
                  312 Connect   test_user@192.168.14.129 on test  
                  313 Connect   test_user@192.168.14.129 on test  
110813 15:38:58   304 Query     SELECT @@SQL_MODE  
                    8 Query     BEGIN  
                    8 Query     insert into t_user values(8, 'n8', 'p8')  
                    8 Query     COMMIT /* implicit, from Xid_log_event */  
110813 15:39:08   304 Query     select * from t_user  
110813 15:39:19   304 Query     select * from t_message  
110813 15:44:08   314 Connect   test_user@192.168.14.129 on test  
                  315 Connect   test_user@192.168.14.129 on test  
                  316 Connect   test_user@192.168.14.129 on test  

The logs show the following.

In the Master's mysql.log, only the insert statements were executed.

Slave1 only replayed the insert statements, which is the result of master-slave replication.

Slave2 replayed the insert statements and additionally executed the select statements.

This confirms that read/write splitting is working.

Source: http://www.iteye.com/topic/1113437
