MongoDB Sharding Setup

I. Environment

$ cat /etc/redhat-release 
CentOS Linux release 7.0.1406 (Core) 
$ uname -a
Linux zhaopin-2-201 3.10.0-123.el7.x86_64 #1 SMP Mon Jun 30 12:09:22 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
$ mongo --version
MongoDB shell version: 3.0.6

node1: 172.30.2.201
node2: 172.30.2.202
node3: 172.30.2.203

II. Configure the Shard Servers

Run the following on each of the three nodes:

1. Create the directories
$ sudo mkdir -p /data/mongodb/{data/{sh0,sh1},backup/{sh0,sh1},log/{sh0,sh1},conf/{sh0,sh1}}
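
For reference, the brace expansion above creates eight directories (data, backup, log, and conf, each with an sh0 and an sh1 subdirectory). A quick way to confirm they all exist:

$ ls -d /data/mongodb/{data,backup,log,conf}/{sh0,sh1}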

2. Prepare the configuration files

First shard:

$ sudo vim /data/mongodb/conf/sh0/mongodb.conf
# base
port = 27010
maxConns = 800 
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/sh0
pidfilepath = /data/mongodb/data/sh0/mongodb.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log 
logpath = /data/mongodb/log/sh0/mongodb.log
logRotate = rename
logappend = true
slowms = 50
replSet = sh0 
shardsvr = true

Second shard:

$ sudo vim /data/mongodb/conf/sh1/mongodb.conf
# base
port = 27011
maxConns = 800 
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/sh1
pidfilepath = /data/mongodb/data/sh1/mongodb.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log
logpath = /data/mongodb/log/sh1/mongodb.log
logRotate = rename
logappend = true
slowms = 50
replSet = sh1
shardsvr = true

3. Start the shard servers

$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41492
child process started successfully, parent exiting
$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41509
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root     41492  0.5  0.0 518016 54604 ?        Sl   10:09   0:00 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root     41509  0.5  0.0 516988 51824 ?        Sl   10:09   0:00 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
$ mongo --port 27010
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27010/test
>
bye
$ mongo --port 27011
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27011/test
>
bye
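
If you prefer a non-interactive check, pinging each instance should return ok: 1 (a minimal sketch, assuming mongo is on the PATH):

$ for port in 27010 27011; do mongo --port $port --eval 'printjson(db.adminCommand({ping: 1}))'; done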

III. Configure the Config Servers

Run the following on each of the three nodes:

1. Create the directories

$ sudo mkdir -p /data/mongodb/{data/cf0,backup/cf0,log/cf0,conf/cf0}

2. Prepare the configuration file

$ sudo vim /data/mongodb/conf/cf0/config.conf
# base
port = 27000
maxConns = 800
filePermissions = 0700
fork = true
noauth = true
directoryperdb = true
dbpath = /data/mongodb/data/cf0
pidfilepath = /data/mongodb/data/cf0/config.pid
oplogSize = 10
journal = true
# security
nohttpinterface = true
rest = false
# log
logpath = /data/mongodb/log/cf0/config.log
logRotate = rename
logappend = true
slowms = 50
configsvr = true

3. Start the config server

$ sudo /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
about to fork child process, waiting until server is ready for connections.
forked process: 41759
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root     41492  0.3  0.0 518016 54728 ?        Sl   10:09   0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root     41509  0.3  0.0 518016 54760 ?        Sl   10:09   0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
root     41855  0.4  0.0 467828 51684 ?        Sl   10:25   0:03 /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf

IV. Configure the Query Routers

Run the following on each of the three nodes:

1. Create the directories

$ sudo mkdir -p /data/mongodb/{data/ms0,backup/ms0,log/ms0,conf/ms0}

2. Prepare the configuration file

$ sudo vim /data/mongodb/conf/ms0/mongos.conf
# base
port = 30000
maxConns = 800
filePermissions = 0700
fork = true
pidfilepath = /data/mongodb/data/ms0/mongos.pid
# log
logpath = /data/mongodb/log/ms0/mongos.log
logRotate = rename
logappend = true
configdb = 172.30.2.201:27000,172.30.2.202:27000,172.30.2.203:27000

3. Start mongos

$ sudo /opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf
about to fork child process, waiting until server is ready for connections.
forked process: 42233
child process started successfully, parent exiting
$ ps aux | grep mongo | grep -v grep
root     41492  0.3  0.0 518016 54728 ?        Sl   10:09   0:06 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
root     41509  0.3  0.0 518016 54760 ?        Sl   10:09   0:07 /opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
root     41855  0.4  0.0 546724 37812 ?        Sl   10:25   0:03 /opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
root     41870  0.4  0.0 546724 38188 ?        Sl   10:25   0:03 /opt/mongodb/bin/mongod --conf
root     42233  0.5  0.0 233536 10188 ?        Sl   10:38   0:00 /opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf
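
At this point mongos is running but no shards have been registered yet; you can already connect through it and print the (still empty) sharding status, for example:

$ mongo --port 30000 --eval 'sh.status()'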

V. Initialize the Replica Sets

Replica sets are configured for high availability (in my case a single member per shard would have been enough just to save time; members can be added later exactly as with any ordinary replica set, and the sharding configuration does not need to change). The initialization can be run from any node; here it is done on node1.
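
As noted, members can be added later with the ordinary replica-set commands. For example, to add a hypothetical fourth node to sh0 (the host below is only an illustration; run the command against the current primary):

$ mongo --port 27010 --eval 'rs.add("172.30.2.204:27010")'   # hypothetical new member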

Shard one:

$ mongo --port 27010
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27010/test
> use admin
switched to db admin
> cfg={_id:"sh0", members:[ {_id:0,host:"172.30.2.201:27010"}, {_id:1,host:"172.30.2.202:27010"}, {_id:2,host:"172.30.2.203:27010"} ] }
{
        "_id" : "sh0",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.30.2.201:27010"
                },
                {
                        "_id" : 1,
                        "host" : "172.30.2.202:27010"
                },
                {
                        "_id" : 2,
                        "host" : "172.30.2.203:27010"
                }
        ]
}
> rs.initiate( cfg );
{ "ok" : 1 }
sh0:OTHER> rs.status()
{
        "set" : "sh0",
        "date" : ISODate("2015-10-23T05:33:31.920Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.30.2.201:27010",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 270,
                        "optime" : Timestamp(1445578404, 1),
                        "optimeDate" : ISODate("2015-10-23T05:33:24Z"),
                        "electionTime" : Timestamp(1445578408, 1),
                        "electionDate" : ISODate("2015-10-23T05:33:28Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "172.30.2.202:27010",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 7,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-23T05:33:30.289Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-23T05:33:30.295Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.30.2.203:27010",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 7,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-23T05:33:30.289Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-23T05:33:30.293Z"),
                        "pingMs" : 1,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
sh0:PRIMARY>
bye

Shard two:

$ mongo --port 27011
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:27011/test
> use admin
switched to db admin
> cfg={_id:"sh1", members:[ {_id:0,host:"172.30.2.201:27011"}, {_id:1,host:"172.30.2.202:27011"}, {_id:2,host:"172.30.2.203:27011"} ] }
{
        "_id" : "sh1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "172.30.2.201:27011"
                },
                {
                        "_id" : 1,
                        "host" : "172.30.2.202:27011"
                },
                {
                        "_id" : 2,
                        "host" : "172.30.2.203:27011"
                }
        ]
}
> rs.initiate( cfg );
{ "ok" : 1 }
sh1:OTHER> rs.status();
{
        "set" : "sh1",
        "date" : ISODate("2015-10-23T05:36:02.365Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "172.30.2.201:27011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 406,
                        "optime" : Timestamp(1445578557, 1),
                        "optimeDate" : ISODate("2015-10-23T05:35:57Z"),
                        "electionTime" : Timestamp(1445578561, 1),
                        "electionDate" : ISODate("2015-10-23T05:36:01Z"),
                        "configVersion" : 1,
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "172.30.2.202:27011",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 5,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-23T05:36:01.168Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-23T05:36:01.175Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                },
                {
                        "_id" : 2,
                        "name" : "172.30.2.203:27011",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 5,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-10-23T05:36:01.167Z"),
                        "lastHeartbeatRecv" : ISODate("2015-10-23T05:36:01.172Z"),
                        "pingMs" : 0,
                        "configVersion" : 1
                }
        ],
        "ok" : 1
}
sh1:PRIMARY>
bye
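
The secondaries report STARTUP2 while their initial sync is still running; once it finishes they switch to SECONDARY. A quick way to re-check the member states without an interactive shell (sketch):

$ mongo --port 27011 --eval 'rs.status().members.forEach(function(m) { print(m.name + "  " + m.stateStr); })'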

VI. Configure Sharding

$ mongo --port 30000
MongoDB shell version: 3.0.6
connecting to: 127.0.0.1:30000/test
mongos> use admin;
switched to db admin
mongos> sh.addShard("sh0/172.30.2.201:27010,172.30.2.202:27010,172.30.2.203:27010");
{ "shardAdded" : "sh0", "ok" : 1 }
mongos> sh.addShard("sh1/172.30.2.201:27011,172.30.2.202:27011,172.30.2.203:27011");
{ "shardAdded" : "sh1", "ok" : 1 }
mongos> use mydb;
switched to db mydb
mongos> db.createCollection("test");
{
        "ok" : 1,
        "$gleStats" : {
                "lastOpTime" : Timestamp(1444358911, 1),
                "electionId" : ObjectId("56172a4bc03d9b1667f8e928")
        }
}
mongos> sh.enableSharding("mydb");
{ "ok" : 1 }
mongos> sh.shardCollection("mydb.test", {"_id":1});
{ "collectionsharded" : "mydb.test", "ok" : 1 }
mongos> sh.status();
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "minCompatibleVersion" : 5,
        "currentVersion" : 6,
        "clusterId" : ObjectId("561728b4030ea038bcb57fa0")
}
  shards:
        {  "_id" : "sh0",  "host" : "sh0/172.30.2.201:27010,172.30.2.202:27010,172.30.2.203:27010" }
        {  "_id" : "sh1",  "host" : "sh1/172.30.2.201:27011,172.30.2.202:27011,172.30.2.203:27011" }
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "mydb",  "partitioned" : true,  "primary" : "sh0" }
                mydb.test
                        shard key: { "_id" : 1 }
                        chunks:
                                sh0     1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : sh0 Timestamp(1, 0)

As the output above shows, sharding is now configured.
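
To see the shard key in action, you can insert some test data through mongos and check how it is distributed. A minimal sketch (with the default 64 MB chunk size, a small data set will usually stay in a single chunk on sh0 until it grows enough to split and migrate):

$ mongo --port 30000
mongos> use mydb
mongos> for (var i = 0; i < 100000; i++) { db.test.insert({n: i, pad: "padding-" + i}) }
mongos> db.test.getShardDistribution()
mongos> sh.status()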

VII. Add Startup Entries

$ sudo vim /etc/rc.local
ulimit -SHn 65535
/opt/mongodb/bin/mongod --config /data/mongodb/conf/sh0/mongodb.conf
/opt/mongodb/bin/mongod --config /data/mongodb/conf/sh1/mongodb.conf
/opt/mongodb/bin/mongod --config /data/mongodb/conf/cf0/config.conf
/opt/mongodb/bin/mongos --config /data/mongodb/conf/ms0/mongos.conf
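
Note that on CentOS 7, /etc/rc.local is only executed at boot if /etc/rc.d/rc.local is executable, so it is worth making sure of that:

$ sudo chmod +x /etc/rc.d/rc.local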

VIII. Notes

Although this setup still uses only three machines, the benefit of sharding is that the primaries of the two shards can be placed on different nodes, which spreads the load that a single node would otherwise carry. With more machines, the shards can of course be placed on separate machines altogether.
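
One way to keep the two primaries on different nodes is to give a different member a higher priority in each replica set. For example, to prefer node2 as the primary of sh1 (a sketch run against the current sh1 primary; member index 1 corresponds to 172.30.2.202:27011 in the configuration above):

$ mongo --port 27011
sh1:PRIMARY> cfg = rs.conf()
sh1:PRIMARY> cfg.members[1].priority = 2
sh1:PRIMARY> rs.reconfig(cfg)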

Article reprinted from the OSChina community [https://www.oschina.net]
