Simulate setting up a MongoDB replica set on a single CentOS host.
Prepare three configuration files:
mongod.conf
bind_ip=0.0.0.0
port=27017
dbpath=/usr/local/mongo/data/
logpath=/usr/local/mongo/log/mongod.log
fork=true
logappend=true
replSet=myMongoSet
mongod2.conf
bind_ip=0.0.0.0
port=27018
dbpath=/usr/local/mongo/data2/
logpath=/usr/local/mongo/log2/mongod.log
fork=true
logappend=true
replSet=myMongoSet
mongod3.conf
bind_ip=0.0.0.0
port=27019
dbpath=/usr/local/mongo/data3/
logpath=/usr/local/mongo/log3/mongod.log
fork=true
logappend=true
replSet=myMongoSet
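Before starting the instances, the data and log directories referenced above must already exist; mongod does not create them and will refuse to start otherwise. A minimal preparation step, assuming the directory layout in the configuration files above:
mkdir -p /usr/local/mongo/data  /usr/local/mongo/log
mkdir -p /usr/local/mongo/data2 /usr/local/mongo/log2
mkdir -p /usr/local/mongo/data3 /usr/local/mongo/log3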
Start the three mongod instances from the bin directory:
./mongod -f ../conf/mongod.conf
./mongod -f ../conf/mongod2.conf
./mongod -f ../conf/mongod3.conf
Check the processes to verify that the three MongoDB instances started successfully:
[root@192 conf]# ps -ef | grep mongod
root 559 130632 0 14:53 pts/1 00:00:00 grep --color=auto mongod
root 130957 1 0 14:37 ? 00:00:04 ./mongod -f ../conf/mongod.conf
root 130986 1 0 14:37 ? 00:00:04 ./mongod -f ../conf/mongod2.conf
root 131014 1 0 14:37 ? 00:00:04 ./mongod -f ../conf/mongod3.conf
This indicates that all three MongoDB instances have started successfully.
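Alternatively, you can confirm that an instance is ready for connections by checking its log (log paths as configured above) for the "waiting for connections" message, for example:
tail -n 20 /usr/local/mongo/log/mongod.log | grep "waiting for connections"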
Connect to the first mongod instance:
./mongo 192.168.15.31:27017/admin
Prepare to initialize the replica set:
> config={_id:"myMongoSet",members:[{_id:0,host:"192.168.15.31:27017"},{_id:1,host:"192.168.15.31:27018"},{_id:2,host:"192.168.15.31:27019"}]}
{
    "_id" : "myMongoSet",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.15.31:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.15.31:27018"
        },
        {
            "_id" : 2,
            "host" : "192.168.15.31:27019"
        }
    ]
}
The config variable holds the member information for the replica set.
> rs.initiate(config)
This initializes the replica set; the output after execution is as follows:
{ "ok" : 1, "operationTime" : Timestamp(1517640358, 1), "$clusterTime" : { "clusterTime" : Timestamp(1517640358, 1), "signature" : { "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="), "keyId" : NumberLong(0) } } }
"ok" : 1 indicates that the replica set was created successfully.
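If you want to double-check what was applied, the same shell offers helpers for inspecting the configuration and the current primary, for example:
> rs.conf()        // prints the member list and settings that were applied
> rs.isMaster()    // reports which member is currently the primary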
View replica set status
myMongoSet:SECONDARY> rs.status()
{
    "set" : "myMongoSet",
    "date" : ISODate("2018-02-03T06:46:09.449Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : { "ts" : Timestamp(0, 0), "t" : NumberLong(-1) },
        "appliedOpTime" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
        "durableOpTime" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.15.31:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 546,
            "optime" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("2018-02-03T06:45:58Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1517640368, 1),
            "electionDate" : ISODate("2018-02-03T06:46:08Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.15.31:27018",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10,
            "optime" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("2018-02-03T06:45:58Z"),
            "optimeDurableDate" : ISODate("2018-02-03T06:45:58Z"),
            "lastHeartbeat" : ISODate("2018-02-03T06:46:08.931Z"),
            "lastHeartbeatRecv" : ISODate("2018-02-03T06:46:05.733Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.15.31:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 10,
            "optime" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
            "optimeDurable" : { "ts" : Timestamp(1517640358, 1), "t" : NumberLong(-1) },
            "optimeDate" : ISODate("2018-02-03T06:45:58Z"),
            "optimeDurableDate" : ISODate("2018-02-03T06:45:58Z"),
            "lastHeartbeat" : ISODate("2018-02-03T06:46:08.932Z"),
            "lastHeartbeatRecv" : ISODate("2018-02-03T06:46:05.734Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1517640358, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1517640368, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
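rs.status() is verbose; if you only care about each member's role and health, a small helper in the same shell can condense it. This is just a convenience sketch, not part of the standard setup:
rs.status().members.forEach(function (m) {
    // print name, role (PRIMARY/SECONDARY) and health flag for each member
    print(m.name + "  " + m.stateStr + "  health=" + m.health);
});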
Verify data synchronization:
Connect to the 27017 instance:
myMongoSet:PRIMARY> use stu
myMongoSet:PRIMARY> db.stuinfo.insert({"name":"Zhang San","age":12,"address":"Shandong"})
Query the inserted document:
myMongoSet:PRIMARY> db.stuinfo.find()
{ "_id" : ObjectId("5a755b6a4dfb8ddaa17bad20"), "name" : "Zhang San", "age" : 12, "address" : "Shandong" }
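If you want an insert to wait until a majority of members have acknowledged the write, rather than verifying replication manually afterwards, the shell's insert accepts a write concern option. A sketch using a hypothetical second document:
db.stuinfo.insert(
    { "name" : "Li Si", "age" : 13, "address" : "Beijing" },
    { writeConcern : { w : "majority", wtimeout : 5000 } }   // wait up to 5s for a majority to acknowledge
)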
Connect to the 27018 instance:
myMongoSet:SECONDARY> db.stuinfo.find()
Error: error: {
    "operationTime" : Timestamp(1517641210, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1517641210, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
The query fails because, by default, a MongoDB secondary does not accept reads (and it never accepts writes). Solution:
myMongoSet:SECONDARY> rs.slaveOk()
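Note that rs.slaveOk() only applies to the current shell session. An alternative is to set a read preference on the connection, which achieves the same effect:
myMongoSet:SECONDARY> db.getMongo().setReadPref("secondaryPreferred")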
Check whether the data has been synchronized:
myMongoSet:SECONDARY> use stu
switched to db stu
myMongoSet:SECONDARY> db.stuinfo.find()
{ "_id" : ObjectId("5a755b6a4dfb8ddaa17bad20"), "name" : "Zhang San", "age" : 12, "address" : "Shandong" }
Connect to the 27019 instance and repeat the same steps; the data is replicated there as well. At this point the MongoDB replica set has been set up successfully.
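For clients, the usual way to use the replica set is a connection string that lists all members and names the set, so that the driver (or the shell) discovers the current primary automatically and follows it across failovers. A sketch using the addresses from this setup:
./mongo "mongodb://192.168.15.31:27017,192.168.15.31:27018,192.168.15.31:27019/stu?replicaSet=myMongoSet"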
Verify primary failover:
The primary node is currently 27017; kill its process:
[root@192 bin]# ps -ef | grep mongod
root 659 130632 0 15:05 pts/1 00:00:00 grep --color=auto mongod
root 130957 1 0 14:37 ? 00:00:08 ./mongod -f ../conf/mongod.conf
root 130986 1 0 14:37 ? 00:00:08 ./mongod -f ../conf/mongod2.conf
root 131014 1 0 14:37 ? 00:00:08 ./mongod -f ../conf/mongod3.conf
[root@192 bin]# kill 130957
[root@192 bin]# ps -ef | grep mongod
root 674 130632 0 15:05 pts/1 00:00:00 grep --color=auto mongod
root 130986 1 0 14:37 ? 00:00:08 ./mongod -f ../conf/mongod2.conf
root 131014 1 0 14:37 ? 00:00:08 ./mongod -f ../conf/mongod3.conf
The 27017 process has been killed.
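Killing the mongod process simulates a crash. If you only want to hand over the primary role without stopping the process, you can instead run the following on the primary, which forces it to step down to secondary and triggers an election:
myMongoSet:PRIMARY> rs.stepDown()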
Then connect to the 27018 node:
[root@192 bin]# ./mongo 192.168.15.31:27018
MongoDB shell version v3.6.2
connecting to: mongodb://192.168.15.31:27018/test
MongoDB server version: 3.6.2
Server has startup warnings:
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten]
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten]
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten]
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten]
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-03T14:37:09.007+0800 I CONTROL [initandlisten]
myMongoSet:PRIMARY>
As you can see, the 27018 node has become the new primary.
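You can also confirm the new role from any member rather than relying on the shell prompt, for example:
myMongoSet:PRIMARY> rs.isMaster().primary    // prints the host:port of the current primary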
Then restart the 27017 node:
[root@192 bin]# ./mongod -f ../conf/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 703
child process started successfully, parent exiting
Connect to the 27017 node:
[root@192 bin]# ./mongo 192.168.15.31:27017
MongoDB shell version v3.6.2
connecting to: mongodb://192.168.15.31:27017/test
MongoDB server version: 3.6.2
Server has startup warnings:
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten]
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** Read and write access to data and configuration is unrestricted.
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** WARNING: You are running this process as the root user, which is not recommended.
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten]
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten]
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten]
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2018-02-03T15:07:54.488+0800 I CONTROL [initandlisten]
myMongoSet:SECONDARY>
The 27017 node has rejoined the replica set as a secondary.
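The set does not automatically switch back after the old primary rejoins. If you wanted 27017 to be preferred as primary, you could give it a higher election priority by reconfiguring the set from the current primary. A sketch (the priority value is just an example):
myMongoSet:PRIMARY> cfg = rs.conf()
myMongoSet:PRIMARY> cfg.members[0].priority = 2   // member _id 0 is 192.168.15.31:27017
myMongoSet:PRIMARY> rs.reconfig(cfg)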