Tutorial: Building a MongoDB Replica Set with Docker

Published on Sunday, March 26, 2023

Replica Set

Today we'll learn how to build a MongoDB Replica Set with Docker Compose. This time we'll use instances provided by play-with-docker to set up the environment, creating three instances to simulate three virtual machines:

  • node1 192.168.0.8
  • node2 192.168.0.7
  • node3 192.168.0.6

First, add the IPs of all three instances to each instance's /etc/hosts file:

#/etc/hosts
192.168.0.8    node1
192.168.0.7    node2
192.168.0.6    node3

Once that's done, you can test connectivity with the ping command.
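
For example, from node1:

# verify that the other two hosts resolve and respond
ping -c 3 node2
ping -c 3 node3
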
Next, create a docker-compose.yml on each of the three instances and start mongodb:

mkdir -p mongo && cd $_
vi docker-compose.yml
# docker-compose.yml

services:
  mongo:
    image: mongo
    restart: always
    command: [ "mongod", "--bind_ip_all", "--replSet", "dbrs" ]
    network_mode: "host"

In the previous article we saw that the mongodb Dockerfile runs "docker-entrypoint.sh mongod" by default. In today's compose file we override that default command, adding two extra flags: --bind_ip_all and --replSet. We also use network_mode: "host" so each container shares its instance's network stack, making mongod reachable at node1/node2/node3 on port 27017 without any port mapping.
Let's first look at two settings in the official default mongod.conf (available on GitHub):

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.
  
#replication:

With this net setting, mongod only listens on 127.0.0.1:27017; we need it to bind to 0.0.0.0:27017 instead. One approach is to supply a custom config file; another is the --bind_ip_all flag, which achieves the same effect. The replication section is also commented out by default, so we use the --replSet flag to specify the Replica Set name.
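
For reference, here is a minimal sketch of the config-file approach; the file path and volume mount are assumptions for illustration, since our compose file achieves the same thing with the two flags:

# mongod.conf (equivalent to --bind_ip_all --replSet dbrs)
net:
  bindIpAll: true        # listen on all IPv4 and IPv6 addresses
replication:
  replSetName: dbrs      # name of the Replica Set

# in docker-compose.yml, mount the file and point mongod at it:
#   volumes:
#     - ./mongod.conf:/etc/mongod.conf
#   command: [ "mongod", "--config", "/etc/mongod.conf" ]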

Next, run docker compose up -d on each of the three instances to start the containers:

  • 0df0212f690b node1
  • 133a172f223d node2
  • 772fe1388842 node3
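
The container IDs above can be found by running docker ps on each instance, e.g.:

docker ps --format "{{.ID}}  {{.Image}}  {{.Status}}"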

Once all three are running, we pick node1 as the primary node and enter mongosh inside the node1 mongodb container with:

docker exec -it 0df0212f690b mongosh

Use the initiate method to initialize the Replica Set:

rs.initiate()

On success, you'll find that we are now on the primary node (the prompt may briefly show 'other' until the election completes):

test> rs.initiate()
{
  info2: 'no configuration specified. Using a default configuration for the set',
  me: 'node1:27017',
  ok: 1
}
dbrs [direct: other] test> 

dbrs [direct: primary] test> 

However, the set currently has no other members; we still need to add the other two instances with rs.add():

dbrs [direct: primary] test> rs.add("node2:27017")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1679838515, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1679838515, i: 1 })
}
dbrs [direct: primary] test> rs.add("node3:27017")
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1679838527, i: 1 }),
    signature: {
      hash: Binary(Buffer.from("0000000000000000000000000000000000000000", "hex"), 0),
      keyId: Long("0")
    }
  },
  operationTime: Timestamp({ t: 1679838527, i: 1 })
}
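
As an aside, the initialization and the two rs.add() calls can be combined by passing a full configuration to rs.initiate() up front; a minimal sketch:

rs.initiate({
  _id: "dbrs",                          // must match the --replSet name
  members: [
    { _id: 0, host: "node1:27017" },
    { _id: 1, host: "node2:27017" },
    { _id: 2, host: "node3:27017" }
  ]
})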

Once both are added, we can use the rs.status() command to check how the Replica Set is doing:

dbrs [direct: primary] test> rs.status()
{
  set: 'dbrs',
  
  ...
  
  members: [
    {
      _id: 0,
      name: 'node1:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 455,
      optime: { ts: Timestamp({ t: 1679838570, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-03-26T13:49:30.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      lastDurableWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1679838349, i: 2 }),
      electionDate: ISODate("2023-03-26T13:45:49.000Z"),
      configVersion: 5,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: 'node2:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 60,
      optime: { ts: Timestamp({ t: 1679838570, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1679838570, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-03-26T13:49:30.000Z"),
      optimeDurableDate: ISODate("2023-03-26T13:49:30.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      lastDurableWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      lastHeartbeat: ISODate("2023-03-26T13:49:34.002Z"),
      lastHeartbeatRecv: ISODate("2023-03-26T13:49:33.999Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'node1:27017',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    },
    {
      _id: 2,
      name: 'node3:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 47,
      optime: { ts: Timestamp({ t: 1679838570, i: 1 }), t: Long("1") },
      optimeDurable: { ts: Timestamp({ t: 1679838570, i: 1 }), t: Long("1") },
      optimeDate: ISODate("2023-03-26T13:49:30.000Z"),
      optimeDurableDate: ISODate("2023-03-26T13:49:30.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      lastDurableWallTime: ISODate("2023-03-26T13:49:30.004Z"),
      lastHeartbeat: ISODate("2023-03-26T13:49:34.002Z"),
      lastHeartbeatRecv: ISODate("2023-03-26T13:49:34.526Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'node2:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 1
    }
  ]
  
  ...
  
}

If each member's stateStr shows 'PRIMARY' or 'SECONDARY', the setup succeeded. Let's insert a document on the primary node and see whether the data gets replicated automatically:

db.test.insertOne({"node": "node1"})

dbrs [direct: primary] test> db.test.insertOne({"node": "node1"})
{
  acknowledged: true,
  insertedId: ObjectId("64204eb44839d618edd828f1")
}
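
If you want to be explicit that the write must replicate to a majority of members before being acknowledged (which recent MongoDB versions already use as the implicit default for replica sets), you can pass a write concern:

db.test.insertOne({"node": "node1"}, { writeConcern: { w: "majority" } })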

Next, let's go to node2 or node3 and look at the data:

docker exec -it 133a172f223d mongosh

The terminal shows that our current node is a secondary:

dbrs [direct: secondary] test> 

If we run db.test.find() directly here, we get an error:

dbrs [direct: secondary] test> db.test.find()
MongoServerError: not primary and secondaryOk=false - consider using db.getMongo().setReadPref() or readPreference in the connection string

This is because reads are not enabled on secondary nodes by default, so we first need to adjust the read preference:

db.getMongo().setReadPref('primaryPreferred')

Now we can read the document we just inserted from the secondary node:

dbrs [direct: secondary] test> db.test.find()
[ { _id: ObjectId("64204eb44839d618edd828f1"), node: 'node1' } ]
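
Alternatively, as the error message suggests, the read preference can also be set via the connection string when opening mongosh; a sketch using standard connection string options:

docker exec -it 133a172f223d mongosh "mongodb://localhost:27017/?directConnection=true&readPreference=primaryPreferred"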

Finally, let's go back to node1 and shut down its mongodb service to simulate a failure, and see whether automatic failover happens:

[node1] (local) root@192.168.0.8 ~/mongo
$ docker compose down
[+] Running 1/1
 ⠿ Container mongo-mongo-1  Removed 

Then go back to node2 and check again with rs.status():

dbrs [direct: primary] test> rs.status()
{
  set: 'dbrs',
  
  ...
  
  members: [
    {
      _id: 0,
      name: 'node1:27017',
      health: 0,
      state: 8,
      stateStr: '(not reachable/healthy)',
      uptime: 0,
      optime: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDurable: { ts: Timestamp({ t: 0, i: 0 }), t: Long("-1") },
      optimeDate: ISODate("1970-01-01T00:00:00.000Z"),
      optimeDurableDate: ISODate("1970-01-01T00:00:00.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T14:01:22.573Z"),
      lastDurableWallTime: ISODate("2023-03-26T14:01:22.573Z"),
      lastHeartbeat: ISODate("2023-03-26T14:01:54.652Z"),
      lastHeartbeatRecv: ISODate("2023-03-26T14:01:31.584Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: 'Error connecting to node1:27017 (192.168.0.8:27017) :: caused by :: Connection refused',
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    },
    {
      _id: 1,
      name: 'node2:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 1207,
      optime: { ts: Timestamp({ t: 1679839310, i: 4 }), t: Long("2") },
      optimeDate: ISODate("2023-03-26T14:01:50.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T14:01:50.405Z"),
      lastDurableWallTime: ISODate("2023-03-26T14:01:50.405Z"),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: '',
      electionTime: Timestamp({ t: 1679839282, i: 1 }),
      electionDate: ISODate("2023-03-26T14:01:22.000Z"),
      configVersion: 5,
      configTerm: 2,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 2,
      name: 'node3:27017',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 787,
      optime: { ts: Timestamp({ t: 1679839310, i: 4 }), t: Long("2") },
      optimeDurable: { ts: Timestamp({ t: 1679839310, i: 4 }), t: Long("2") },
      optimeDate: ISODate("2023-03-26T14:01:50.000Z"),
      optimeDurableDate: ISODate("2023-03-26T14:01:50.000Z"),
      lastAppliedWallTime: ISODate("2023-03-26T14:01:50.405Z"),
      lastDurableWallTime: ISODate("2023-03-26T14:01:50.405Z"),
      lastHeartbeat: ISODate("2023-03-26T14:01:54.595Z"),
      lastHeartbeatRecv: ISODate("2023-03-26T14:01:54.600Z"),
      pingMs: Long("0"),
      lastHeartbeatMessage: '',
      syncSourceHost: 'node2:27017',
      syncSourceId: 1,
      infoMessage: '',
      configVersion: 5,
      configTerm: 2
    }
  ],
  
  ...
  
}

We can see that node1's state is now (not reachable/healthy), and the primary role has successfully failed over to node2.
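
If you then bring node1 back up, it should rejoin the set as a secondary and catch up from the new primary's oplog:

[node1] (local) root@192.168.0.8 ~/mongo
$ docker compose up -d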


Summary

Today we tested how to set up a Replica Set for mongodb, improving data safety by preventing a single node failure from taking the service offline or losing data.