Redis Study Notes: Master-Slave Replication

In a distributed system, the usual way to cope with single points of failure is to copy the data into multiple replicas deployed on other machines, which covers needs such as failure recovery and load balancing. Redis is no exception: it provides a replication feature that maintains multiple Redis copies of the same data. Replication is the foundation of a highly available Redis setup.

Establishing Replication

Redis instances that take part in replication are divided into masters and slaves. Each slave has exactly one master, while a master can have multiple slaves; the replicated data flows in one direction only, from master to slave. Replication can be established in any of the following three ways (the first two are sketched right after the list):

  1. Add slaveof {masterHost} {masterPort} to the configuration file so that it takes effect when Redis starts
  2. Append --slaveof {masterHost} {masterPort} to the redis-server startup command
  3. Issue the command directly: slaveof {masterHost} {masterPort}
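
A sketch of the first two forms (the configuration file path below is only a placeholder); the third form is the one used in the transcript that follows:

# Option 1: in the slave's configuration file, applied when Redis starts
slaveof 127.0.0.1 6379

# Option 2: passed as an argument to the redis-server startup command
redis-server /path/to/redis-6666.conf --slaveof 127.0.0.1 6379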

Start two Redis services locally on different ports: the default 6379 and a second instance on port 6666.
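
As a sketch, the second instance can be brought up simply by overriding the port on the command line (any other way of running it on port 6666 works just as well):

redis-server                  # master on the default port 6379
redis-server --port 6666      # the soon-to-be slave on port 6666

Then enable replication on the 6666 instance: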

127.0.0.1:6666> slaveof 127.0.0.1 6379
....
Finished with success

At this point the key myname does not yet exist on the 6666 instance:

127.0.0.1:6666> get myname
(nil)

Perform a write on the master instance on port 6379:

127.0.0.1:6379> set myname Charlie
OK

Checking the 6666 instance again, the value has already been replicated over:

127.0.0.1:6666> get myname
"Charlie"

Viewing master-slave information with info

Master node, port 6379:

127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=6666,state=online,offset=957,lag=0
master_replid:3e32aa6882ae57742f8d12bd9eb3c530c3ff5a74
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:957
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:957

Slave node, port 6666:

# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:6
master_sync_in_progress:0
slave_repl_offset:915
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:3e32aa6882ae57742f8d12bd9eb3c530c3ff5a74
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:915
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:113
repl_backlog_histlen:803

Disconnecting Replication

slaveof no one

Disconnect the slave on port 6666; at this point:

127.0.0.1:6666> slaveof no one
···
OK

Checking with the info replication command now shows that the node's role has changed from slave to master:

127.0.0.1:6666> info replication
# Replication
role:master
connected_slaves:0
master_replid:f15dd49506a152bd5fbbeaab728314e7cc62fc15
master_replid2:3e32aa6882ae57742f8d12bd9eb3c530c3ff5a74
master_repl_offset:1279
second_repl_offset:1280
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:113
repl_backlog_histlen:1167
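
Note that slaveof no one only breaks the replication link; it does not flush the dataset, so the promoted node still holds the data it replicated earlier (a quick check, assuming myname has not been changed in the meantime):

127.0.0.1:6666> get myname
"Charlie"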

Security

For nodes whose data is important, the master usually enables password authentication by setting the requirepass parameter, and every client must then authenticate with the auth command. The replication connection from a slave to its master is made through a specially flagged client, so the slave's masterauth parameter must be set to the same password as the master; only then can the slave connect to the master correctly and start the replication process.
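
A minimal sketch of the matching configuration, assuming the password is mypassword (a placeholder):

# On the master: require clients (including replication clients) to authenticate
requirepass mypassword

# On the slave: the password it sends to the master when starting replication
masterauth mypassword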

Read-Only

Slaves are read-only by default (replica-read-only yes), which keeps the replicated data consistent. A related setting, replica-serve-stale-data, controls whether a slave keeps answering read requests while its link with the master is down or the initial synchronization is still running; it also defaults to yes:

# When a replica loses its connection with the master, or when the replication
# is still in progress, the replica can act in two different ways:
#
# 1) if replica-serve-stale-data is set to 'yes' (the default) the replica will
# still reply to client requests, possibly with out of date data, or the
# data set may just be empty if this is the first synchronization.
#
# 2) if replica-serve-stale-data is set to 'no' the replica will reply with
# an error "SYNC with master in progress" to all the kind of commands
# but to INFO, replicaOF, AUTH, PING, SHUTDOWN, REPLCONF, ROLE, CONFIG,
# SUBSCRIBE, UNSUBSCRIBE, PSUBSCRIBE, PUNSUBSCRIBE, PUBLISH, PUBSUB,
# COMMAND, POST, HOST: and LATENCY.
#
replica-serve-stale-data yes
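
Because of the read-only mode, a write attempted against the slave is rejected; a sketch of the expected response (not taken from an actual session here):

127.0.0.1:6666> set myname Bob
(error) READONLY You can't write against a read only replica.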

Transmission Delay

The master and its slaves usually do not run on the same server, so replication has to go over the network, and network latency becomes a source of instability. Redis provides the repl-disable-tcp-nodelay parameter to control whether TCP_NODELAY is disabled on the replication socket; it defaults to no, i.e. TCP_NODELAY stays enabled:

# Disable TCP_NODELAY on the replica socket after SYNC?
#
# If you select "yes" Redis will use a smaller number of TCP packets and
# less bandwidth to send data to replicas. But this can add a delay for
# the data to appear on the replica side, up to 40 milliseconds with
# Linux kernels using a default configuration.
#
# If you select "no" the delay for data to appear on the replica side will
# be reduced but more bandwidth will be used for replication.
#
# By default we optimize for low latency, but in very high traffic conditions
# or when the master and replicas are many hops away, turning this to "yes" may
# be a good idea.
repl-disable-tcp-nodelay no

  • When it is set to no, command data produced by the master is sent to the slaves immediately regardless of size, so master-slave latency stays small at the cost of extra network bandwidth. This suits deployments where the master and slaves share a good network, such as the same rack or the same machine room.
  • When it is set to yes, the master merges small TCP packets to save bandwidth. The send interval depends on the Linux kernel, typically around 40 milliseconds by default. This saves bandwidth but increases master-slave latency, and suits deployments with a complex or bandwidth-constrained network between master and slaves, such as cross-machine-room setups (a runtime example is sketched below).
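
The parameter can also be adjusted at runtime with config set; a sketch, assuming the change is made on the master:

127.0.0.1:6379> config set repl-disable-tcp-nodelay yes
OK
127.0.0.1:6379> config get repl-disable-tcp-nodelay
1) "repl-disable-tcp-nodelay"
2) "yes"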