Along with the release of Redis 6.0, one of its most exciting companion features has arrived: an official proxy for Redis Cluster, redis-cluster-proxy (https://github.com/RedisLabs/redis-cluster-proxy). Compared with the old approach, where a client had to be given the addresses of the cluster's nodes:
1. redis-cluster-proxy hides the individual nodes of a Redis Cluster behind a proxy, similar to a VIP but simpler: clients no longer need to know how many nodes the cluster has or which are masters and which are replicas; they simply connect to the cluster through the proxy.
2. It also brings some very practical improvements, such as support in cluster mode for multiple-key operations, cross-slot operations, and so on (somewhat reminiscent of the sharding middleware used with relational databases).
Main features of redis-cluster-proxy
The following information comes from the official README. redis-cluster-proxy is a proxy for Redis Cluster. Redis can run in a cluster mode that provides automatic failover and sharding, but this mode requires special clients that understand the cluster protocol. With the proxy, the cluster is abstracted away and can be accessed just like a single Redis instance.
The proxy is multi-threaded and by default uses a multiplexing communication model: each thread has its own connection to the cluster, shared by all the clients served by that thread. In certain cases, however (MULTI transactions or blocking commands), multiplexing is disabled and the client gets its own private cluster connection. This way, clients that only send simple commands such as GET and SET do not need a private set of connections to the Redis cluster.
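The shared-versus-private connection rule can be pictured with a toy classifier. The command sets below are an assumption derived from the description above (transactions and blocking commands disable multiplexing); this is an illustrative model, not the proxy's actual source code:

```python
# Toy model (NOT the proxy's real code) of how a proxy thread might decide
# whether a client can keep using the thread's shared (multiplexed) cluster
# connection or must be moved to a private one. The command classification
# below is an assumption based on the README's description.

# Transaction commands bind state to one connection, so they can't be shared.
TRANSACTION_COMMANDS = {"MULTI", "EXEC", "DISCARD", "WATCH"}
# Blocking commands would stall every client sharing the connection.
BLOCKING_COMMANDS = {"BLPOP", "BRPOP", "BRPOPLPUSH", "BLMOVE", "WAIT"}

def needs_private_connection(command: str) -> bool:
    """Return True if the command forces the client off the shared connection."""
    cmd = command.upper()
    return cmd in TRANSACTION_COMMANDS or cmd in BLOCKING_COMMANDS

# Simple GET/SET traffic can stay multiplexed on the thread's connection:
assert not needs_private_connection("get")
assert not needs_private_connection("SET")
# A transaction or a blocking pop switches the client to a private connection:
assert needs_private_connection("MULTI")
assert needs_private_connection("BLPOP")
```

This is why, as the README puts it, clients that only issue simple commands never pay the cost of a private set of cluster connections.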
The main features of the Redis Cluster proxy are:
1. Automatic routing: each query is automatically routed to the correct node of the cluster.
2. Multi-threading (by default it uses a multiplexing communication model, where each thread has its own cluster connection).
3. Support for both multiplexing and private connection models.
4. Query execution and reply order are guaranteed even in the multiplexing context.
5. Automatic update of the cluster configuration after ASK|MOVED errors: when these errors appear in replies, the proxy fetches the updated cluster configuration and remaps all slots. All queries are re-executed once the update has finished, so from the client's point of view everything runs normally (clients do not see ASK|MOVED errors: after the configuration update they directly receive the expected results).
6. Cross-slot/cross-node queries: many commands involving multiple keys that belong to different slots (or even different cluster nodes) are supported. Such commands are split into multiple sub-queries that are routed to the appropriate slots/nodes. Reply handling is command-specific: some commands, such as MGET, merge all replies as if they were a single reply; others, such as MSET or DEL, sum the results of all replies. Since these queries break the atomicity of the commands, their use is optional (disabled by default).
7. Commands that target no specific node/slot (such as DBSIZE) are broadcast to all nodes, and the replies are map-reduced to produce the sum of all the returned values.
8. Additional proxy-specific commands for performing certain proxy operations.
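Features 1 and 6 above hinge on the slot mapping defined by the Redis Cluster specification: slot = CRC16(key) mod 16384, with only the {hash tag} substring hashed when one is present. A minimal Python sketch of that mapping, plus the kind of key grouping a cross-slot MGET implies (illustrative only, not the proxy's actual C implementation):

```python
# Sketch of the slot routing behind automatic routing and cross-slot splitting,
# using the hashing scheme from the Redis Cluster specification:
# slot = CRC16(key) mod 16384; if the key contains a non-empty {hash tag},
# only the tag's content is hashed.

def crc16_xmodem(data: bytes) -> int:
    """CRC16-CCITT (XMODEM), the CRC variant used by Redis Cluster."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def key_hash_slot(key: str) -> int:
    """Map a key to one of the 16384 cluster slots, honoring {hash tags}."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:          # non-empty tag: hash only its content
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

def split_by_slot(keys):
    """Group an MGET-style key list by slot, as a proxy must do before
    routing each sub-query to the node that owns the slot."""
    groups = {}
    for k in keys:
        groups.setdefault(key_hash_slot(k), []).append(k)
    return groups

# Keys sharing a hash tag land in the same slot and need no splitting:
assert key_hash_slot("{user:1}:name") == key_hash_slot("{user:1}:email")
```

Keys without a common hash tag usually fall into different slots, which is exactly the case where the proxy must split the query and then recombine the replies.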
gcc 5+ build dependency for Redis 6.0 and redis-cluster-proxy
Compiling Redis 6.0 and redis-cluster-proxy requires gcc 5 or newer. The default gcc on CentOS 7 is 4.8.x, which does not meet the requirement; compiling with it fails with errors like:
server.h:1022:5: error: expected specifier-qualifier-list before '_Atomic
A similar error is discussed here: https://wanghenshui.github.io/2019/12/31/redis-ce. Workaround references follow; my environment is CentOS 7, and this cost me the better part of a day.
1. https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos — tested, works.
2. https://blog.csdn.net/displayMessage/article/details/85602701 — building gcc from source. The source tarball is about 120 MB; some report the build takes 40 minutes, but on my machine it had not finished after more than an hour, so I went with the first option.
Setting up the Redis Cluster environment
Test topology, as shown below: a 3-master/3-replica, 6-node Redis Cluster running on Docker.
For the cluster details, see the earlier article on automated Redis Cluster installation, scale-out and scale-in, which walks through building the cluster quickly.
cd redis-cluster-proxy
2. Work around the gcc version requirement. I spent a long time on this; building gcc 5+ from source ran for over an hour without finishing.
The following approach worked, from https://stackoverflow.com/questions/55345373/how-to-install-gcc-g-8-on-centos:
On CentOS 7, you can install GCC 8 from the Developer Toolset. First enable the Software Collections repository: yum install centos-release-scl
Then install GCC 8 and its C++ compiler: yum install devtoolset-8-gcc devtoolset-8-gcc-c++
To switch to a shell which defaults gcc and g++ to this GCC version, use: scl enable devtoolset-8 -- bash
You need to wrap all commands in the scl call, so that the process environment changes performed by this command affect all subshells. For example, you could use the scl command to invoke a shell script that performs the required actions.
3. make PREFIX=/usr/local/redis_cluster_proxy install
4. The redis-cluster-proxy configuration file. Parameters can be passed directly on the command line at startup, but it is better to start with a configuration file. Its options, shown below, are clean and clearly commented; I have added brief notes and look forward to discovering more features.
# Redis Cluster Proxy configuration file example.
# If starting from a configuration file, the -c argument must be specified
# ./redis-cluster-proxy -c /path/to/proxy.conf
################################## INCLUDES ###################################
# Include one or more other config files here. Include files can include other files.
# Path(s) of the configuration file(s) to include
# If instead you are interested in using includes to override configuration options, it is better to use include as the last line.
# include /path/to/local.conf
# include /path/to/other.conf
######################## CLUSTER ENTRY POINT ADDRESS ##########################
# Indicate the entry point address in the same way it can be indicated in the
# The cluster's own entry points: here the six nodes (3 masters, 3 replicas), 192.168.0.61 through 192.168.0.66
# redis-cluster-proxy command line arguments.
# Note that it can be overridden by the command line argument itself.
# You can also specify multiple entry-points, by adding more lines, ie:
# cluster 127.0.0.1:7000
# cluster 127.0.0.1:7001
# You can also use the "entry-point" alias instead of cluster, ie:
# entry-point 127.0.0.1:7000
#
# cluster 127.0.0.1:7000
cluster 192.168.0.61:8888
cluster 192.168.0.62:8888
cluster 192.168.0.63:8888
cluster 192.168.0.64:8888
cluster 192.168.0.65:8888
cluster 192.168.0.66:8888
################################### MAIN ######################################
# Set the port used by Redis Cluster Proxy to listen to incoming connections
# Port for redis-cluster-proxy to listen on
# from clients (default 7777)
port 7777
# IP address binding: set to the address of the host running redis-cluster-proxy
# If you want you can bind a single interface, if the bind option is not
# specified all the interfaces will listen for incoming connections.
# You can also bind on multiple interfaces by declaring bind on multiple lines
#
# bind 127.0.0.1
bind 192.168.0.12
# Unix socket file path
# Specify the path for the Unix socket that will be used to listen for
# incoming connections. There is no default, so Redis Cluster Proxy won't
# listen on a Unix socket when not specified.
#
# unixsocket /path/to/proxy.socket
# Set the Unix socket file permissions (default 0)
#
# unixsocketperm 760
# Number of threads
# Set the number of threads.
threads 8
# Set the TCP keep-alive value on the Redis Cluster Proxy's socket
#
# tcpkeepalive 300
# Set the TCP backlog on the Redis Cluster Proxy's socket
#
# tcp-backlog 511
# Connection pool settings
# Size of the connections pool used to provide ready-to-use sockets to
# private connections. The number (size) indicates the number of starting
# connections in the pool.
# Use 0 to disable connections pool at all.
# Every thread will have its pool of ready-to-use connections.
# When the proxy starts, every thread will populate a pool containing
# connections to all the nodes of the cluster.
# Whenever a client needs a private connection, it can take a connection
# from the pool, if available. This will speed-up the client transition from
# the thread's shared connection to its own private connection, since the
# connection from the thread's pool should be already connected and
# ready-to-use. Otherwise, clients with private connections must re-connect
# to the nodes of the cluster (this re-connection will act in a 'lazy' way).
#
# connections-pool-size 10
# Minimum number of connections in the pool. Below this value, the
# thread will start re-spawning connections at the defined rate until
# the pool will be full again.
#
# connections-pool-min-size 10
# Interval in milliseconds used to re-spawn connections in the pool.
# Whenever the number of connections in the pool drops below the minimum
# (see 'connections-pool-min-size' above), the thread will start
# re-spawning connections in the pool, until the pool will be full again.
# New connections will be added at this specified interval.
#
# connections-pool-spawn-every 50
# Number of connections to re-spawn in the pool at every cycle that will
# happen with an interval defined by 'connections-pool-spawn-every' (see above).
#
# connections-pool-spawn-rate 50
# Run mode: use 'no' the first time, so that the startup log and any errors are
# printed straight to the console, which makes startup problems easy to diagnose.
# Oddly, when I first set this to 'yes', the errors written to the log file
# did not match what was printed directly to the console.
# Run Redis Cluster Proxy as a daemon.
daemonize yes
# pid file
# If a pid file is specified, the proxy writes it where specified at startup
# and removes it at exit.
#
# When the proxy runs non daemonized, no pid file is created if none is
# specified in the configuration. When the proxy is daemonized, the pid file
# is used even if not specified, defaulting to
# "/var/run/redis-cluster-proxy.pid".
#
# Creating a pid file is best effort: if the proxy is not able to create it
# nothing bad happens, the server will start and run normally.
#
#pidfile /var/run/redis-cluster-proxy.pid
# Log file: once the proxy starts normally, a log file is strongly recommended; all runtime errors can then be found in the log
# Specify the log file name. Also the empty string can be used to force
# Redis Cluster Proxy to log on the standard output. Note that if you use
# standard output for logging but daemonize, logs will be sent to /dev/null
#
#logfile ""
logfile "/usr/local/redis_cluster_proxy/redis_cluster_proxy.log"
# Cross-slot operations: set to yes here, i.e. allowed
# Enable cross-slot queries that can use multiple keys belonging to different
# slots or even different nodes.
# WARN: these queries will break the atomicity design of many Redis
# commands.
# NOTE: cross-slots queries are not supported by all the commands, even if
# this feature is enabled
#
# enable-cross-slot no
enable-cross-slot yes
# Maximum number of clients allowed
#
# max-clients 10000
# Password for authenticating against the Redis cluster. If the cluster nodes require authentication, it is strongly recommended that all nodes share a single password
# Authentication password used to authenticate on the cluster in case its nodes
# are password-protected. The password will be used both for fetching cluster's
# configuration and to automatically authenticate proxy's internal connections
# to the cluster itself (both multiplexing shared connections and clients'
# private connections. So, clients connected to the proxy won't need to issue
# the Redis AUTH command in order to be authenticated.
#
# auth mypassw
auth your_redis_cluster_password
# Username for authentication, introduced in Redis 6.0; not set here
# Authentication username (supported by Redis >= 6.0)
#
# auth-user myuser
################################# LOGGING #####################################
# Log level: can be debug, info, success, warning or error.
log-level error
# Dump queries received from clients in the log (log-level debug required)
#
# dump-queries no
# Dump buffer in the log (log-level debug required)
#
# dump-buffer no
# Dump requests' queues (requests to send to cluster, request pending, ...)
# in the log (log-level debug required)
#
# dump-queues no
Start redis-cluster-proxy: ./bin/redis-cluster-proxy -c ./proxy.conf
Note: on the first run, start it in the foreground so that the startup log and any errors are printed directly; once it starts cleanly, switch to daemonize mode.
I suggest this because, for the same error, the log printed to the console did not fully match the log written to file when running in daemonize mode.
Trying out redis-cluster-proxy
Unlike the usual way of connecting to a Redis cluster, with redis-cluster-proxy the client connects to the proxy node without needing any details about the cluster itself. Here I try a multiple-key operation.
Connecting with a traditional cluster client to inspect the data written by the multiple-key operation above confirms that the keys were indeed written to different nodes of the cluster.
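This matches the command-specific reply handling from the feature list: MGET merges the per-node sub-replies into one list, while commands like DEL sum them. A small illustrative sketch of that aggregation step (assumed behavior per the README, not the proxy's actual code):

```python
# Illustrative aggregation of sub-replies for cross-slot commands, mirroring
# the command-specific reply handling described in the proxy's README.

def merge_mget(key_order, replies_by_key):
    """MGET: merge per-node replies into one list in the original key order.
    Keys no node returned a value for come back as None (Redis nil)."""
    return [replies_by_key.get(k) for k in key_order]

def aggregate_counts(sub_replies):
    """DEL-style commands: each node reports how many keys it handled; sum them."""
    return sum(sub_replies)

# Two nodes answered parts of MGET k1 k2 k3:
node_a = {"k1": "v1", "k3": "v3"}   # node A owned the slots for k1 and k3
node_b = {"k2": "v2"}               # node B owned the slot for k2
merged = merge_mget(["k1", "k2", "k3"], {**node_a, **node_b})
assert merged == ["v1", "v2", "v3"]

# DEL across nodes: node A deleted 2 keys, node B deleted 1, client sees 3.
assert aggregate_counts([2, 1]) == 3
```

From the client's point of view there is a single reply, even though the underlying query was split across nodes, which is exactly what the cross-node write above demonstrated.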
Failover test
Crudely shut down one master node, 192.168.0.61, and see whether redis-cluster-proxy can still read and write.
1. The Redis cluster's own failover works without any problem, completely successfully.
2. 192.168.0.64 takes over from 192.168.0.61 as master.
3. Data operations through the proxy hang.
The redis-cluster-proxy log reports that node 192.168.0.61 cannot be reached, and the proxy fails and exits.
As the log itself states, this is Redis Cluster Proxy v999.999.999 (unstable); a more stable release is something to look forward to.
The author himself has responded to this kind of issue; see https://github.com/RedisLabs/redis-cluster-proxy/issues/36:
The Proxy currently requires that all nodes of the cluster must be up at startup when it fetches the cluster's internal map.
I'll probably change this in the next weeks.
Is redis-cluster-proxy the perfect solution?
Having just been released, it has seen little production use and certainly still has its share of pitfalls, but that is no reason not to have high expectations for it.
Even a first try shows that redis-cluster-proxy is a very lightweight, clean and simple proxy layer; it solves several real problems with Redis Cluster and makes things more convenient for applications.
For teams without the ability to develop against the source, the reliability and authority of an official project are a real advantage over third-party proxy middleware.
So, is redis-cluster-proxy a perfect solution? Two questions remain:
1. How do you deal with redis-cluster-proxy being a single point of failure?
2. How should the proxy node cope with a network traffic storm?