Since Valkey (a fork of Redis) is just around the corner, I thought I would write a short blog post about some of its configuration options, mainly discussing how to change certain settings dynamically and persist them in the configuration file.
Disk persistence
Let me start with a very important setting, “save,” which controls automatic dumps of the dataset/keys to disk, producing a perfect point-in-time snapshot for data restoration or recovery purposes.
The example below shows the default settings for saving a snapshot.
```
127.0.0.1:6379> config get save
1) "save"
2) "3600 1 300 100 60 10000"
```
These number pairs mean the following:
- 3600 1 – Save a snapshot of the DB every 3600 seconds if at least 1 write operation was performed.
- 300 100 – Save a snapshot of the DB every 300 seconds if at least 100 write operations were performed.
- 60 10000 – Save a snapshot of the DB every 60 seconds if at least 10000 write operations were performed.
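The trigger logic behind these pairs can be sketched in Python. This is a simplified illustration, not Valkey's actual implementation (the real server tracks dirty-key counts internally and forks a background save):

```python
def parse_save(directive: str):
    """Turn a 'save' config value into (seconds, min_changes) pairs."""
    nums = [int(n) for n in directive.split()]
    return list(zip(nums[::2], nums[1::2]))

def should_snapshot(rules, seconds_since_last_save, changes_since_last_save):
    """A snapshot is due when any rule's window and change count are both met."""
    return any(seconds_since_last_save >= secs and changes_since_last_save >= chg
               for secs, chg in rules)

rules = parse_save("3600 1 300 100 60 10000")
print(rules)                               # [(3600, 1), (300, 100), (60, 10000)]
print(should_snapshot(rules, 70, 20000))   # True: the "60 10000" rule is met
print(should_snapshot(rules, 70, 5))       # False: no rule is satisfied yet
```

Note that the rules are OR-ed together: the first matching pair triggers the snapshot.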
We can change these settings dynamically with the command below. Here, we change the snapshot option to “60 1”, which means a save will happen every 60 seconds if at least one write operation is performed.
```
127.0.0.1:6379> config set save "60 1"
OK
```
So the changes work perfectly.
```
127.0.0.1:6379> config get save
1) "save"
2) "60 1"
```
Further, we can persist the settings permanently in the [valkey.conf] file as below.
```
127.0.0.1:6379> config rewrite
OK
```
The change is now reflected in the configuration file.
```
grep -i 'save' /etc/valkey/valkey.conf
…
save 60 1
…
```
In a production environment, we should avoid running the “SAVE” command directly, as it performs a synchronous dump that blocks the server. BGSAVE is a better option for an ad-hoc run: it works in the background and doesn’t affect the running clients.
Having a snapshot (RDB) file alone is not sufficient for restore, and we can still lose some keys/data in case of a crash or corruption in the database. Here AOF (Append Only File) comes in very handy, as it ensures each write is persisted in a log file. If the server restarts, these logs can be replayed, restoring the original state of the dataset.
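The replay idea behind AOF can be sketched with a toy model. This is not the actual AOF format (the real file logs the write commands themselves, in the server protocol); it only shows why replaying an append-only log reproduces the dataset's final state:

```python
# Toy append-only log: persist each operation before applying it,
# then rebuild the dataset from scratch by replaying the log.

def apply(store, entry):
    op, key, *val = entry
    if op == "SET":
        store[key] = val[0]
    elif op == "DEL":
        store.pop(key, None)

log = []

def write(store, entry):
    log.append(entry)      # persist the operation first (append-only)
    apply(store, entry)    # then apply it to the in-memory dataset

store = {}
write(store, ("SET", "user:1", "alice"))
write(store, ("SET", "user:2", "bob"))
write(store, ("DEL", "user:2"))

# Simulate a crash: rebuild an empty store by replaying the log.
recovered = {}
for entry in log:
    apply(recovered, entry)

print(recovered == store)   # True: replay reproduces the original state
```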
In our case, it’s already enabled here.
```
127.0.0.1:6379> config get appendonly
1) "appendonly"
2) "yes"
```
Memory usage
As Valkey/Redis is mainly used for caching, the amount of data that can be stored depends on the amount of allocated memory. We can control how much memory Valkey will use via the [maxmemory] parameter.
Here, we allocate 256 MB to Valkey to use for its operation.
```
127.0.0.1:6379> config set maxmemory 256mb
OK
```
Similarly, we can persist the changes as below.
```
127.0.0.1:6379> config rewrite
OK
```
```
grep -i 'maxmemory' /etc/valkey/valkey.conf
…
maxmemory 256mb
…
```
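A note on the units: in the redis.conf conventions that Valkey inherits, "mb" means 1024*1024 bytes while "m" means 1000*1000. A small illustrative parser (not the server's own) shows how such values map to bytes:

```python
# Illustrative parser for memory-size strings such as "256mb".
# Per redis.conf conventions: 1k = 1000, 1kb = 1024, 1m = 1000000,
# 1mb = 1024*1024, and so on.

UNITS = {
    "b": 1,
    "k": 1000, "kb": 1024,
    "m": 1000**2, "mb": 1024**2,
    "g": 1000**3, "gb": 1024**3,
}

def parse_memory(value: str) -> int:
    value = value.strip().lower()
    digits = "".join(c for c in value if c.isdigit())
    unit = value[len(digits):] or "b"   # bare numbers mean bytes
    return int(digits) * UNITS[unit]

print(parse_memory("256mb"))   # 268435456
```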
Now what happens if the above memory allocations reach the limit?
In that case, we have some eviction policies defined under [maxmemory-policy] settings as below.
- allkeys-lru: Keeps most recently used keys; removes least recently used (LRU) keys
- allkeys-lfu: Keeps frequently used keys; removes least frequently used (LFU) keys
- volatile-lru: Removes least recently used keys with the expire field set to true.
- volatile-lfu: Removes least frequently used keys with the expire field set to true.
- allkeys-random: Randomly removes keys to make space for the new data added.
- volatile-random: Randomly removes keys with expire field set to true.
- volatile-ttl: Removes keys with expire field set to true and the shortest remaining time–to-live (TTL) value.
- noeviction: New values aren’t saved when the memory limit is reached. When a database uses replication, this applies to the primary database.
Reference:- https://redis.io/docs/latest/develop/reference/eviction/
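The allkeys-lru idea above can be sketched with an OrderedDict. This is a simplified exact-LRU cache for illustration; the server actually approximates LRU by sampling a few keys rather than tracking exact recency:

```python
from collections import OrderedDict

class LRUCache:
    """Simplified exact LRU cache illustrating the allkeys-lru policy."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)        # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False) # evict the least recently used key

cache = LRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")          # touch "a" so "b" becomes the LRU key
cache.set("c", 3)       # over capacity: evicts "b"
print(list(cache.data)) # ['a', 'c']
```

The volatile-* variants follow the same logic but only consider keys that have an expire set.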
Here, the policy is set to noeviction, which rejects new writes once the memory limit is reached.
```
127.0.0.1:6379> config get maxmemory-policy
1) "maxmemory-policy"
2) "noeviction"
```
Server-client config
In Valkey/Redis, we can control the number of clients connected to the database with the help of [maxclients] settings.
```
127.0.0.1:6379> config get maxclients
1) "maxclients"
2) "10000"
```
This can also be changed dynamically by [config set] and persisted by [config rewrite].
Sometimes, clients can get disconnected while executing some heavy workload/keys. This can happen when the hard/soft limits defined under the [client-output-buffer-limit] setting are reached.
This can affect multiple client classes, such as normal clients, the pubsub channel, or replication/slave connections.
```
127.0.0.1:6379> config get client-output-buffer-limit
1) "client-output-buffer-limit"
2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"
```
If we want to change the value for a specific client class, we can do so as below.
```
127.0.0.1:6379> config set client-output-buffer-limit "slave 268435456 67108864 30"
OK
```
Similarly it can be persisted by config rewrite.
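The semantics of the three numbers per class (hard limit, soft limit, soft seconds) can be sketched as a simplified model of the disconnect rule; a limit of 0 means disabled:

```python
# Simplified model of the client-output-buffer-limit rule: a client is
# disconnected when its output buffer exceeds the hard limit, or stays
# above the soft limit for soft_seconds in a row. A limit of 0 disables
# that check (as with the "normal 0 0 0" class).

def should_disconnect(buffer_bytes, seconds_over_soft,
                      hard_limit, soft_limit, soft_seconds):
    if hard_limit and buffer_bytes > hard_limit:
        return True                       # hard limit: immediate disconnect
    if soft_limit and buffer_bytes > soft_limit:
        return seconds_over_soft >= soft_seconds
    return False

# "slave 268435456 67108864 60": 256 MB hard limit, 64 MB soft over 60 s
print(should_disconnect(300 * 1024**2, 0, 268435456, 67108864, 60))   # True
print(should_disconnect(100 * 1024**2, 10, 268435456, 67108864, 60))  # False
print(should_disconnect(100 * 1024**2, 61, 268435456, 67108864, 60))  # True
```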
Replication
On some occasions, the slave nodes can be disconnected or lost for a long duration, and when they come back online, a full resync may be needed. This can be controlled by the [repl-backlog-size] setting.
Basically, the bigger the replication backlog [repl-backlog-size] is, the longer the slave can be disconnected from the source/master and still rejoin via a partial resync.
```
127.0.0.1:6379> config get repl-backlog-size
1) "repl-backlog-size"
2) "1048576"
```
This can also be set dynamically and persisted to disk.
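A rough way to size the backlog: multiply your write throughput by the longest disconnection you want a partial resync to survive. This is a back-of-the-envelope sketch under assumed traffic numbers, not a formula from the Valkey docs:

```python
# Back-of-the-envelope sizing for repl-backlog-size: the backlog must
# hold all writes produced while the replica is disconnected, or a full
# resync becomes necessary. The safety factor covers traffic spikes.

def backlog_size_bytes(write_bytes_per_sec: int, max_disconnect_sec: int,
                       safety_factor: float = 2.0) -> int:
    return int(write_bytes_per_sec * max_disconnect_sec * safety_factor)

# e.g. ~100 KB/s of writes, replicas may drop for up to 60 seconds
size = backlog_size_bytes(100 * 1024, 60)
print(size)  # 12288000 bytes (~11.7 MB), vs. the 1048576 (1 MB) default above
```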
Apart from these configurations, there are some risky commands worth mentioning here. We should always be cautious while running commands like FLUSHALL and FLUSHDB, as they can wipe out all keys/datasets from the database environment.
Summary
In this blog post, I mainly explained how to adjust some of the important settings in Valkey and highlighted some key configurations that can impact the workload. Since Valkey is in its early phases, stay tuned for more coverage of this technology in the future!