MongoDB Storage Engines

This blog is another in the series for the Percona Server for MongoDB 3.4 bundle release. Today's blog post is about how to migrate between Percona Server for MongoDB storage engines without downtime.

Today, the default storage engine for MongoDB is WiredTiger. In previous versions (before 3.2), it was MMAPv1.

Percona Server for MongoDB features some additional storage engines, giving a DBA the freedom to choose the best storage engine for the application workload. Besides WiredTiger and MMAPv1, our storage engines include MongoRocks (based on RocksDB) and the Percona Memory Engine.

By design, each storage engine has its own algorithms and on-disk format, so the data files are not interchangeable. We can't simply stop Percona Server for MongoDB and start it again with a different storage engine pointing at the same data files.

There are two common methods to change storage engines. One requires downtime, and the second doesn’t.

All database operations are the same regardless of which storage engine is in use. From the database perspective, it doesn't matter which storage engine is underneath: the database layer asks the persistence API to save or retrieve data either way.
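
As a quick illustration (the port and collection name below are placeholders, not from the original post), you can confirm which engine an instance is running with db.serverStatus(), and the same CRUD commands work on top of any of them:

    # Check which storage engine this mongod instance is running (placeholder port).
    mongo --port 27017 --quiet --eval 'db.serverStatus().storageEngine.name'

    # The same operations behave identically regardless of the engine underneath.
    mongo --port 27017 --quiet --eval 'db.demo.insert({x: 1}); printjson(db.demo.findOne())'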

For a single database instance, the best storage engine migration method is to start replication and add a secondary node that uses the different storage engine. Then run rs.stepDown() on the primary, making the secondary the new primary (and killing the old primary).
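
A minimal sketch of that approach, assuming the existing instance was restarted with --replSet and rs.initiate() has already been run; the hostnames, ports, paths and replica set name are illustrative:

    # Start the new node with the target engine and an empty data directory.
    mongod --port 27018 --dbpath /data/rocks --replSet rs0 --storageEngine rocksdb \
           --fork --logpath /data/rocks/mongod.log

    # Add it to the replica set and wait for the initial sync to complete.
    mongo --port 27017 --eval 'rs.add("test:27018")'

    # Once it is in sync, step down the old primary so the new node can be elected.
    mongo --port 27017 --eval 'rs.stepDown()'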

However, this isn't always an option. In that case, take a backup, restart the instance with the new storage engine and an empty data directory, and restore the backup into it (this is the method that requires downtime).
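
A minimal sketch of the backup-and-restore method, assuming a stock installation managed by the mongod service; the paths, port and service user are placeholders:

    # Dump all databases from the running instance.
    mongodump --port 27017 --out /backup/dump

    # Stop the instance and move the old data files aside.
    sudo service mongod stop
    sudo mv /var/lib/mongodb /var/lib/mongodb.old
    sudo mkdir /var/lib/mongodb && sudo chown mongod:mongod /var/lib/mongodb

    # Set storage.engine: rocksdb in /etc/mongod.conf (or pass --storageEngine rocksdb),
    # then start the instance again and restore the dump into the new engine.
    sudo service mongod start
    mongorestore --port 27017 /backup/dump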

In the following set of steps, we’ll explain how to migrate a replica set storage engine from WiredTiger to RocksDB without downtime. I’m assuming that the replica set is already configured and doesn’t have any replication lag.

Please follow the instructions below; a consolidated sketch of the corresponding commands appears after the list:

  1. Check the replica set status with rs.status() and identify the primary and the secondaries (part of the output is omitted for readability):
  2. Choose the secondary that will receive the new storage engine, and change its priority to 0 and its hidden flag to true:


    We are going to work with test:27018 and test:27019, which are at indexes 1 and 2 of the members array, respectively.
  3. Make the last secondary (test:27019) the first instance on which we replace the storage engine:

  4. Check if the configuration is in place:
  5. Then stop the desired secondary and wipe its database folder. As we are running the replica set on a testing box, I'm going to kill the process running on port 27019. If you are using services, please run sudo service mongod stop on the secondary box. Before starting the mongod service again, add the --storageEngine parameter on the command line or the equivalent option in the config file:

  6. This instance is now using the RocksDB storage engine and will perform an initial sync to get the data from the primary node. When it finishes, set the hidden flag back to false and let the application query this box:
  7. Repeat step 5 for the box test:27018, and use the following command in place of step 6. This lets one of the secondaries become the primary. Please be sure all secondaries are healthy before proceeding:
  8. When both secondaries are available for reading and in sync with the primary, we need to change the primary's storage engine. To do so, please run rs.stepDown() on the primary, making this instance a secondary. An election is triggered (and may take a few seconds to complete):
  9. Please identify the new primary with rs.status() and repeat steps 5 and 7 with the old primary.
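
For reference, here is a consolidated sketch of the shell commands behind the steps above. The hostnames and ports (test, 27017 to 27019) come from this walkthrough, but the member indexes, data paths and replica set name are illustrative and should be checked against your own rs.conf() output:

    # Step 1: check the replica set status and identify the primary and secondaries.
    mongo --port 27017 --eval 'printjson(rs.status())'

    # Step 2: hide the chosen secondary (members[2] = test:27019 here) and drop its priority.
    mongo --port 27017 --eval 'cfg = rs.conf();
        cfg.members[2].priority = 0;
        cfg.members[2].hidden = true;
        rs.reconfig(cfg);'

    # Step 4: confirm the new configuration is in place.
    mongo --port 27017 --eval 'printjson(rs.conf().members[2])'

    # Step 5: stop the secondary, wipe its data directory, and restart it on RocksDB.
    mongo --port 27019 --eval 'db.getSiblingDB("admin").shutdownServer()'
    rm -rf /data/rs27019/*
    mongod --port 27019 --dbpath /data/rs27019 --replSet rs0 --storageEngine rocksdb \
           --fork --logpath /data/rs27019/mongod.log

    # Step 6: after the initial sync finishes, unhide the member and restore its priority.
    mongo --port 27017 --eval 'cfg = rs.conf();
        cfg.members[2].hidden = false;
        cfg.members[2].priority = 1;
        rs.reconfig(cfg);'

    # Step 7: repeat the same stop/wipe/restart for test:27018 (members[1]), then unhide it as in step 6.

    # Step 8: step down the primary; an election promotes one of the RocksDB secondaries.
    # (The shell connection drops when the primary steps down; that is expected.)
    mongo --port 27017 --eval 'rs.stepDown()'

    # Step 9: find the new primary and repeat the process for the old primary.
    mongo --port 27018 --eval 'printjson(rs.status())'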

After this process, the instances will run RocksDB without experiencing downtime (just an election to change the primary).

Please feel free to ping us on Twitter @percona with any questions and suggestions for this blog post.
