This document covers what Rekor log sharding is and how to shard the log.
## What is sharding?
When Rekor is started for the first time, its backend is a transparency log built on a single Merkle Tree. This log can grow indefinitely as entries are added, which can present issues over time. To resolve some of these issues the log can be "sharded" into multiple Merkle Trees.
## Why do we shard the log?
Sharding the log allows for:
- Freezing the current log and rotating signing keys if needed
- Easier and faster querying for entries from the tree
- Easier scaling and platform migrations
## How does this impact user experience?
It shouldn't! End users shouldn't notice any difference in their experience. They can still query via UUID, and Rekor will find the correct entry from whichever shard it's in. Querying by log index works as well, since log indices are distinct and increase across shards.
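As a sketch of how this lookup can work: the lengths of the inactive shards are enough to map a global log index to a shard and a shard-local index. The function below is illustrative only (its name and output format are made up, not Rekor's actual implementation):

```shell
# Illustrative sketch: given the lengths of each inactive shard (oldest
# first), map a global log index to a shard number and a shard-local index.
# Indices past all inactive shards fall into the active shard.
resolve_shard() {
  local index=$1
  shift
  local shard=0
  local len
  for len in "$@"; do
    if [ "$index" -lt "$len" ]; then
      # The index falls within this inactive shard
      echo "shard=$shard local_index=$index"
      return
    fi
    index=$((index - len))
    shard=$((shard + 1))
  done
  # The index falls in the active shard
  echo "shard=$shard local_index=$index"
}

# Two inactive shards of length 5 and 4; global index 7 lands in shard 1.
resolve_shard 7 5 4   # prints: shard=1 local_index=2
```

Because each shard records its final length, global log indices stay unique and strictly increasing across shards, which is why clients can keep querying by log index unchanged.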
For more details around sharding, see the original design doc!
Note: You'll need to join the firstname.lastname@example.org Google group for access to the doc.
## How do I shard the Rekor log?
Sharding the Rekor log requires some downtime for your Rekor service. This is necessary because you'll need the final length of the current shard later on, so new entries can't be added while sharding is in progress.
Follow these steps to shard the log:
- Stop all traffic to Rekor so new entries can't be added to the log
- Store the tree ID and length of the current active shard:

  ```shell
  CURRENT_TREE_ID=$(rekor-cli loginfo --format json | jq -r .TreeID)
  CURRENT_SHARD_LENGTH=$(rekor-cli loginfo --format json | jq -r .TreeSize)
  ```
- Connect to the production cluster. Port-forward the running `trillian_log_server` container and run the `createtree` script. This will create a new Merkle Tree, which will become the new active shard:

  ```shell
  kubectl port-forward -n trillian-system deploy/trillian-log-server 8090:8090

  # This is the Tree ID of the new active shard
  NEW_TREE_ID=$(createtree --admin_server localhost:8090)
  ```
- Update the Rekor `sharding-config` ConfigMap with details of the now-inactive shard:

  ```shell
  kubectl edit configmap sharding-config -n rekor-system
  ```

  Append the following onto the `sharding-config.yaml` key (it will be empty if this is the first shard):

  ```yaml
  - treeID: $CURRENT_TREE_ID
    treeLength: $CURRENT_SHARD_LENGTH
  ```
- In your rekor-server Deployment, update the `--trillian_log_server.tlog_id` flag to point to the new Tree ID, then redeploy Rekor to the cluster with these changes.
- Restart traffic to your Rekor service.
Congratulations, you've successfully sharded the log!
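For reference, after a couple of rounds of sharding, a populated `sharding-config.yaml` key could look like the following (the tree IDs and lengths here are made-up example values, with the oldest shard listed first):

```yaml
- treeID: 2343437089496692983
  treeLength: 171897
- treeID: 7632187240125256250
  treeLength: 452104
```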