Etcd
Check the Etcd prerequisites page for a step-by-step installation of Etcd.
Best practices
StorageOS uses etcd as a service, whether it is deployed following the step-by-step instructions or as a custom installation. It is expected that users maintain the availability and integrity of the etcd cluster.
It is highly recommended to keep the cluster backed up and ensure high availability of its data.
Low latency
It is important to keep the latency between StorageOS nodes and the etcd replicas low. Deploying an etcd cluster in a different data center or region can cause StorageOS to detect etcd nodes as unavailable due to latency. A latency of 10ms between StorageOS and etcd is the maximum threshold for the system to function properly.
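As a quick sanity check, you can measure the round-trip time from a StorageOS node to each etcd member, for example by timing a request to the etcd health endpoint. The sketch below assumes etcd is reachable over plain HTTP on the client port; etcd-url is a placeholder for your etcd member address, and you would add the usual TLS flags if your cluster uses certificates.

# Print the total request time to an etcd member's health endpoint
curl -o /dev/null -s -w 'etcd round trip: %{time_total}s\n' http://etcd-url:2379/health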
Monitoring
It is highly recommended to add monitoring to the etcd cluster. Etcd serves Prometheus metrics on the client port: http://etcd-url:2379/metrics.
You can use the StorageOS-developed Grafana dashboards for etcd. When running etcd in production, use the etcd-cluster-as-service dashboard; the etcd-cluster-as-pod dashboard can be used when etcd is deployed by the operator.
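For example, you can verify that the metrics endpoint is reachable and inspect the database size gauges that the dashboards rely on. This is only an illustrative check; etcd-url is a placeholder, and the metric names shown are the standard etcd v3 MVCC size metrics.

# Fetch the Prometheus metrics exposed by etcd and filter the database size gauges
curl -s http://etcd-url:2379/metrics | grep etcd_mvcc_db_total_size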
Defragmentation
Etcd uses revisions to store multiple versions of keys. Compaction removes all key revisions prior to a given revision. Typically the etcd configuration enables automatic compaction of keys to prevent performance degradation and limit the storage required. Compaction can leave the database fragmented, meaning space on disk is available for use by etcd but unavailable to the file system. To reclaim this space, etcd can be defragmented.
Reclaiming space is important because when the etcd database file grows beyond the backend quota (the “DB_BACKEND_BYTES” parameter), the cluster raises an alarm and puts itself into a maintenance mode that only allows reads and deletes. To avoid hitting the backend quota, compaction and defragmentation are required. How often defragmentation is required depends on the churn of key revisions in etcd.
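As an illustration, assuming etcdctl with the v3 API is installed and your endpoints are reachable over plain HTTP (add TLS flags if needed), you can inspect the database size, compact up to the current revision, and defragment one member at a time. The etcd-url address and the revision value are placeholders to be taken from your own cluster.

export ETCDCTL_API=3
# Show DB size and current revision for the endpoint
etcdctl --endpoints=http://etcd-url:2379 endpoint status -w table
# Compact up to the current revision (replace <revision> with the value reported above)
etcdctl --endpoints=http://etcd-url:2379 compaction <revision>
# Defragment a single member (blocking; run on one member at a time)
etcdctl --endpoints=http://etcd-url:2379 defrag
# Clear the NOSPACE alarm if the backend quota was previously exceeded
etcdctl --endpoints=http://etcd-url:2379 alarm disarm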
The Grafana dashboards mentioned above indicate when nodes require defragmentation. Be aware that defragmentation is a blocking operation performed per node, so the etcd node is locked for the duration of the defragmentation. Defragmentation usually takes a few milliseconds to complete.
You can also set up cron jobs that execute the following defragmentation script. It runs a defrag when the DB is at 80% full. A defragmentation operation has to be executed per etcd node and is a blocking operation, so it is recommended not to defragment all etcd members at the same time. If using cron jobs, schedule them at different times; see the example entry after the download commands below.
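# Download the defragmentation script and make it executable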
curl -sSLo defrag-etcd.sh https://raw.githubusercontent.com/storageos/deploy/master/k8s/deploy-storageos/etcd-helpers/etcd-ansible-systemd/roles/install_etcd/templates/defrag-etcd.sh.j2
chmod +x defrag-etcd.sh
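For example, a crontab entry such as the following would run the downloaded script nightly. The schedule and path are only illustrative; stagger the times so that each etcd member defragments at a different hour.

# Run the defragmentation script at 02:00 every day on this etcd node
0 2 * * * /path/to/defrag-etcd.sh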
Known CoreOS Etcd Operator issues
Running etcd as pods with the CoreOS etcd-operator is only recommended for deployments where isolated nodes cannot be used.
Etcd is a distributed key-value store focused on strong consistency, which means that etcd nodes coordinate operations across the cluster to maintain quorum. If quorum is lost, etcd nodes stop and etcd marks its contents as read-only, because it cannot guarantee that new data will be valid. Quorum is fundamental to etcd operations. When running etcd in pods it is therefore important to consider that a loss of quorum could arise from etcd pods being evicted from their nodes.
Operations such as Kubernetes upgrades with rolling node pools can cause a total failure of the etcd cluster as nodes are discarded in favor of new ones.
A 3-node etcd cluster can survive the loss of one node and recover; a 5-node cluster can survive the loss of two nodes. Losing further nodes results in the loss of quorum.
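Before operations that may remove nodes, such as a rolling upgrade of a node pool, it can help to confirm that all etcd members are present and healthy. The commands below are a sketch assuming etcdctl v3 and plain HTTP endpoints; etcd-url is a placeholder.

# Check the health of every member discovered from the initial endpoint
etcdctl --endpoints=http://etcd-url:2379 endpoint health --cluster
# List members and confirm the expected cluster size
etcdctl --endpoints=http://etcd-url:2379 member list -w table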
The etcd-operator doesn’t support a full stop of the cluster. Stopping the etcd cluster causes the loss of the entire etcd keyspace and makes StorageOS unable to perform metadata changes.
The official etcd-operator repository also has a backup deployment operator that can help back up etcd data. Restoring the etcd keyspace from a backup might cause issues because of the disparity between the current cluster state and metadata captured at a different point in time. If you need to restore from a backup after a failure of etcd, contact the StorageOS support team.
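A minimal sketch of taking a point-in-time snapshot with etcdctl is shown below, assuming the v3 API, a reachable member at the placeholder address etcd-url, and a local backup path of your choosing. Restoring such a snapshot should be coordinated with StorageOS support as described above.

# Save a point-in-time snapshot of the etcd keyspace to a local file
ETCDCTL_API=3 etcdctl --endpoints=http://etcd-url:2379 snapshot save /var/backups/etcd-snapshot.db
# Verify the snapshot's integrity and metadata
ETCDCTL_API=3 etcdctl snapshot status /var/backups/etcd-snapshot.db -w table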