
Conversation


@adambom adambom commented Oct 24, 2016

If you ever lose your etcd cluster for whatever reason, or ever need to restart it, you should be able to recover its state. Mentioned in this issue: #75

I have a module that will automatically snapshot the drives. If that's of interest, I'm happy to commit that as well.

@rimusz

rimusz commented Oct 24, 2016

The snapshot module PR would be awesome too.

@adambom
Author

adambom commented Oct 24, 2016

@rimusz OK, I've updated this to take automatic snapshots of all EBS volumes associated with the cluster, at a rate of once every two hours. This includes the etcd volumes as well as any persistent volumes created by Kubernetes.

Happy to split this into two PRs if you'd prefer to review it that way.

For some background, I originally tried doing the snapshotting using only CloudWatch, but it wouldn't work without some manual configuration, so I ended up following the Lambda approach described here. Since it's based on tags, this also gives us the added benefit of backing up the dynamically created volumes as well.
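For reference, the scheduling side of that Lambda approach can be sketched in Terraform roughly like this. This is a sketch under assumptions, not the exact code in this PR: the resource names, the `rate(2 hours)` expression, and the referenced IAM role are all illustrative.

```hcl
# Sketch only: fire a CloudWatch event every two hours and invoke a
# snapshot Lambda. Names and the IAM role wiring are assumptions.
resource "aws_cloudwatch_event_rule" "ebs-snapshot" {
  name                = "ebs-snapshot"
  schedule_expression = "rate(2 hours)"
}

resource "aws_lambda_function" "ebs-snapshot" {
  function_name = "ebs-snapshot"
  runtime       = "python2.7"
  handler       = "snapshot.handler"
  filename      = "snapshot.zip"
  # Role granting ec2:DescribeVolumes / ec2:CreateSnapshot,
  # assumed to be defined elsewhere.
  role          = "${ aws_iam_role.ebs-snapshot.arn }"
}

resource "aws_cloudwatch_event_target" "ebs-snapshot" {
  rule = "${ aws_cloudwatch_event_rule.ebs-snapshot.name }"
  arn  = "${ aws_lambda_function.ebs-snapshot.arn }"
}

resource "aws_lambda_permission" "ebs-snapshot" {
  statement_id  = "AllowExecutionFromCloudWatch"
  action        = "lambda:InvokeFunction"
  function_name = "${ aws_lambda_function.ebs-snapshot.function_name }"
  principal     = "events.amazonaws.com"
  source_arn    = "${ aws_cloudwatch_event_rule.ebs-snapshot.arn }"
}
```

The Lambda itself would then list volumes by a cluster tag and call `CreateSnapshot` on each match, which is what makes the dynamically created Kubernetes volumes get picked up automatically.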

@wellsie wellsie self-assigned this Oct 24, 2016
@wellsie
Member

wellsie commented Oct 24, 2016

ty @adambom !!

No need to split into two PRs - this one is fine.

If you wanted to add a few notes to the readme, that would be great - np if not, I will add.

@adambom
Author

adambom commented Oct 24, 2016

@wellsie will do

@adambom
Author

adambom commented Oct 24, 2016

Done

```hcl
resource "aws_volume_attachment" "etcd" {
  count = "${ length( split(",", var.azs) ) }"

  device_name = "/dev/xvdf"
```
Member

@wellsie wellsie Oct 25, 2016


@adambom - Where are you mounting xvdf? Or am I missing something?

Author


I forgot to include some changes to cloud-config.tf. On startup we format /dev/xvdf, mount it at /media/etcd2, and instruct etcd to persist its state there. I chose xvdf because that's the device we use on worker nodes for ephemeral storage. See 8e73f42.
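Those startup steps would look roughly like the following CoreOS cloud-config fragment. This is a sketch under assumptions, not the contents of 8e73f42: the unit names and the format-only-if-blank guard are illustrative.

```yaml
#cloud-config
coreos:
  units:
    # Format /dev/xvdf only if it has no filesystem yet, so etcd state
    # on an existing volume survives reboots. (Guard is an assumption.)
    - name: format-etcd2-volume.service
      command: start
      content: |
        [Unit]
        Description=Format /dev/xvdf for etcd2 state
        Before=media-etcd2.mount
        [Service]
        Type=oneshot
        RemainAfterExit=yes
        ExecStart=/bin/bash -c 'blkid /dev/xvdf || mkfs.ext4 /dev/xvdf'
    # Mount the volume at /media/etcd2. Note the systemd convention:
    # the mount unit name must match the mount path.
    - name: media-etcd2.mount
      command: start
      content: |
        [Mount]
        What=/dev/xvdf
        Where=/media/etcd2
        Type=ext4
    # Point etcd2 at the persistent data directory.
    - name: etcd2.service
      drop-ins:
        - name: 10-data-dir.conf
          content: |
            [Service]
            Environment=ETCD_DATA_DIR=/media/etcd2
```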

