Dedicated etcd nodes #491
Comments
Provisioning etcd clusters is out of scope for kubeadm. You can provision a single-node etcd instance listening on localhost. We do support pointing to an external etcd cluster, though. I think that should be enough for your use case. Does that make sense? Thanks!
Agree with @luxas. Since kubeadm deploys etcd using Kubernetes, the kubelet would need to be installed on the etcd node as a minimum, which seems a bit sub-optimal IMO. We don't want to go down the path of making kubeadm deploy distro-specific configuration files like systemd units either. I would like kubeadm to support self-hosted etcd at some point, though.
@mumoshu I'm closing this, as it is out of scope to create generic etcd clusters. We do what's needed to get a master up and running, but won't reinvent the wheel for generic clusters; there are many other good tools for that.
@luxas @jamiehannaford Thank you very much for the detailed explanations and clarifications - I now agree that kubeadm should not re-invent the wheel here.
Just curious, but even after that, would dedicated etcd node(s) support be out of scope? Thanks for maintaining the great project btw! Really looking forward to utilizing kubeadm in my work.
Yes, dedicated etcd nodes would still be out of scope. However, there would be one etcd peer per master node, colocated with the API server.
@luxas I see - thanks again for the clarification 👍
How can I specify the external etcd cluster? I didn't find an option for that in the documentation. Do I have to use a YAML file for that?
@nelsonfassis If you look at the docs for the config file (https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file), you can specify them as a YAML list. See the etcd > endpoints section.
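For example, something along these lines should work (a minimal sketch using the v1alpha1 API version shown later in this thread; the IPs are placeholders for your etcd members' client URLs):

  apiVersion: kubeadm.k8s.io/v1alpha1
  kind: MasterConfiguration
  etcd:
    endpoints:
    - http://10.0.0.11:2379
    - http://10.0.0.12:2379
    - http://10.0.0.13:2379

You then pass the file to kubeadm with kubeadm init --config <file>.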
@jamiehannaford exactly, I'd like to use something like kubeadm init --etcd-cluster="". So I guess it is not an option, right?
@nelsonfassis That CLI flag is not supported; you will need to use the configuration file.
@jamiehannaford Thank you for the clarification. I set up a cluster without making the etcd cluster external to Kubernetes, so now I have only one etcd pod running, which is a very dangerous setup. 2 - What would be the implication of running instances of this etcd pod on all nodes as DaemonSets? Wouldn't that be more reliable and just as easy to configure with kubeadm as the current way? Thank you for your help :)
@nelsonfassis It depends on your use case. If you're using k8s for informal workloads (dev, staging, non-public) then self-hosted etcd via kubeadm is usually fine. For production workloads we recommend something a bit more robust, like systemd units, either on separate servers or colocated on the masters.
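For a rough idea of the dedicated approach (a minimal sketch only; the member names and IPs below are made up, and a real deployment should also secure client and peer traffic with TLS), each dedicated etcd host would run something like the following, typically wrapped in a systemd unit:

  etcd --name etcd-1 \
    --listen-peer-urls http://10.0.0.11:2380 \
    --initial-advertise-peer-urls http://10.0.0.11:2380 \
    --listen-client-urls http://10.0.0.11:2379,http://127.0.0.1:2379 \
    --advertise-client-urls http://10.0.0.11:2379 \
    --initial-cluster etcd-1=http://10.0.0.11:2380,etcd-2=http://10.0.0.12:2380,etcd-3=http://10.0.0.13:2380 \
    --initial-cluster-state new

The client URLs on port 2379 are then what you list under etcd.endpoints in the kubeadm config.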
I've specified my external etcd cluster in a basic config.yml file, but my kube cluster fails to fully boot. I'm running this in a boot script for an AWS instance. My basic steps are:
config.yml:

  apiVersion: kubeadm.k8s.io/v1alpha1
  kind: MasterConfiguration
  etcd:
    endpoints:
    - http://${etcd_1_ip}:2379
    - http://${etcd_2_ip}:2379
    - http://${etcd_3_ip}:2379
  token: ${k8stoken}

Then: kubeadm init --config config.yml

journalctl output: (truncated)
I am seeing the same issue as @dylanfoster. Why are the logs complaining about no CNI being found when that is listed as the next step in setting up the cluster? Do we need to populate that entire config file template, or will kubeadm set all other values itself?
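On the CNI messages: kubeadm itself does not install a pod network add-on, so the kubelet keeps logging that no CNI plugin is found until you apply one after kubeadm init completes, roughly like this (the manifest path/URL here is a placeholder for whichever add-on you choose):

  kubectl apply -f <network-addon-manifest.yaml>

As far as I know, you only need to set the config fields you want to override; kubeadm fills in defaults for the rest.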
This is probably a FEATURE REQUEST.
Would it be possible to add support for dedicated etcd nodes into kubeadm?
As of today, several k8s cluster provisioners like kops, kube-aws, etc. support a setup that uses dedicated nodes for etcd - in other words, etcd nodes are separated from the master/control-plane/controller nodes running the apiserver, controller-manager, and so on - for extra reliability of the etcd clusters backing large k8s clusters.
For me, the setup seems to make sense regardless of whether we use kubeadm for cluster bootstrapping or not.
You may already know but in case you missed them, please see coreos/etcd-operator#40 and https://coreos.com/operators/etcd/docs/latest/best_practices.html for more explanations of the setup.
Some more clarifications: I'm aware that etcd.endpoints can already be specified in a kubeadm master configuration file, as described in https://kubernetes.io/docs/admin/kubeadm/#sample-master-configuration.