This is just to spark some discussion around deploying Habitat containers into Kubernetes. Whenever I deploy Habitat onto anything, I see a lot of utility in the node understanding what else exists, so that it can make intelligent configuration decisions on its own:
1. Where is my database?
2. Who is my master, if one exists?
3. Should I be a slave or a master?
4. Who are my neighbors like me?
In the context of Kubernetes, I can already answer question 1 easily out of the box: I can define a database Service, either internal or external, and I get a DNS alias or an environment variable that I can inject into my pods.
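For reference, here is a minimal sketch of what I mean, assuming a database running behind pods labeled `app: database` on port 5432 (the names and port are just placeholders):

```yaml
# Sketch: a ClusterIP Service fronting the database pods.
# Pods in the same namespace can reach it at the DNS name "database"
# (or database.<namespace>.svc.cluster.local), and Kubernetes also
# injects DATABASE_SERVICE_HOST / DATABASE_SERVICE_PORT env vars
# into pods created after the Service exists.
apiVersion: v1
kind: Service
metadata:
  name: database
spec:
  selector:
    app: database        # matches the labels on the database pods
  ports:
    - port: 5432         # port clients connect to
      targetPort: 5432   # port the database container listens on
```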
However, questions 2 through 4 are difficult to answer with k8s. To my knowledge, pods in a service can only talk to each other if you already know the exact IP or alias of each pod, and so far I've only gathered that info manually with kubectl. I admit I don't know how to get it from within a pod.
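One thing I've been looking at (just a sketch, not something I've proven out) is a headless Service: setting `clusterIP: None` means DNS returns the IPs of the backing pods directly instead of a single virtual IP, so a pod could discover its peers by resolving the service name. The service and label names below are placeholders:

```yaml
# Sketch: a headless Service for peer discovery.
# Resolving "habitat-peers" from inside a pod returns the IPs of all
# pods matching the selector, rather than one virtual cluster IP.
apiVersion: v1
kind: Service
metadata:
  name: habitat-peers
spec:
  clusterIP: None          # headless: no virtual IP, just per-pod records
  selector:
    app: my-habitat-app    # placeholder label for the supervisor pods
  ports:
    - port: 9638           # Habitat gossip port
```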
Now with Habitat, if I set up the ring, I get to answer questions 2 through 4. But to do that in Kubernetes, I have to pass `--peer <PEER>` at pod/container creation time, and that peer info is not easily obtained dynamically. One route I took is to run a single bootstrap service I call the foreman: I take that pod's IP and use it to bootstrap the ring. It's not ideal, because I have to stand up another service and I worry about that pod dying.
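Roughly what my setup looks like, as a sketch with placeholder names: `foreman` is the Service in front of my bootstrap pod, the image name is just an example, and I'm assuming a Habitat-exported container image whose entrypoint passes extra args through to the supervisor:

```yaml
# Sketch: a pod whose supervisor peers against the "foreman" Service's
# DNS name, so I don't have to scrape the bootstrap pod's IP by hand.
apiVersion: v1
kind: Pod
metadata:
  name: my-habitat-app
  labels:
    app: my-habitat-app
spec:
  containers:
    - name: my-habitat-app
      image: myorigin/my-habitat-app   # placeholder Habitat-exported image
      args:
        - "--peer"
        # The Service name resolves to whatever pod currently backs the
        # foreman, instead of a hard-coded IP scraped with kubectl.
        - "foreman.default.svc.cluster.local"
```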
I'd like to know what others are doing with Habitat in Kubernetes, and whether anyone is exploiting the gossip ring. How are people bootstrapping their containers?