I'm not using Kubernetes with Habitat yet, but I've tested with Docker swarm mode and hit a similar situation with peering in Habitat. One thing I've tried is passing in the DNS name of the current service, which will resolve to any peer. So if I make a service (in Kubernetes I think it's a deployment) called 'consul', I can just put '--peer consul' in the start command and it will peer with one of the consul containers.
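Roughly what that looks like as a swarm service (the image name here is just a placeholder for whatever Habitat-packaged consul image you're using, and I'm assuming its entrypoint passes extra arguments through to the supervisor):

```sh
# 'consul' is resolvable by containers on the same overlay network,
# so each replica can use the service name itself as its --peer
docker service create --name consul --replicas 3 --network my-overlay \
  myorg/consul-habitat --peer consul
```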
Depending on the service's endpoint mode in swarm, the DNS query either returns a VIP for the service, which load balances across all containers, or it returns multiple IPs in the DNS response, one for each container.
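You can see both behaviours from inside any container on the same overlay network (the `tasks.<service>` name is a swarm built-in that always returns the individual task IPs):

```sh
nslookup consul        # default (vip) endpoint mode: a single virtual IP for the service
nslookup tasks.consul  # one A record per running container

# if the service is created with --endpoint-mode dnsrr instead,
# 'consul' itself resolves to all of the task IPs
```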
The problem I ran into is that there's a fairly good chance of that DNS query resulting in you connecting to yourself, and you end up with a node that is disconnected from the rest of the ring. I suspect this could be fixed if Habitat, on finding that it has connected to itself, would try the next peer in the list, and would also try connecting to all of the IPs returned by a DNS lookup rather than just one. I didn't get a chance to test that behavior properly, however, as I ran into an unrelated issue, so it's possible Habitat does the right thing here already.
In the meantime, my temporary solution was to make multiple services (e.g. consul1, consul2, consul3) and specify
--peer consul2,consul3 when starting the consul1 container. This works for a service with a fixed number of nodes, and that service could then act as the foreman for the rest of your services later on.
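In case it helps, roughly what I mean (again the image name is just a placeholder; I only described consul1's peers above, so peering consul2 and consul3 symmetrically is my assumption about the obvious extension):

```sh
docker service create --name consul1 --network my-overlay myorg/consul-habitat --peer consul2,consul3
docker service create --name consul2 --network my-overlay myorg/consul-habitat --peer consul1,consul3
docker service create --name consul3 --network my-overlay myorg/consul-habitat --peer consul1,consul2
```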