Binds, exports, and gossip, oh my!



I’ve been doing some work with the core/consul plan as it does some awesome things, but I need it to do some other things. Also, as this is my first real foray into Habitat, it provides the perfect vehicle for learning about Habitat!

Which brings me to today’s question.

How do I enable Habitat to autodiscover peers that I want to “consume”? And can a package be both a “producer” and a “consumer” within the same plan?

Crash course lesson in Consul:

Consul can run in either “server” or “client” mode. The difference is that “servers” participate in gossip, cluster leader election, etc., where “clients” do not. In practice, servers are started with the -server command line parameter. Either way, when run in a cluster, consul needs the -retry-join parameter to tell it about at least one other node to connect to. (see #1514 for some context)
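To make that concrete, here's a hedged sketch of the two modes straight from the consul CLI (the IPs and data dir are placeholders, not values from this thread):

```sh
# Server mode: joins gossip, participates in leader election
consul agent -server -bootstrap-expect 3 \
  -retry-join 10.0.0.2 -data-dir /tmp/consul

# Client mode: same -retry-join, but no -server flag
consul agent -retry-join 10.0.0.2 -data-dir /tmp/consul
```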

What I did there was:

{{#eachAlive svc.members as |member| ~}}
  -retry-join {{member.sys.ip}} \
{{/eachAlive ~}}

If my understanding is correct, this looks for other nodes that have been marked as a --peer to habitat, e.g. when I start my hab sup, I do a hab sup --peer and when I start my service as hab svc start core/consul --group production this starts consul in the consul.production group. Thus when my plan is compiled, my members are resolved to IPs based on who’s running the core/consul package in the production group.

This seems to work as I need it (with the exception that there seems to be no way to exclude svc.member.self, but that’s part of a larger conversation).

Now I want to start consul as a client. So my first thought was, I’ve got my cluster started with the above hook… so I should be able to just add --bind consul:consul.production, right? But it doesn’t seem to work that way…

How about an example:

Consul Cluster:

  • Server node 1 -
  • Server node 2 -
  • Server node 3 -
  • Client node 1 -

Nodes are started with:

# node 1:
hab start core/consul --topology leader --group production --strategy rolling --peer --peer
# node 2:
hab start core/consul --topology leader --group production --strategy rolling --peer --peer
# node 3:
hab start core/consul --topology leader --group production --strategy rolling --peer --peer

This spins up, and generates a run hook that looks like:

#!/bin/bash -xe

exec 2>&1


if [ "$SERVERMODE" = true ] ; then
  exec consul agent -ui \
          -retry-join \
          -retry-join \
          -retry-join \
          -server -bootstrap-expect 3
else
  exec consul agent -dev
fi

At this point, somewhere there’s a service defined as consul and it has three nodes, right? (It would be really nice to be able to access this information via hab sup commands)
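As an aside on that wish: as far as I know the supervisor already exposes its view of the ring over an HTTP gateway on port 9631 (endpoint name from memory here, so treat it as an assumption):

```sh
# Hedged: dump the supervisor's census of service groups and members
curl -s http://localhost:9631/census | head
```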

So I was thinking, if we updated the run hook, I could have the hook iterate through the bindings, a la:

  {{~#if cfg.server.mode ~}} -server -bootstrap-expect {{cfg.bootstrap.expect}}
  {{#eachAlive svc.members as |member| ~}}
    -retry-join {{member.sys.ip}}
  {{/eachAlive ~}}
  {{/if ~}}
  {{~#eachAlive bind.consul.members as |member| ~}}
    -retry-join {{member.sys.ip}}
  {{~/eachAlive ~}}

And start my client as:

hab start core/consul --bind consul:consul.production --peer

This would mean that if the following were set (e.g. in my user.toml, to match cfg.server.mode):

[server]
mode = false

then the stanza that includes the -server -bootstrap-expect parameters would be excluded from my run hook.

But my bind.consul.members don’t seem to resolve during the compile phase…

(The big caveat here is that the client node IP address should never appear as a -retry-join parameter. So there’s never a point where -retry-join would be acceptable as part of the startup command for either servers or clients.)

Does the plan need to define pkg_exports?

Could I do a:


Would that allow me to iterate through #eachAlive bind.consul.members and start my consul client as hab start core/consul --bind consul:consul.production --peer or what would that syntax look like?
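For what it’s worth, here’s a hedged sketch of what the plan.sh declarations might look like — the export and bind names below are my assumptions, not copied from the actual core/consul plan (and Habitat’s build script declares these arrays for you, so no `declare -A` appears in a real plan.sh):

```sh
# plan.sh fragment (hypothetical names)

# Exports publish config values to anyone who binds to this service group
pkg_exports=(
  [port]=ports.server
)

# Binds declare what this package requires from a service it binds to;
# a consumer started with `--bind consul:consul.production` needs a
# group that exports these keys
pkg_binds=(
  [consul]="port"
)
```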



Maybe the answer is that we need a consul and a consul-client plan. But I’m still having trouble grokking how I’d have the “client” autodiscover its peers/how binding to a service works exactly.


Based on this thread, I think I’m on the right track here… but that only seems to allow binding to ports?


I’m far from a consul expert, but here is my $0.02.

I would consider having a consul-client package, which depends on consul, and has just the logic to run the client. You then can use the bindings to do precisely what you propose above - you start consul-client, bind it to the servers, and move along.

With the server logic, you could just include the leader as the retry-join flags for the followers, and on the leader include just the followers. That would make sure you never include your own IP.
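Either way, the invariant — a node’s own IP must never appear in its own -retry-join list — can be sketched in plain shell (all IPs here are hypothetical):

```shell
#!/bin/bash
# Hedged illustration: build the -retry-join flags from a member list
# while skipping this node's own IP, so a node never tries to join itself.
SELF_IP="10.0.0.1"                            # assumed: this node's sys.ip
MEMBERS=("10.0.0.1" "10.0.0.2" "10.0.0.3")    # assumed: alive member IPs

ARGS=""
for ip in "${MEMBERS[@]}"; do
  [ "$ip" = "$SELF_IP" ] && continue          # exclude svc.member.self
  ARGS+=" -retry-join $ip"
done
echo "$ARGS"
```

This is the same exclusion the template language can’t yet express via svc.member.self, per the earlier post.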

So on the client, you would start hab start consul-client --bind consul:consul.production, and you wind up with two service groups - one for the servers, one for the clients.
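A hedged sketch of what the consul-client run hook could look like, assuming a bind named consul as proposed above (template variable names are my best guess, not a tested hook):

```
#!/bin/bash
exec 2>&1

exec consul agent \
{{#eachAlive bind.consul.members as |member| ~}}
  -retry-join {{member.sys.ip}} \
{{/eachAlive ~}}
  -data-dir {{pkg.svc_data_path}}
```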


It works! It works!!!

I think some more documentation is in order, but I’m at the airport and my battery is nearly dead. The gist of it is: build the consul server cluster, then start the consul client.

# If you haven't already, export core/consul to a docker container
hab studio enter  # Required on a Mac and probably Windows; can be skipped on Linux
hab pkg export docker core/consul

# Start your consul cluster:
##  Terminal 1
docker run -p8500:8500 -e HAB_CONSUL='{ "server": { "bind": "" }, "client": { "bind": "" }}' core/consul --topology leader

## Terminal 2 & 3+
docker run -e HAB_CONSUL='{ "server": { "bind": "" }}' core/consul --peer --topology leader

# Export qubitrenegade/consul-client to a docker container (this can also be pulled from docker hub)
hab studio enter
hab pkg export docker qubitrenegade/consul-client

# Start your consul-client:
docker run -it qubitrenegade/consul-client --peer --bind consul-server:consul.default

(I haven’t tested this using native hab, but I think it should work?!..)
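For completeness, the untested native-hab equivalent would presumably look something like this (placeholder peer IP, same bind as the docker run above):

```sh
# Server nodes (followers peer to an existing server)
hab start core/consul --topology leader --peer <server-ip>

# Client node, binding to the server group
hab start qubitrenegade/consul-client --peer <server-ip> --bind consul-server:consul.default
```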