Clusters

Specify cluster requirements with the swarm section.

In addition to managing configs and secrets, the swarm section of the YAML can specify node counts and labels that must be present on the Swarm cluster where your app is running.

Below is an example of the swarm section of an application config YAML.

swarm:
  minimum_node_count: 4
  nodes:
  - role: manager
    labels:
      mongo: primary
    minimum_count: 1
  - role: worker
    labels:
      mongo: secondary
    minimum_count: 2

Minimum Node Count

When minimum_node_count is specified and the cluster contains fewer nodes than required, the Cluster page of the Replicated Console renders the node count label in orange to indicate that more nodes should be added. The label switches to green once the required number of nodes have joined the cluster. An insufficient node count will not prevent Replicated from attempting to start the application.
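The check is a simple comparison of the live node count against the configured minimum. The sketch below illustrates the logic for the example's minimum_node_count of 4; the count is stubbed so it can be read without a live daemon, and the docker command shown in the comment is what a real cluster check would use.

```shell
# Sketch: does the cluster meet the example's minimum_node_count of 4?
MINIMUM=4
# On a live swarm manager: COUNT=$(docker node ls -q | wc -l)
COUNT=3   # stubbed value for illustration
if [ "$COUNT" -lt "$MINIMUM" ]; then
  echo "insufficient: $COUNT of $MINIMUM nodes"   # label renders orange
else
  echo "ok: $COUNT nodes"                         # label renders green
fi
```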

Nodes

The swarm.nodes section is a list of requirements that apply to groups of one or more nodes. Each group will have a label at the top of the Cluster page that will render in orange or green depending on whether all requirements for the group have been met.

minimum_count

The number of nodes that should have the specified role and labels.

role

Either manager or worker.

labels

A string map. The Cluster page provides UI for adding and removing labels on individual nodes, equivalent to running docker node update --label-add and docker node update --label-rm on the command line.
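For reference, these are the CLI equivalents of the Cluster page's label UI, applied to the example's mongo labels. The node names (node1, node2) are hypothetical; the commands are stored and printed rather than executed so they can be reviewed before being run against a live swarm.

```shell
# CLI equivalents of adding/removing node labels in the Cluster page UI.
# Node names are hypothetical; run the printed commands on a swarm manager.
ADD_PRIMARY="docker node update --label-add mongo=primary node1"
ADD_SECONDARY="docker node update --label-add mongo=secondary node2"
# Removing a label takes only the key, not key=value:
RM_LABEL="docker node update --label-rm mongo node2"
echo "$ADD_PRIMARY"
echo "$ADD_SECONDARY"
echo "$RM_LABEL"
```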

Task Placement

Use Swarm’s placement constraints and preferences to assign tasks to specific nodes in a cluster. This ensures that stateful services that depend on volumes are not rescheduled to another node. When a snapshot of a volume is taken, Replicated stores the role and labels of the node the volume was on, allowing it to restore the volume to the correct node for the service when recovering from a snapshot.

services:
  mongoreplicas:
    image: mongo:3.2
    ports:
    - "27017"
    volumes:
    - mongodata:/data
    deploy:
      replicas: 2
      mode: replicated
      placement:
        constraints: [node.labels.mongo==secondary]
        preferences:
        - spread: node.labels.mongo
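The constraint above only schedules replicas on nodes labeled mongo=secondary, so the swarm section should guarantee that such nodes exist. The worker group from the earlier swarm example provides exactly that guarantee:

```yaml
swarm:
  nodes:
  - role: worker
    labels:
      mongo: secondary
    minimum_count: 2
```

With minimum_count: 2 matching replicas: 2, each replica can land on its own labeled node, and the spread preference distributes them across those nodes.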