Configurators
To use a configurator, set it in the implementation field of an Operation and set its inputs as documented below. Configurator names are case-sensitive; if a configurator name isn't found, it is treated as an external command. If you set an external command line directly as the implementation, Unfurl will choose the appropriate configurator: if operation_host is local it will use the Shell configurator; if it is remote, it will use the Ansible configurator and generate a playbook that invokes the command on the remote machine:
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure: echo "abbreviated configuration"
import unfurl.configurators.shell
import tosca

@tosca.operation(name="configure")
def test_remote_configure(self, **kw):
    return unfurl.configurators.shell.ShellConfigurator(
        command='echo "abbreviated configuration"',
    )

test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Available configurators include:
Ansible
The Ansible configurator executes the given playbook. You can access the same Unfurl filters and queries available in the Ensemble manifest from inside a playbook.
Example
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation: Ansible
                inputs:
                  playbook:
                    # quote this yaml so its templates are not evaluated before we pass it to Ansible
                    q:
                      - set_fact:
                          fact1: "{{ '.name' | eval }}"
                      - name: Hello
                        command: echo "{{ fact1 }}"
                outputs:
                  fact1:
import unfurl.configurators.ansible
import tosca

@tosca.operation(name="configure", outputs={"fact1": None})
def test_remote_configure(self, **kw):
    return unfurl.configurators.ansible.AnsibleConfigurator(
        playbook="""
          - set_fact:
              fact1: "{{ '.name' | eval }}"
          - name: Hello
            command: echo "{{ fact1 }}"
        """
    )

test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Inputs
- playbook
(required) If a string, treat as a file path to the Ansible playbook to run; otherwise treat as an inline playbook.
- inventory
If a string, treat as a file path to an Ansible inventory file or directory; otherwise treat as an inline YAML inventory. If omitted, the inventory will be generated (see below).
- extraVars
A dictionary of variables that will be passed to the playbook as Ansible facts.
- playbookArgs
A list of strings that will be passed to ansible-playbook as command-line arguments.
- resultTemplate
Same behavior as defined for Shell but will also include outputs as a variable.
Outputs
Keys declared as outputs are used as the names of the Ansible facts to be extracted after the playbook completes.
implementation key notes
- operation_host
If set, names the Ansible host.
- environment
If set, environment directives will be processed and passed to the playbook's environment.
Playbook processing
The playbook input can be set to a full playbook or a list of tasks. If the inventory is auto-generated and the "hosts" keyword is empty or missing from the playbook, "hosts" will be set to the host found in the auto-generated inventory, as described below.
Inventory
If an inventory file isn't specified in inputs, Unfurl will generate an Ansible inventory for the target host. The target host will be selected by searching for a node in the following order:

1. The operation_host if explicitly set.
2. The current target if it looks like a host (i.e. has an Ansible or SSH endpoint or is a Compute resource).
3. A node in the current target's hostedOn relationships that looks like a host.
4. Fall back to "localhost" with a local Ansible connection.
The inventory facts for the selected host are built from the following sources:

- If the host has an endpoint of type unfurl.capabilities.Endpoint.SSH or unfurl.capabilities.Endpoint.Ansible, use that capability's host, port, connection, user, and hostvars properties.
- If there is a relationship template or connection of type unfurl.relationships.ConnectsTo.Ansible that targets the endpoint, use its credential and hostvars properties. (These can be set in the environment's Connections section.)
- If the host is declared as a member of a group of type unfurl.groups.AnsibleInventoryGroup in the service template, the group's name will be added as an Ansible group along with the contents of the group's hostvars property.
- If ansible_host wasn't previously set, it will be set to the host's public_ip or private_ip (in that order) if present, otherwise to localhost.
- If the host is a Google Compute instance, the host name will be set to INSTANCE_NAME.ZONE.PROJECT, e.g. instance-1.us-central1-a.purple-sanctum-25912. This is for compatibility with the gcloud compute config-ssh command, enabling Unfurl to use those credentials.
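If auto-generation doesn't fit your setup, an inventory can also be passed inline through the inventory input. A minimal sketch, assuming a reachable host (the host name and connection settings below are illustrative):

```yaml
# Hypothetical inline inventory; host name and settings are illustrative.
configure:
  implementation: Ansible
  inputs:
    inventory:
      all:
        hosts:
          staging.example.com:      # illustrative host
            ansible_user: ubuntu    # illustrative connection setting
    playbook:
      q:
        - name: Ping the host
          ping:
```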
Execution environment
Unfurl runs Ansible in an environment isolated from your machine's Ansible installation and will not load the Ansible configuration files in the standard locations. If you want to load an Ansible configuration file, set the ANSIBLE_CONFIG environment variable. If you want Ansible to search the standard locations, set it to an empty or invalid value, e.g. ANSIBLE_CONFIG=. (See also the Ansible Configuration documentation.)

Note: Because Ansible is initialized at the beginning of execution, if the --no-runtime command option is used or no runtime is available, ANSIBLE_CONFIG will only be applied in the environment that executes Unfurl. It will not be applied if set via an environment declaration.
Cmd
The Cmd configurator executes a shell command, either using the Shell configurator described below or, if the operation_host is remote, using the Ansible configurator to run the command on the remote machine. As described above, this is the default if no configurator is specified.
Example
In this example, operation_host
is set to a remote instance so the command is executed remotely using Ansible.
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation:
                  primary: Cmd
                  operation_host: staging.example.com
                inputs:
                  cmd: echo "test"
import unfurl.configurators
import tosca

@tosca.operation(name="configure", operation_host="staging.example.com")
def test_remote_configure(self, **kw):
    return unfurl.configurators.CmdConfigurator(
        cmd='echo "test"',
    )

test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Delegate
The delegate
configurator will delegate the current operation to the specified one.
Inputs
- operation
(required) The operation to delegate to, e.g. Standard.configure
- target
The name of the instance to delegate to. If omitted the current target will be used.
- inputs
Inputs to pass to the operation. If omitted the current inputs will be used.
- when
If set, only perform the delegated operation if its value evaluates to true.
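For example, a sketch of delegating one operation to another (the node template and target instance names are illustrative):

```yaml
# Hypothetical Delegate usage; node and target names are illustrative.
node_templates:
  clone_example:
    type: tosca:Root
    interfaces:
      Standard:
        configure:
          implementation: Delegate
          inputs:
            operation: Standard.configure
            target: primary_instance   # illustrative instance name
```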
Shell
The Shell
configurator executes a shell command.
Inline shell script example
This example executes an inline shell script and uses the cwd
and shell
input options.
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        shellscript-example:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation: |
                  if ! [ -x "$(command -v testvars)" ]; then
                    source testvars.sh
                  fi
                inputs:
                  cwd: '{{ "project" | get_dir }}'
                  keeplines: true
                  # our script requires bash
                  shell: '{{ "bash" | which }}'
import unfurl.configurators.shell
import tosca
from tosca import Eval

@tosca.operation(name="configure")
def shellscript_example_configure(self, **kw):
    return unfurl.configurators.shell.ShellConfigurator(
        command='if ! [ -x "$(command -v testvars)" ]; then\n source testvars.sh\nfi\n',
        cwd=Eval('{{ "project" | get_dir }}'),
        keeplines=True,
        shell=Eval('{{ "bash" | which }}'),
    )

shellscript_example = tosca.nodes.Root()
shellscript_example.set_operation(shellscript_example_configure)
Example with artifact
Declaring an artifact of a type that is associated with the Shell configurator ensures Unfurl will install the artifact, if necessary, before running the command.
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    imports:
      - repository: unfurl
        file: tosca_plugins/artifacts.yaml
    node_types:
      artifact_example:
        derived_from: tosca:Root
        artifacts:
          ripgrep:
            type: artifact.AsdfTool
            file: ripgrep
            properties:
              version: 13.0.0
        interfaces:
          Standard:
            configure:
              implementation:
                primary: ripgrep
              inputs:
                cmd: rg search
from unfurl.tosca_plugins.artifacts import artifact_AsdfTool
import tosca

class artifact_example(tosca.nodes.Root):
    ripgrep: artifact_AsdfTool = artifact_AsdfTool(
        "ripgrep",
        version="13.0.0",
        file="ripgrep",
    )

    def configure(self, **kw):
        return self.ripgrep.execute(
            cmd="rg search",
        )
Inputs
- command
(required) The command. It can be either a string or a list of command arguments.
- cwd
Set the current working directory to execute the command in.
- dryrun
During a dry run job this will either be appended to the command line or replace the string %dryrun% if it appears in the command. (%dryrun% is stripped out when running regular jobs.) If it is not set, the task will not be executed at all during a dry run job.
- shell
If a string, the executable of the shell to execute the command in (e.g. /usr/bin/bash). A boolean indicates whether the command is invoked through the default shell or not. If omitted, it will be set to true if command is a string or false if it is a list.
- echo
(Default: true) Whether stdout (and stderr) should be echoed to Unfurl's stdout while the command is being run. (Doesn't affect the capture of stdout and stderr.)
- keeplines
(Default: false) If true, preserve line breaks in the given command.
- done
- resultTemplate
A Jinja2 template that is processed after the shell command completes. It will have the following template variables:
Result template variables
All values will be either string or null unless otherwise noted.
- success
true unless an error occurred or the returncode wasn’t 0
- cmd
(string) The command line that was executed
- stdout
- stderr
- returncode
Integer (Null if the process didn’t complete)
- error
Set if an exception was raised
- timeout
(Null unless a timeout occurred)
Processing the resultTemplate is equivalent to passing its resulting YAML to update_instances.
Outputs
No outputs are set; use a resultTemplate instead.
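As a sketch, a resultTemplate can capture the command's output as an attribute on the current instance (the "hostname" attribute name is illustrative):

```yaml
# Hypothetical resultTemplate; the "hostname" attribute is illustrative.
configure:
  implementation: Shell
  inputs:
    command: hostname
    resultTemplate: |
      {%- if success %}
      - name: .self
        attributes:
          hostname: "{{ stdout | trim }}"
      {%- endif %}
```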
Template
The template configurator lets you implement an operation entirely within the template.
Inputs
- run
Sets the result of this task.
- dryrun
During a --dryrun job, used instead of run.
- done
If set, a map whose values are passed as arguments to unfurl.configurator.TaskView.done()
- resultTemplate
A Jinja2 template that is processed with the results of run as its variables.
Outputs
Operation outputs are set from the outputs key on the done
input if present.
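A minimal sketch of a template-only operation (the expression and output name are illustrative):

```yaml
# Hypothetical Template configurator operation; values are illustrative.
configure:
  implementation: Template
  inputs:
    run: "{{ '.name' | eval }}"
    done:
      outputs:
        instance_name: "{{ '.name' | eval }}"   # illustrative output
```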
Terraform
The Terraform configurator will be invoked on any Node Template with the type unfurl.nodes.Installer.Terraform.
It can also be used to implement any operation regardless of the node type by setting the implementation
to Terraform
.
It will invoke the appropriate terraform command (e.g “apply” or “destroy”) based on the job’s workflow.
Unless you set the stateLocation input parameter to "remote", the Terraform configurator manages the Terraform state file itself and commits it to the ensemble's repository, so you don't need to use Terraform's remote state: the state will be self-contained and sharable like the rest of the ensemble. Any sensitive state will be encrypted using Ansible Vault.

During a --dryrun job the configurator will validate and generate the Terraform plan but not execute it. You can override this behavior with the dryrun_mode input parameter, and you can specify dummy outputs to use with the dryrun_outputs input parameter.

You can use the unfurl.nodes.Installer.Terraform node type with your node template to avoid boilerplate and set the needed inputs.
Example
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    imports:
      - repository: unfurl
        file: tosca_plugins/artifacts.yaml
    topology_template:
      node_templates:
        terraform-example:
          type: unfurl.nodes.Installer.Terraform
          interfaces:
            defaults:
              inputs:
                tfvars:
                  tag: test
                main: |
                  variable "tag" {
                    type = string
                  }

                  output "name" {
                    value = var.tag
                  }
            Standard:
              operations:
                configure:
import tosca
import unfurl
from unfurl.tosca_plugins.artifacts import unfurl_nodes_Installer_Terraform
import unfurl.configurators.terraform

@tosca.operation(
    name="default", apply_to=["Install.check", "Standard.configure", "Standard.delete"]
)
def terraform_example_default(self, **kw):
    return unfurl.configurators.terraform.TerraformConfigurator(
        tfvars={"tag": "test"},
        main="""
          variable "tag" {
            type = string
          }

          output "name" {
            value = var.tag
          }""",
    )

terraform_example = unfurl_nodes_Installer_Terraform()
terraform_example.set_operation(terraform_example_default)
Inputs
- main
The contents of the root Terraform module or a path to a directory containing the Terraform configuration. If it is a directory path, the configurator will treat it as a local Terraform module. Otherwise, if main is a string it will be treated as HCL, and if it is a map, it will be written out as JSON. (See the note below about HCL in YAML.) If omitted, the configurator will look in get_dir("spec.home") for the Terraform configuration.
- tfvars
A map of Terraform variables to be passed to the main Terraform module, or a string equivalent to a ".tfvars" file.
- stateLocation
If set to "secrets" (the default) the Terraform state file will be encrypted and saved into the instance's "secrets" folder. If set to "artifacts", it will be saved in the instance's "artifacts" folder with only sensitive values encrypted inline. If set to "remote", Unfurl will not manage the Terraform state at all.
- command
Path to the terraform executable. Default: "terraform"
- dryrun_mode
How to run during a dry run job. If set to "plan", just generate the Terraform plan. If set to "real", run the task without any dry run logic. Default: "plan"
- dryrun_outputs
During a dry run job, this map of outputs will be used to simulate the task's outputs (otherwise outputs will be empty).
- resultTemplate
A Jinja2 template that is processed with the Terraform state JSON file as its variables. See the Terraform providers’ schema documentation for details but top-level keys will include “resources” and “outputs”.
Outputs
Specifies which outputs defined by the Terraform module will be set as the operation's outputs. If omitted and the Terraform configuration is specified inline, all of the Terraform outputs will be included. But if a Terraform configuration directory was specified instead, its outputs need to be declared here to be exposed.
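For example, when main points at a module directory, the outputs to expose can be declared explicitly (the directory path and output name are illustrative):

```yaml
# Hypothetical: exposing outputs from a Terraform module directory.
configure:
  implementation: Terraform
  inputs:
    main: ./terraform   # illustrative module directory
  outputs:
    name:               # each key names a Terraform output to expose
```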
Environment Variables
If the TF_DATA_DIR
environment variable is not defined it will be set to .terraform
relative to the current working directory.
Note on HCL in YAML
The JSON representation of Terraform's HashiCorp Configuration Language (HCL) is quite readable when serialized as YAML:
Example 1: variable declaration
variable "example" {
  default = "hello"
}
Becomes:
variable:
  example:
    default: hello
Example 2: Resource declaration
resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "ami-abc123"
}
becomes:
resource:
  aws_instance:
    example:
      instance_type: t2.micro
      ami: ami-abc123
Example 3: Resource with multiple provisioners
resource "aws_instance" "example" {
  provisioner "local-exec" {
    command = "echo 'Hello World' >example.txt"
  }
  provisioner "file" {
    source      = "example.txt"
    destination = "/tmp/example.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo install-something -f /tmp/example.txt",
    ]
  }
}
Multiple provisioners become a list:
resource:
  aws_instance:
    example:
      provisioner:
        - local-exec:
            command: "echo 'Hello World' >example.txt"
        - file:
            source: example.txt
            destination: /tmp/example.txt
        - remote-exec:
            inline: ["sudo install-something -f /tmp/example.txt"]
Installers
Installation types already have operations defined. You just need to import the service template containing the TOSCA type definitions and declare node templates with the needed properties and operation inputs.
Docker
Required TOSCA import: configurators/templates/docker.yaml
(in the unfurl
repository)
unfurl.nodes.Container.Application.Docker
TOSCA node type that represents a Docker container.
artifacts
- image
(required) An artifact of type
tosca.artifacts.Deployment.Image.Container.Docker
By default, the configurator will assume the image is in https://registry.hub.docker.com.
If the image is in a different registry you can declare it as a repository and have the image
artifact reference that repository.
Inputs
- configuration
A map that will be included as parameters to Ansible's Docker container module. They are enumerated here.
Example
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    imports:
      - repository: unfurl
        file: configurators/templates/docker.yaml
    topology_template:
      node_templates:
        hello-world-container:
          type: unfurl.nodes.Container.Application.Docker
          artifacts:
            image:
              type: tosca.artifacts.Deployment.Image.Container.Docker
              file: busybox
          interfaces:
            Standard:
              inputs:
                configuration:
                  command: ["echo", "hello world"]
                  detach: no
                  output_logs: yes
from unfurl.configurators.templates.docker import (
    unfurl_nodes_Container_Application_Docker,
)
import tosca

hello_world_container = unfurl_nodes_Container_Application_Docker(
    "hello-world-container",
    image=tosca.artifacts.DeploymentImageContainerDocker(
        "image",
        file="busybox",
    ),
)
hello_world_container.Standard_default_inputs = dict(
    configuration=dict(command=["echo", "hello world"], detach=False, output_logs=True)
)
DNS
The DNS installer supports nearly all major DNS providers using OctoDNS.
Required TOSCA import: configurators/templates/dns.yaml
(in the unfurl
repository)
unfurl.nodes.DNSZone
TOSCA node type that represents a DNS zone.
Properties
- name
(required) DNS hostname of the zone (should end with “.”).
- provider
(required) A map containing the OctoDNS provider configuration
- records
A map of DNS records to add to the zone (default: an empty map)
- exclusive
Set to true if the zone is exclusively managed by this instance (removes unrecognized records) (default: false)
Attributes
- zone
A map containing the records found in the live zone
- managed_records
A map containing the current records that are managed by this instance
unfurl.relationships.DNSRecords
TOSCA relationship type to connect a DNS record to a DNS zone. The DNS records specified here will be added, updated or removed from the zone when the relationship is established, changed or removed.
Properties
- records
(required) A map containing the DNS records to add to the zone.
Example
apiVersion: unfurl/v1alpha1
kind: Ensemble
spec:
  service_template:
    imports:
      - repository: unfurl
        file: configurators/templates/dns.yaml
    topology_template:
      node_templates:
        example_com_zone:
          type: unfurl.nodes.DNSZone
          properties:
            name: example.com.
            provider:
              # Amazon Route53 (Note: this provider requires that the zone already exists.)
              class: octodns.provider.route53.Route53Provider
        test_app:
          type: tosca.nodes.WebServer
          requirements:
            - host: compute
            - dns:
                node: example_com_zone
                relationship:
                  type: unfurl.relationships.DNSRecords
                  properties:
                    records:
                      www:
                        type: A
                        value:
                          # get the ip address of the Compute instance that this is hosted on
                          eval: .source::.requirements::[.name=host]::.target::public_address
import unfurl
import tosca
from tosca import Eval
from unfurl.configurators.templates.dns import (
    unfurl_nodes_DNSZone,
    unfurl_relationships_DNSRecords,
)

example_com_zone = unfurl_nodes_DNSZone(
    name="example.com.",
    provider={"class": "octodns.provider.route53.Route53Provider"},
)
test_app = tosca.nodes.WebServer(
    host=[tosca.find_node("compute")],
)
test_app.dns = unfurl_relationships_DNSRecords(
    records=Eval(
        {
            "www": {
                "type": "A",
                "value": {
                    "eval": ".source::.requirements::[.name=host]::.target::public_address"
                },
            }
        }
    ),
)[example_com_zone]
Helm
Requires Helm 3, which will be installed automatically if missing.
Required TOSCA import: configurators/templates/helm.yaml
(in the unfurl
repository)
unfurl.nodes.HelmRelease
TOSCA type that represents a Helm release. Deploying or discovering a Helm release will add to the ensemble any Kubernetes resources managed by that release.
Requirements
- host
A node template of type
unfurl.nodes.K8sNamespace
- repository
A node template of type
unfurl.nodes.HelmRepository
Properties
- release_name
(required) The name of the helm release
- chart
The name of the chart (default: the instance name)
- chart_values
A map of chart values
Inputs
All operations can be passed the following input parameters:
- flags
A list of flags to pass to the
helm
command
unfurl.nodes.HelmRepository
TOSCA node type that represents a Helm repository.
Properties
- name
The name of the repository (default: the instance name)
- url
(required) The URL of the repository
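Putting these types together, a sketch of a Helm release (the repository URL, chart name, and values are illustrative):

```yaml
# Hypothetical HelmRelease templates; names, URL, and values are illustrative.
node_templates:
  bitnami_repo:
    type: unfurl.nodes.HelmRepository
    properties:
      url: https://charts.bitnami.com/bitnami   # illustrative URL
  nginx_release:
    type: unfurl.nodes.HelmRelease
    requirements:
      - repository: bitnami_repo
      - host: my_namespace   # an unfurl.nodes.K8sNamespace template
    properties:
      release_name: my-nginx
      chart: nginx
      chart_values:
        replicaCount: 1
```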
Kubernetes
Use these types to manage Kubernetes resources.
unfurl.nodes.K8sCluster
TOSCA type that represents a Kubernetes cluster. Its attributes are set by introspecting the current Kubernetes connection (unfurl.relationships.ConnectsTo.K8sCluster
).
There are no default implementations defined for creating or destroying a cluster.
Attributes
- apiServer
The url used to connect to the cluster’s api server.
unfurl.nodes.K8sNamespace
Represents a Kubernetes namespace. Destroying a namespace deletes any resources in it.
Derived from unfurl.nodes.K8sRawResource
.
Requirements
- host
A node template of type
unfurl.nodes.K8sCluster
Properties
- name
The name of the namespace.
unfurl.nodes.K8sResource
Requirements
- host
A node template of type
unfurl.nodes.K8sNamespace
Properties
- definition
(map or string) The YAML definition for the Kubernetes resource.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster.
- name
(string) The Kubernetes name of the resource.
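For instance, a ConfigMap might be declared like this (the namespace template and data values are illustrative):

```yaml
# Hypothetical K8sResource template; names and data are illustrative.
node_templates:
  app_config:
    type: unfurl.nodes.K8sResource
    requirements:
      - host: my_namespace   # an unfurl.nodes.K8sNamespace template
    properties:
      definition:
        apiVersion: v1
        kind: ConfigMap
        metadata:
          name: app-config
        data:
          LOG_LEVEL: info
```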
unfurl.nodes.K8sSecretResource
Represents a Kubernetes secret. Derived from unfurl.nodes.K8sResource
.
Requirements
- host
A node template of type
unfurl.nodes.K8sNamespace
Properties
- data
(map) Name/value pairs that define the secret. Values will be marked as sensitive.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster. Data values will be marked as sensitive.
- name
(string) The Kubernetes name of the resource.
unfurl.nodes.K8sRawResource
A Kubernetes resource that isn’t part of a namespace.
Requirements
- host
A node template of type
unfurl.nodes.K8sCluster
Properties
- definition
(map or string) The YAML definition for the Kubernetes resource.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster.
- name
(string) The Kubernetes name of the resource.
Supervisor
Supervisor is a light-weight process manager that is useful when you want to run local development instances of server applications.
Required TOSCA import: configurators/templates/supervisor.yaml
(in the unfurl
repository)
unfurl.nodes.Supervisor
TOSCA type that represents an instance of Supervisor process manager. Derived from tosca.nodes.SoftwareComponent
.
properties
- homeDir
(string) The location of the Supervisor configuration directory (default: {get_dir: local})
- confFile
(string) Name of the configuration file to create (default: supervisord.conf)
- conf
(string) The supervisord configuration. A default one will be generated if omitted.
unfurl.nodes.ProcessController.Supervisor
TOSCA type that represents a process (“program” in supervisord terminology) that is managed by a Supervisor instance. Derived from unfurl.nodes.ProcessController
.
requirements
- host
A node template of type
unfurl.nodes.Supervisor
.
properties
- name
(string) The name of this program.
- program
(map) A map of settings for this program.
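Putting the two types together, a sketch (the program settings are illustrative):

```yaml
# Hypothetical Supervisor templates; program settings are illustrative.
node_templates:
  supervisor:
    type: unfurl.nodes.Supervisor
  web_app:
    type: unfurl.nodes.ProcessController.Supervisor
    requirements:
      - host: supervisor
    properties:
      name: web_app
      program:
        command: python -m http.server 8000   # illustrative command
        autostart: true
```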