Configurators
A configurator is a software plugin that implements an operation and applies changes to instances. There are built-in configurators for shell scripts, Ansible playbooks, Terraform configurations, and Kubernetes resources, or you can include your own as part of your blueprint.
To use a configurator, set it in the implementation field of an Operation
and set its inputs as documented below. Configurator names are case-sensitive;
if a configurator name isn't found, it is treated as an external command.
If you set an external command line directly as the implementation, Unfurl will choose the appropriate configurator to use:
if operation_host is local, it will use the Shell configurator; if it is remote,
it will use the Ansible configurator and generate a playbook that invokes the command on the remote machine:
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure: echo "abbreviated configuration"
@operation(name="configure")
def test_remote_configure(self, **kw):
    return unfurl.configurators.shell.ShellConfigurator(
        command='echo "abbreviated configuration"',
    )
test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Configurators are fairly low-level. To avoid boilerplate, you can instead use "Installer" node types that already have operations defined (see Installers below) or Artifacts that encapsulate an implementation (see Artifacts below).
Available configurators include:
Ansible
The Ansible configurator executes the given playbook. You can access the same Unfurl filters and queries available in the Ensemble manifest from inside a playbook.
Example
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation: Ansible
                inputs:
                  playbook:
                    # quote this yaml so its templates are not evaluated before we pass it to Ansible
                    q:
                      - set_fact:
                          fact1: "{{ '.name' | eval }}"
                      - name: Hello
                        command: echo "{{ fact1 }}"
                outputs:
                  fact1:
@operation(name="configure", outputs={"fact1": None})
def test_remote_configure(self, **kw):
    return unfurl.configurators.ansible.AnsibleConfigurator(
        playbook="""
        - set_fact:
            fact1: "{{ '.name' | eval }}"
        - name: Hello
          command: echo "{{ fact1 }}"
        """
    )
test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Inputs
- playbook
(required) If a string, treat it as a file path to the Ansible playbook to run; otherwise treat it as an inline playbook.
- inventory
If a string, treat it as a file path to an Ansible inventory file or directory; otherwise treat it as an inline YAML inventory. If omitted, the inventory will be generated (see below).
- arguments
A dictionary of variables that will be passed to the playbook as Ansible facts. See Arguments.
- playbookArgs
A list of strings that will be passed to ansible-playbook as command-line arguments.
- resultTemplate
Same behavior as defined for Shell, but will also include outputs as a variable.
Outputs
Keys declared as outputs are used as the names of the Ansible facts to be extracted after the playbook completes.
implementation key notes
- operation_host
If set, names the Ansible host.
- environment
If set, the environment directives will be processed and passed to the playbook's environment.
Playbook processing
The playbook input can be set to a full playbook or a list of tasks. If inventory is auto-generated and the “hosts” keyword is empty or missing from the playbook, “hosts” will be set to the host found in the auto-generated inventory, as described below.
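For example, here is a sketch of passing a complete playbook (with its own hosts keyword) rather than a bare task list, following the quoting convention of the Ansible example above:
configure:
  implementation: Ansible
  inputs:
    playbook:
      # quote so templates aren't evaluated before they are passed to Ansible
      q:
        - hosts: all  # if omitted or empty, set from the auto-generated inventory
          tasks:
            - name: ping the selected host
              ansible.builtin.ping: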
Inventory
If an inventory file isn’t specified in inputs, Unfurl will generate an Ansible inventory for the target host. The target host will be selected by searching for a node in the following order:
- The operation_host, if explicitly set.
- The current target, if it looks like a host (i.e. it has an Ansible or SSH endpoint or is a Compute resource).
- A node found by searching the current target's hostedOn relationships that looks like a host.
- Otherwise, fall back to "localhost" with a local Ansible connection.
The inventory facts for the selected host are built from the following sources:
- If the host has an endpoint of type unfurl.capabilities.Endpoint.SSH or unfurl.capabilities.Endpoint.Ansible, use that capability's host, port, connection, user, and hostvars properties.
- If there is a relationship template or connection of type unfurl.relationships.ConnectsTo.Ansible that targets the endpoint, use its credential and hostvars properties. (These can be set in the environment's Connections section.)
- If the host is declared as a member of a group of type unfurl.groups.AnsibleInventoryGroup in the service template, the group's name will be added as an Ansible group along with the contents of the group's hostvars property.
- If ansible_host wasn't previously set, it will be set to the host's public_ip or private_ip (in that order) if present, otherwise to localhost.
- If the host is a Google Compute instance, the host name will be set to INSTANCE_NAME.ZONE.PROJECT, e.g. instance-1.us-central1-a.purple-sanctum-25912. This is for compatibility with the gcloud compute config-ssh command, enabling Unfurl to use those credentials.
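For instance, here is a minimal sketch of the group-based source above (the group and member names are hypothetical):
topology_template:
  groups:
    web_servers:
      type: unfurl.groups.AnsibleInventoryGroup
      members: [my_server]
      properties:
        hostvars:
          ansible_user: admin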
Execution environment
Unfurl runs Ansible in an environment isolated from your machine's Ansible installation and will not load the Ansible configuration files in the standard locations. If you want to load an Ansible configuration file, set the ANSIBLE_CONFIG environment variable. If you want Ansible to search the standard locations, set it to an empty or invalid value, e.g. ANSIBLE_CONFIG=. (See also the Ansible configuration documentation.)
Note: Because Ansible is initialized at the beginning of execution, if the --no-runtime command option is used or if no runtime is available, ANSIBLE_CONFIG will only be applied if set in the environment that executes Unfurl. It will not be applied if set via an environment declaration.
Cmd
The Cmd configurator executes a shell command, using the Shell configurator locally or the Ansible configurator for remote execution when the operation_host is set to a remote node.
As described above, Cmd is the default configurator if none is specified.
Example
In this example, operation_host is set to a remote instance so the command is executed remotely using Ansible.
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        test_remote:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation:
                  primary: Cmd
                  operation_host: staging.example.com
                inputs:
                  cmd: echo "test"
import unfurl.configurators
import tosca
@tosca.operation(name="configure", operation_host="staging.example.com")
def test_remote_configure(self, **kw):
    return unfurl.configurators.CmdConfigurator(
        cmd='echo "test"',
    )
test_remote = tosca.nodes.Root()
test_remote.set_operation(test_remote_configure)
Delegate
The delegate configurator will delegate the current operation to the specified one.
Inputs
- operation
(required) The operation to delegate to, e.g. Standard.configure.
- target
The name of the instance to delegate to. If omitted, the current target will be used.
- inputs
Inputs to pass to the operation. If omitted, the current inputs will be used.
- when
If set, only perform the delegated operation if its value evaluates to true.
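For example, here is a minimal sketch where a node's start operation simply re-runs its configure operation:
test_node:
  type: tosca:Root
  interfaces:
    Standard:
      configure: echo "configuring"
      start:
        implementation: Delegate
        inputs:
          operation: Standard.configure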
Shell
The Shell configurator executes a shell command.
Inline shell script example
This example executes an inline shell script and uses the cwd and shell input options.
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    topology_template:
      node_templates:
        shellscript-example:
          type: tosca:Root
          interfaces:
            Standard:
              configure:
                implementation: |
                  if ! [ -x "$(command -v testvars)" ]; then
                    source testvars.sh
                  fi
                inputs:
                    cwd: '{{ "project" | get_dir }}'
                    keeplines: true
                    # our script requires bash
                    shell: '{{ "bash" | which }}'
@operation(name="configure")
def shellscript_example_configure(self, **kw):
    return unfurl.configurators.shell.ShellConfigurator(
        command='if ! [ -x "$(command -v testvars)" ]; then\n  source testvars.sh\nfi\n',
        cwd=Eval('{{ "project" | get_dir }}'),
        keeplines=True,
        shell=Eval('{{ "bash" | which }}'),
    )
shellscript_example = tosca.nodes.Root()
shellscript_example.set_operation(shellscript_example_configure)
Example with artifact
Declaring an artifact of a type that is associated with the Shell configurator ensures that Unfurl will install the artifact, if necessary, before it runs the command.
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    imports:
    - repository: unfurl
      file: tosca_plugins/artifacts.yaml
    node_types:
      artifact_example:
          derived_from: tosca:Root
          artifacts:
            ripgrep:
              type: artifact.AsdfTool
              file: ripgrep
              properties:
                version: 13.0.0
          interfaces:
            Standard:
              configure:
                implementation: 
                  primary: ripgrep
                inputs:
                  cmd: rg search
from unfurl.tosca_plugins.artifacts import artifact_AsdfTool
import tosca
class artifact_example(tosca.nodes.Root):
    ripgrep: artifact_AsdfTool = artifact_AsdfTool(
        "ripgrep",
        version="13.0.0",
        file="ripgrep",
    )
    def configure(self, **kw):
        return self.ripgrep.execute(
            cmd="rg search",
        )
Inputs
- command
(required) The command to execute. It can be either a string or a list of command arguments.
- arguments
A map of arguments to pass to the command.
- cwd
Set the current working directory to execute the command in.
- dryrun
During a dry run job, this value will either be appended to the command line or replace the string %dryrun% if it appears in the command. (%dryrun% is stripped out when running regular jobs.) If not set, the task will not be executed at all during a dry run job. (See the example below.)
- shell
If a string, the executable of the shell to execute the command in (e.g. /usr/bin/bash). A boolean indicates whether or not the command is invoked through the default shell. If omitted, it will be set to true if command is a string or false if it is a list.
- echo
A boolean that indicates whether the command's standard output (and stderr) should be echoed to Unfurl's stdout while the command is running. If omitted, defaults to true unless running with --quiet. (Doesn't affect the capture of stdout and stderr.)
- input
Optional string to pass as stdin.
- keeplines
(Default: false) If true, preserve line breaks in the given command.
- done
- outputsTemplate
A Jinja2 template or runtime expression that is processed after the shell command completes, with the same variables as resultTemplate. The template should evaluate to a map to be used as the operation's outputs, or null to skip.
- resultTemplate
A Jinja2 template or runtime expression that is processed after the shell command completes. It will have the following template variables:
Result template variables
All values will be either a string or null unless otherwise noted.
- success
True unless an error occurred or the return code wasn't 0.
- cmd
(string) The command line that was executed.
- stdout
- stderr
- returncode
(integer) Null if the process didn't complete.
- error
Set if an exception was raised.
- timeout
Null unless a timeout occurred.
The processing of the resultTemplate is equivalent to passing its resulting YAML to update_instances.
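For example, here is a sketch combining %dryrun% substitution with a resultTemplate that records the command's output as an attribute on the current instance (the command, flag, and attribute name are illustrative):
create:
  implementation: Shell
  inputs:
    # during a dry run job "--check" replaces %dryrun%;
    # during regular jobs %dryrun% is stripped out
    command: "mytool apply %dryrun%"
    dryrun: "--check"
    resultTemplate: |
      - name: .self
        attributes:
          last_output: "{{ stdout | trim }}"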
Outputs
No outputs are set unless outputsTemplate is present.
Template
The template configurator lets you implement an operation entirely within the template.
Inputs
- run
Sets the result of this task.
- dryrun
During a --dryrun job, used instead of run.
- done
If set, a map whose values are passed as arguments to unfurl.configurator.TaskView.done(). Embedded runtime expressions can access the previous value of those arguments as variables.
- resultTemplate
A Jinja2 template or runtime expression that is processed with the results of run as its variables.
Outputs
Operation outputs are set from the outputs key on the done input if present.
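For example, here is a minimal sketch that marks the task successful and sets an operation output via the done input:
configure:
  implementation: Template
  inputs:
    done:
      success: true
      outputs:
        fact1: hello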
Terraform
The Terraform configurator will be invoked on any Node Template with the type unfurl.nodes.Installer.Terraform.
It can also be used to implement any operation regardless of the node type by setting the implementation to Terraform.
It will invoke the appropriate terraform command (e.g. "apply" or "destroy") based on the job's workflow.
Unless you set the stateLocation input parameter to "remote", the Terraform configurator manages the Terraform state file itself
and commits it to the ensemble's repository, so you don't need to use Terraform's remote state. The state will be self-contained and shareable like the rest of the ensemble.
Any sensitive state will be encrypted using Ansible Vault.
During a --dryrun job the configurator will validate and generate the Terraform plan but not execute it. You can override this behavior with the dryrun_mode input parameter and you can specify dummy outputs to use with the dryrun_outputs input parameter.
You can use the unfurl.nodes.Installer.Terraform node type with your node template to avoid boilerplate and set the needed inputs.
Example
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    imports:
    - repository: unfurl
      file: tosca_plugins/artifacts.yaml
    topology_template:
      node_templates:
        terraform-example:
          type: unfurl.nodes.Installer.Terraform
          interfaces:
            defaults:
              inputs:
                tfvars:
                  tag: test
                main: |
                  variable "tag" {
                    type        = string
                  }
                  output "name" {
                    value = var.tag
                  }
            Standard:
              operations:
                configure:
import tosca
import unfurl
from unfurl.tosca_plugins.artifacts import unfurl_nodes_Installer_Terraform
import unfurl.configurators.terraform
@tosca.operation(
    name="default", apply_to=["Install.check", "Standard.configure", "Standard.delete"]
)
def terraform_example_default(self, **kw):
    return unfurl.configurators.terraform.TerraformConfigurator(
        tfvars={"tag": "test"},
        main="""
      variable "tag" {
        type  = string
      }
      output "name" {
        value = var.tag
      }""",
    )
terraform_example = unfurl_nodes_Installer_Terraform()
terraform_example.set_operation(terraform_example_default)
Inputs
- main
The contents of the root Terraform module or a path to a directory containing the Terraform configuration. If it is a directory path, the configurator will treat it as a local Terraform module. Otherwise, if main is a string it will be treated as HCL, and if it is a map, it will be written out as JSON. (See the note below about HCL in YAML.) If omitted, the configurator will look in get_dir("spec.home") for the Terraform configuration.
- tfvars
A map of Terraform variables to pass to the main Terraform module, or a string equivalent to a ".tfvars" file.
- stateLocation
If set to "secrets" (the default), the Terraform state file will be encrypted and saved into the instance's "secrets" folder. If set to "artifacts", it will be saved in the instance's "artifacts" folder with only sensitive values encrypted inline. If set to "remote", Unfurl will not manage the Terraform state at all.
- command
Path to the terraform executable. Default: "terraform".
- dryrun_mode
How to run during a dry run job. If set to "plan", just generate the Terraform plan. If set to "real", run the task without any dry run logic. Default: "plan".
- dryrun_outputs
During a dry run job, this map of outputs will be used to simulate the task's outputs (otherwise outputs will be empty).
- resultTemplate
A Jinja2 template or runtime expression that is processed with the Terraform state JSON file as its variables, as well as the result template variables documented above for Shell. See the Terraform providers' schema documentation for details; top-level keys will include "resources" and "outputs".
Outputs
Specifies which outputs defined by the Terraform module will be set as the operation's outputs. If omitted and the Terraform configuration is specified inline, all of the Terraform outputs will be included. But if a Terraform configuration directory was specified instead, its outputs need to be declared here to be exposed.
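For example, here is a sketch exposing an output from a Terraform module directory (the directory and output names are hypothetical):
configure:
  implementation: Terraform
  inputs:
    main: terraform_dir
  outputs:
    # declare each module output to expose as an operation output
    endpoint_url: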
tfvar and tfoutput Metadata
You can automatically map properties and attributes to Terraform variables and outputs by setting tfvar and tfoutput keys in the property and attribute metadata, respectively. For example:
node_types:
  ExampleTerraformManagedResource:
    derived_from: tosca.nodes.Root
    description: A type of resource that is managed (create/update/delete) by a terraform
      resource
    properties:
      example_terraform_var:
        type: string
        description: A property the UI will render for user input
        metadata:
          # TOSCA properties are conceptually similar to Terraform variables.
          # This declares that this property will be set as a terraform variable with the same name:
          tfvar: true
    attributes:
      example_terraform_output:
        type: string
        metadata:
          # TOSCA attributes are conceptually similar to Terraform outputs.
          # This declares that this attribute will be set to the terraform output with the same name:
          tfoutput: true
    interfaces:      
      Install:
        operations:
          check:
      Standard:
        operations:
          configure:
          delete:
      defaults:
        # set this operation for the basic CRUD operations in the TOSCA deploy workflow
        implementation:
          className: unfurl.configurators.terraform.TerraformConfigurator
        inputs:
          main: terraform
from unfurl.configurators.terraform import (
    TerraformConfigurator,
    TerraformInputs,
)
from unfurl.tosca_plugins.expr import tfvar, tfoutput
import tosca
class ExampleTerraformManagedResource(tosca.nodes.Root):
    """A type of resource that is managed (create/update/delete) by a terraform resource"""
    # TOSCA properties are conceptually similar to Terraform variables.
    # Set the tfvar option to indicate the property should set a Terraform variable with the same name.
    example_terraform_var: str = tosca.Property(options=tfvar)
    """A property the UI will render for user input"""
    # TOSCA attributes are conceptually similar to Terraform outputs.
    # Set the tfoutput option to indicate the attribute should be set to the terraform output with the same name.
    example_terraform_output: str = tosca.Attribute(options=tfoutput)
    # call this function for the basic CRUD operations in the TOSCA deploy workflow
    @tosca.operation(apply_to=["Install.check", "Standard.configure", "Standard.delete"])
    def default(self, **kw) -> TerraformConfigurator:
        # Implement these operations using the Terraform module found in the directory named "terraform"
        return TerraformConfigurator(TerraformInputs(main="terraform"))
Environment Variables
If the TF_DATA_DIR environment variable is not defined it will be set to .terraform relative to the current working directory.
Note on HCL in YAML
The JSON representation of Terraform's HashiCorp Configuration Language (HCL) is quite readable when serialized as YAML:
Example 1: variable declaration
variable "example" {
  default = "hello"
}
Becomes:
variable:
  example:
    default: hello
Example 2: Resource declaration
resource "aws_instance" "example" {
  instance_type = "t2.micro"
  ami           = "ami-abc123"
}
Becomes:
resource:
  aws_instance:
    example:
      instance_type: t2.micro
      ami: ami-abc123
Example 3: Resource with multiple provisioners
resource "aws_instance" "example" {
  provisioner "local-exec" {
    command = "echo 'Hello World' >example.txt"
  }
  provisioner "file" {
    source      = "example.txt"
    destination = "/tmp/example.txt"
  }
  provisioner "remote-exec" {
    inline = [
      "sudo install-something -f /tmp/example.txt",
    ]
  }
}
Multiple provisioners become a list:
resource:
  aws_instance:
    example:
      provisioner:
        - local-exec:
            command: "echo 'Hello World' >example.txt"
        - file:
            source: example.txt
            destination: /tmp/example.txt
        - remote-exec:
            inline: ["sudo install-something -f /tmp/example.txt"]
You can convert HCL to JSON and YAML using tools like hcl2json and yq, for example:
hcl2json main.tf | yq -P -oyaml
Expressing Terraform modules as YAML or JSON instead of HCL exposes the Terraform configuration in a structured way, making it easier to extend.
For example, a derived node template or artifact could add or update Terraform resources defined on the base type. In the example below, a derived type customizes its base type's main property without having to replace its entire definition.
main: "{{ '.super::main' | eval | combine(SELF.custom_changes, recursive=True, list_merge='append_rp') }}"
Here we merge the base type's main property (accessed using .super) with a property called custom_changes, using Ansible Jinja2's combine filter, which lets you recursively merge maps and lists.
Installers
Installer node types already have operations defined. You just need to import the service template containing the TOSCA type definitions and declare node templates with the needed properties and operation inputs.
Docker
Required TOSCA import: configurators/templates/docker.yaml (in the unfurl repository)
unfurl.nodes.Container.Application.Docker
TOSCA node type that represents a Docker container.
artifacts
- image
(required) An artifact of type
tosca.artifacts.Deployment.Image.Container.Docker
By default, the configurator will assume the image is in https://registry.hub.docker.com.
If the image is in a different registry, you can declare the registry as a repository and have the image artifact reference that repository, as sketched below.
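For example (the repository name and URL are hypothetical):
repositories:
  my_registry:
    url: https://registry.example.com/
# ...
artifacts:
  image:
    type: tosca.artifacts.Deployment.Image.Container.Docker
    file: myorg/myimage
    repository: my_registry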
Inputs
- configuration
A map that will be included as parameters to Ansible's Docker container module. The available parameters are enumerated here.
Example
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    imports:
    - repository: unfurl
      file: configurators/templates/docker.yaml
    topology_template:
      node_templates:
        hello-world-container:
          type: unfurl.nodes.Container.Application.Docker
          artifacts:
            image:
              type: tosca.artifacts.Deployment.Image.Container.Docker
              file: busybox
          interfaces:
            Standard:
              inputs:
                configuration:
                  command: ["echo", "hello world"]
                  detach: no
                  output_logs: yes
from unfurl.configurators.templates.docker import (
    unfurl_nodes_Container_Application_Docker,
)
import tosca
hello_world_container = unfurl_nodes_Container_Application_Docker(
    "hello-world-container",
    image=tosca.artifacts.DeploymentImageContainerDocker(
        "image",
        file="busybox",
    ),
)
hello_world_container._Standard_default_inputs = dict(
    configuration=dict(command=["echo", "hello world"], detach=False, output_logs=True)
)
DNS
The DNS installer supports nearly all major DNS providers via OctoDNS.
Required TOSCA import: configurators/templates/dns.yaml (in the unfurl repository)
unfurl.nodes.DNSZone
TOSCA node type that represents a DNS zone.
Properties
- name
(required) DNS hostname of the zone (should end with “.”).
- provider
(required) A map containing the OctoDNS provider configuration
- records
A map of DNS records to add to the zone (default: an empty map)
- exclusive
Set to true if the zone is exclusively managed by this instance (removes unrecognized records) (default: false)
Attributes
- zone
A map containing the records found in the live zone
- managed_records
A map containing the current records that are managed by this instance
unfurl.relationships.DNSRecords
TOSCA relationship type to connect a DNS record to a DNS zone. The DNS records specified here will be added, updated or removed from the zone when the relationship is established, changed or removed.
Properties
- records
(required) A map containing the DNS records to add to the zone.
Example
apiVersion: unfurl/v1.0.0
kind: Ensemble
spec:
  service_template:
    imports:
    - repository: unfurl
      file: configurators/templates/dns.yaml
    topology_template:
      node_templates:
        example_com_zone:
          type: unfurl.nodes.DNSZone
          properties:
            name: example.com.
            provider:
              # Amazon Route53 (Note: this provider requires that the zone already exists.)
              class: octodns.provider.route53.Route53Provider
        test_app:
          type: tosca.nodes.WebServer
          requirements:
            - host: compute
            - dns:
                node: example_com_zone
                relationship:
                  type: unfurl.relationships.DNSRecords
                  properties:
                    records:
                      www:
                        type: A
                        value:
                          # get the ip address of the Compute instance that this is hosted on
                          eval: .source::.requirements::[.name=host]::.target::public_address
import unfurl
import tosca
from tosca import Eval
from unfurl.configurators.templates.dns import unfurl_nodes_DNSZone, unfurl_relationships_DNSRecords
example_com_zone = unfurl_nodes_DNSZone(
    name="example.com.",
    provider={"class": "octodns.provider.route53.Route53Provider"},
)
test_app = tosca.nodes.WebServer(
    host=[tosca.find_node("compute")],
)
test_app.dns = unfurl_relationships_DNSRecords(
    records=Eval(
        {
            "www": {
                "type": "A",
                "value": {
                    "eval": ".source::.requirements::[.name=host]::.target::public_address"
                },
            }
        }
    ),
)[example_com_zone]
Helm
Requires Helm 3, which will be installed automatically if missing.
Required TOSCA import: configurators/templates/helm.yaml (in the unfurl repository)
unfurl.nodes.HelmRelease
TOSCA type that represents a Helm release. Deploying or discovering a Helm release will add to the ensemble any Kubernetes resources managed by that release.
Requirements
- host
A node template of type unfurl.nodes.K8sNamespace.
- repository
A node template of type unfurl.nodes.HelmRepository.
Properties
- release_name
(required) The name of the helm release
- chart
The name of the chart (default: the instance name)
- chart_values
A map of chart values
Inputs
All operations can be passed the following input parameters:
- flags
A list of flags to pass to the helm command.
unfurl.nodes.HelmRepository
TOSCA node type that represents a Helm repository.
Properties
- name
The name of the repository (default: the instance name)
- url
(required) The URL of the repository
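For example, here is a minimal sketch of a Helm release deployed from a repository (the repository, chart, release, and namespace names are hypothetical):
node_templates:
  bitnami_repo:
    type: unfurl.nodes.HelmRepository
    properties:
      url: https://charts.bitnami.com/bitnami
  nginx_release:
    type: unfurl.nodes.HelmRelease
    requirements:
      - repository: bitnami_repo
      - host: my_namespace  # a unfurl.nodes.K8sNamespace template
    properties:
      release_name: my-nginx
      chart: bitnami/nginx
      chart_values:
        replicaCount: 1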
Kubernetes
Use these types to manage Kubernetes resources.
unfurl.nodes.K8sCluster
TOSCA type that represents a Kubernetes cluster. Its attributes are set by introspecting the current Kubernetes connection (unfurl.relationships.ConnectsTo.K8sCluster).
There are no default implementations defined for creating or destroying a cluster.
Attributes
- apiServer
The URL used to connect to the cluster's API server.
unfurl.nodes.K8sNamespace
Represents a Kubernetes namespace. Destroying a namespace deletes any resources in it.
Derived from unfurl.nodes.K8sRawResource.
Requirements
- host
A node template of type
unfurl.nodes.K8sCluster
Properties
- name
The name of the namespace.
unfurl.nodes.K8sResource
Requirements
- host
A node template of type
unfurl.nodes.K8sNamespace
Properties
- definition
(map or string) The YAML definition for the Kubernetes resource.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster.
- name
(string) The Kubernetes name of the resource.
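For example, here is a minimal sketch of a namespaced resource (the names and definition are illustrative):
my_namespace:
  type: unfurl.nodes.K8sNamespace
  properties:
    name: demo
my_configmap:
  type: unfurl.nodes.K8sResource
  requirements:
    - host: my_namespace
  properties:
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-config
      data:
        key: value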
unfurl.nodes.K8sSecretResource
Represents a Kubernetes secret. Derived from unfurl.nodes.K8sResource.
Requirements
- host
A node template of type
unfurl.nodes.K8sNamespace
Properties
- data
(map) Name/value pairs that define the secret. Values will be marked as sensitive.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster. Data values will be marked as sensitive.
- name
(string) The Kubernetes name of the resource.
unfurl.nodes.K8sRawResource
A Kubernetes resource that isn’t part of a namespace.
Requirements
- host
A node template of type
unfurl.nodes.K8sCluster
Properties
- definition
(map or string) The YAML definition for the Kubernetes resource.
Attributes
- apiResource
(map) The YAML representation for the resource as retrieved from the Kubernetes cluster.
- name
(string) The Kubernetes name of the resource.
Supervisor
Supervisor is a lightweight process manager that is useful when you want to run local development instances of server applications.
Required TOSCA import: configurators/templates/supervisor.yaml (in the unfurl repository)
unfurl.nodes.Supervisor
TOSCA type that represents an instance of Supervisor process manager. Derived from tosca.nodes.SoftwareComponent.
properties
- homeDir
(string) The location of the Supervisor configuration directory (default: {get_dir: local}).
- confFile
(string) Name of the configuration file to create (default: supervisord.conf).
- conf
(string) The supervisord configuration. A default one will be generated if omitted.
unfurl.nodes.ProcessController.Supervisor
TOSCA type that represents a process (“program” in supervisord terminology) that is managed by a Supervisor instance. Derived from unfurl.nodes.ProcessController.
requirements
- host
A node template of type
unfurl.nodes.Supervisor.
properties
- name
(string) The name of this program.
- program
(map) A map of settings for this program.
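For example, here is a minimal sketch of a Supervisor instance managing one program; the program map corresponds to supervisord program settings, and the names and command are hypothetical:
node_templates:
  supervisor:
    type: unfurl.nodes.Supervisor
  web_process:
    type: unfurl.nodes.ProcessController.Supervisor
    requirements:
      - host: supervisor
    properties:
      name: webapp
      program:
        command: python -m http.server 8000
        autostart: true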
Artifacts
Instead of setting an operation’s Implementation to a configurator, you can set it to an artifact. Using an artifact allows you to reuse an implementation with more than one operation. For example, you can create artifacts for specific Terraform modules, Ansible playbooks, or executables.
You define an execute operation on an artifact's type or template definition to specify the inputs and outputs that can be passed to the artifact's configurator. How the inputs and outputs are used depends on the artifact's type. For example, with a Terraform module artifact, its inputs will be used as the Terraform module's variables and its outputs will be the Terraform module's outputs. With a shell executable artifact, the inputs specify the command-line arguments passed to the executable.
The example below declares an artifact that represents a shell script and shows how an operation can invoke the artifact and pass values to it.
import unfurl
import tosca
from tosca import ToscaOutputs, Attribute, Eval
class MyArtifact(unfurl.artifacts.ShellExecutable):
    file: str = "myscript.sh"
    # evaluates the script's output
    outputsTemplate = Eval("{{ stdout | from_json }}")
    class Outputs(ToscaOutputs):
        output1: str = Attribute()
        output2: int = Attribute()
    # no implementation is needed for execute() because:
    # - execute() arguments are passed as command line arguments by default
    # - MyArtifact.Outputs is constructed from the json map returned by outputsTemplate
    def execute(self, arg1: str, arg2: int) -> Outputs:
        return MyArtifact.Outputs()
class MyNode(tosca.nodes.Root):
    prop1: str
    prop2: int
    def configure(self) -> MyArtifact.Outputs:
        return MyArtifact(input="y").execute("hello", self.prop2)
my_node = MyNode(prop1="foo", prop2=1)
tosca_definitions_version: tosca_simple_unfurl_1_0_0
topology_template:
  node_templates:
    my_node:
      type: MyNode
      properties:
        prop1: foo
        prop2: 1
      metadata:
        module: docs.examples.artifact2
artifact_types:
  MyArtifact:
    derived_from: unfurl.artifacts.ShellExecutable
    properties:
      outputsTemplate:
        type: any
        required: false
        default: '{{ stdout | from_json }}'
    interfaces:
      Executable:
        type: unfurl.interfaces.Executable
        operations:
          execute:
            metadata:
              output_match:
              - Outputs
            inputs:
              arg1:
                type: string
              arg2:
                type: integer
            outputs:
              output1:
                type: string
              output2:
                type: integer
node_types:
  MyNode:
    derived_from: tosca.nodes.Root
    properties:
      prop1:
        type: string
      prop2:
        type: integer
    interfaces:
      Standard:
        operations:
          configure:
            metadata:
              output_match:
              - Outputs
              arguments:
              - arg1
              - arg2
            inputs:
              arg1: hello
              arg2:
                eval: .::prop2
            outputs:
              output1:
                type: string
              output2:
                type: integer
            implementation:
              primary:
                type: MyArtifact
                properties:
                  input: y
                file: myscript.sh
Like other TOSCA operations, when generating TOSCA YAML, Unfurl looks at the execute method's signature to determine its TOSCA inputs and outputs, so in simple cases the method can be a no-op. In more complex cases, you can use the method to validate or transform the inputs before they are passed to the configurator. In that case, call set_inputs in the method body; this forces the method to be invoked at runtime when the operation's task is created in a job's planning stage.
Or you could implement completely custom behavior by having the execute method return a run method using the pattern shown in this example.
Arguments
When a node operation invokes its implementation artifact's execute operation, Unfurl looks for an operation input named arguments to pass as the execute operation's inputs.
This should be a dictionary whose keys and values correspond to the execute operation's input specification.
If an arguments input isn't explicitly declared, it will be synthesized from the following sources (listed here from lowest to highest merge order):
- Default input values defined for the execute operation.
- Properties on the node template and on the implementation artifact, if they have matching input_match metadata keys in their definitions (see Shared Properties).
- The operation's inputs whose names are listed in the operation's arguments metadata key, if set. The Python DSL generates this based on the operation method's call to the execute method, as shown in the example above.
- If the arguments metadata key is missing, operation inputs whose names match an execute operation's input name or one of the matching property names described above.
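For example, here is a sketch declaring the arguments input explicitly instead of relying on synthesis (using the MyArtifact example above; the artifact name is hypothetical):
configure:
  implementation:
    primary: my_artifact  # an artifact of type MyArtifact
  inputs:
    arguments:
      arg1: hello
      arg2:
        eval: .::prop2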
Abstract artifacts
You can define abstract artifact types that just declare the inputs and outputs they expect by defining an artifact type with an execute operation that doesn't have an implementation declared. Artifacts can implement such a type, for example, by using multiple inheritance to inherit both the abstract artifact type and a concrete artifact type like unfurl.artifacts.TerraformModule.
This way a node type can declare operations with abstract artifacts, and node templates or a node subclass can set a concrete artifact without having to reimplement the operations that use it – with the assurance that the static type checker will verify that the operation signatures are compatible.
The example below defines a node type that specifies the abstract artifact type its configure operation will use, and a node template that uses a concrete artifact implementing the abstract artifact type.
import unfurl
import tosca
class ClusterOp(unfurl.artifacts.Executable):
    file: str = "kubernetes"
    def execute(self, prop1: str, prop2: int):
        pass
class DOCluster(tosca.nodes.Root):
    cluster_config: "ClusterOp"
    my_property: str = "default"
    def configure(self, **kw):
        return self.cluster_config.execute(prop1=self.my_property, prop2=0)
class ClusterTerraform(unfurl.artifacts.TerraformModule, ClusterOp):
    file: str = "main.tf"
mycluster = DOCluster(cluster_config=ClusterTerraform())
tosca_definitions_version: tosca_simple_unfurl_1_0_0
topology_template:
  node_templates:
    mycluster:
      type: DOCluster
      artifacts:
        cluster_config:
          type: ClusterTerraform
          file: main.tf
      metadata:
        module: docs.examples.artifact3
artifact_types:
  ClusterOp:
    derived_from: unfurl.artifacts.Executable
    interfaces:
      Executable:
        type: unfurl.interfaces.Executable
        operations:
          execute:
            inputs:
              prop1:
                type: string
              prop2:
                type: integer
  ClusterTerraform:
    derived_from:
    - unfurl.artifacts.TerraformModule
    - ClusterOp
node_types:
  DOCluster:
    derived_from: tosca.nodes.Root
    artifacts:
      cluster_config:
        type: ClusterOp
    properties:
      my_property:
        type: string
        default: default
    interfaces:
      Standard:
        operations:
          configure:
            inputs:
              prop1:
                eval: .::my_property
              prop2: 0
            metadata:
              arguments:
              - prop1
              - prop2
            implementation:
              primary: cluster_config