Python API

API for writing configurators

class unfurl.configurator.Configurator(configurationSpec)
can_dry_run(task)

Returns whether this configurator can handle a dry run for the given task. (If so, it should check TaskView.dry_run during run().)

Parameters

task (TaskView) –

Returns

bool

can_run(task)

Return whether the configurator can execute the given task, depending on whether it supports the requested action and parameters and on the current state of the target instance.

Parameters

task (TaskView) –

Returns

Should return True or a message describing why the task couldn’t be run.

Return type

(bool or str)

check_digest(task, changeset)

Examine the previous ChangeRecord generated by the previous time this operation was performed on the target instance and return whether it should be rerun or not.

The default implementation recalculates the digest of input parameters that were accessed in the previous run.

Parameters
  • task (TaskView) –

  • changeset (ChangeRecord) –

Returns

True if configuration’s digest has changed, False if it is the same.

Return type

bool

exclude_from_digest = ()
get_generator(task)
render(task)

This method is called during the planning phase to give the configurator an opportunity to do early validation and error detection and to generate any plan information or configuration files that the user may want to review before running the deployment task.

Property access and writes will be tracked and used to establish dynamic dependencies between instances so the plan can be ordered properly. Any updates made to an instance may be reverted if the instance has dependencies on attributes that might change later in the plan, so this method should be idempotent.

Returns

The value returned here will subsequently be available as task.rendered

run(task)

This should perform the operation specified in the ConfigurationSpec on the task.target.

Parameters

task (TaskView) –

Yields

Should yield either a JobRequest, TaskRequest or a ConfiguratorResult when done
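The can_run/run/done contract described above can be sketched standalone. StubTask and StubResult below are hypothetical stand-ins for unfurl's TaskView and ConfiguratorResult, used only so the example runs by itself; a real configurator would subclass unfurl.configurator.Configurator.

```python
# Standalone sketch of the Configurator contract: can_run() returns True
# (or a string explaining why the task can't run) and run() is a generator
# that finishes by yielding task.done(...).

class StubResult:
    """Hypothetical stand-in for ConfiguratorResult."""
    def __init__(self, success, modified):
        self.success = success
        self.modified = modified

class StubTask:
    """Hypothetical stand-in for TaskView."""
    dry_run = False

    def done(self, success, modified=False):
        # In unfurl, run() yields task.done(...) as its final value.
        return StubResult(success, modified)

class EchoConfigurator:
    """A toy analogue of a Configurator subclass."""

    def can_run(self, task):
        # Return True, or a message describing why the task can't be run.
        return True

    def run(self, task):
        # run() may also yield JobRequests or TaskRequests before done().
        if task.dry_run:
            # can_dry_run() configurators should check task.dry_run here.
            yield task.done(True, modified=False)
        else:
            print("echo: hello")
            yield task.done(True, modified=True)

task = StubTask()
result = next(EchoConfigurator().run(task))
print(result.success, result.modified)  # True True
```

The same generator shape applies when yielding a TaskRequest from create_sub_task(): the runtime executes the yielded request and resumes the generator.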

save_digest(task)

Generate a compact, deterministic representation of the current configuration. This is saved in the job log and used by check_digest in subsequent jobs to determine whether the configuration changed and the operation needs to be re-run.

The default implementation calculates a SHA1 digest of the values of the inputs that were accessed while that task was run, with the exception of the input parameters listed in exclude_from_digest.

Parameters

task (TaskView) –

Returns

A dictionary whose keys are strings that start with “digest”

Return type

dict

classmethod set_config_spec_args(kw, target)
Parameters

kw (dict) –

short_name = None

short_name can be used to customize the “short name” of the configurator as an alternative to using its full name (“module.class”) when setting the implementation on an operation. (Title case is recommended.)

should_run(task)

Does this configuration need to be run?

class unfurl.configurator.JobRequest(resources, errors=None, update=None)

Yield this to run a child job.

get_instance_specs()
property name
property root
property target
class unfurl.configurator.TaskRequest(configSpec, target, reason, persist=False, required=None, startState=None)

Yield this to run a child task. (see unfurl.configurator.TaskView.create_sub_task())

get_operation_artifacts()
property name
update_future_dependencies(completed)
class unfurl.configurator.TaskView(manifest, configSpec, target, reason=None, dependencies=None)

The interface presented to configurators.

The following public attributes are available:

target

The instance this task is operating on.

reason

The reason this operation was planned. See Reason

Type

str

cwd

Current working directory

Type

str

dry_run

Dry run only

Type

bool

verbose

Verbosity level set for this job (-1 error, 0 normal, 1 verbose, 2 debug)

Type

int

add_dependency(expr, expected=None, schema=None, name=None, required=True, wantList=False, target=None)
add_message(message)
apply_work_folders(*names)
create_sub_task(operation=None, resource=None, inputs=None, persist=False, required=None)

Create a subtask that will be executed if yielded by run()

Parameters
  • operation (str) – The operation call (like interface.operation)

  • resource (NodeInstance) –

Returns

TaskRequest

discard_work_folders()
done(success=None, modified=None, status=None, result=None, outputs=None, captureException=None)

run() should call this method and yield its return value before terminating.

>>> yield task.done(True)
Parameters
  • success (bool) – indicates if this operation completed without an error.

  • modified (bool) – (optional) indicates whether the physical instance was modified by this operation.

  • status (Status) – (optional) should be set if the operation changed the operational status of the target instance. If not specified, the runtime will update the instance status as needed, based on the operation performed and observed changes to the instance (attributes changed).

  • result (dict) – (optional) A dictionary that will be serialized as YAML into the changelog; can contain any useful data about this operation.

  • outputs (dict) – (optional) Operation outputs, as specified in the topology template.

Returns

ConfiguratorResult

fail_work_folders()
find_connection(target, relation='tosca.relationships.ConnectsTo')
find_instance(name)
get_environment(addOnly)

Return a dictionary of environment variables applicable to this task.

Parameters

addOnly (bool) – If addOnly is False, all variables in the current OS environment will be included; otherwise only the variables added will be included.

Returns

dict:

Variable sources (by order of preference, lowest to highest):

  1. The ensemble’s environment

  2. Variables set by the connections that are available to this operation

  3. Variables declared in the operation’s environment section
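That lowest-to-highest precedence can be illustrated as a plain layered merge (the variable names below are made up for illustration):

```python
# Layered merge mirroring get_environment's documented precedence:
# ensemble environment < connection variables < operation environment.
ensemble_env = {"STAGE": "dev", "REGION": "us-east-1"}
connection_vars = {"REGION": "eu-west-1", "TOKEN": "abc"}
operation_env = {"TOKEN": "xyz"}

# Later dicts win, so higher-preference sources go last.
env = {**ensemble_env, **connection_vars, **operation_env}
print(env)  # {'STAGE': 'dev', 'REGION': 'eu-west-1', 'TOKEN': 'xyz'}
```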

get_settings()
get_work_folder(location=None)
property inputs

Exposes inputs and task settings as expression variables, so they can be accessed like:

eval: $inputs::param

or in jinja2 templates:

{{ inputs.param }}

query(query, dependency=False, name=None, required=False, wantList=False, resolveExternal=True, strict=True, vars=None, throw=False)
remove_dependency(name)
sensitive(value)

Mark the given value as sensitive. Sensitive values will be encrypted or redacted when output.

Returns

A copy of the value converted to the appropriate subtype of unfurl.logs.sensitive, or the value itself if it can’t be converted.

Return type

sensitive

set_work_folder(location='operation', preserve=None)
Return type

unfurl.projectpaths.WorkFolder

update_instances(instances)

Notify Unfurl of new instances or changes to existing instances made while the configurator was running.

Operational status indicates if the instance currently exists or not. This will queue a new child job if needed.

- name:     aNewInstance
  template: aNodeTemplate
  parent:   HOST
  attributes:
     anAttribute: aValue
  readyState:
    local: ok
    state: state
- name:     SELF
  attributes:
      anAttribute: aNewValue
Parameters

instances (list or str) – Either a list or string that is parsed as YAML.

Returns

To run the job based on the supplied spec immediately, yield the returned JobRequest.

Return type

JobRequest

property vars

A dictionary of the same variables that are available to expressions when evaluating inputs.

Internal classes supporting the runtime.

class unfurl.support.NodeState(value)

An enumeration.

configured = 5
configuring = 4
created = 3
creating = 2
deleted = 11
deleting = 10
error = 12
initial = 1
started = 7
starting = 6
stopped = 9
stopping = 8
class unfurl.support.Priority(value)

An enumeration.

critical = 3
ignore = 0
optional = 1
required = 2
class unfurl.support.Reason
add = 'add'
degraded = 'degraded'
error = 'error'
force = 'force'
missing = 'missing'
prune = 'prune'
reconfigure = 'reconfigure'
run = 'run'
update = 'update'
upgrade = 'upgrade'
class unfurl.support.Status(value)

An enumeration.

absent = 5
degraded = 2
error = 3
ok = 1
pending = 4
unknown = 0
class unfurl.result.ChangeRecord(jobId=None, startTime=None, taskId=0, previousId=None, parse=None)

A ChangeRecord represents a job or task in the change log file. It consists of a change ID and named attributes.

A change ID is an identifier with this sequence of 12 characters:

  • “A” serves as a format version identifier

  • 7 alphanumeric characters (0-9, A-Z, and a-z) encoding the date and time the job ran

  • 4 hexadecimal digits encoding the task id

classmethod format_log(changeId, attributes)

format: changeid\tkey=value\tkey=value (tab-separated)

log(attributes=None)

changeid\tkey=value\tkey=value (tab-separated)
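The 12-character change-ID layout and the tab-separated log line can be sketched as follows. The base-62 timestamp encoding here is a hypothetical stand-in for unfurl's actual date/time encoding; only the overall shape ("A" + 7 alphanumeric characters + 4 hex digits) follows the description above:

```python
# Sketch: build a change ID ("A" + 7 base-62 chars + 4 hex digits for the
# task id) and format a tab-separated changelog line from it.
import string

ALPHABET = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 62 chars

def encode_base62(n, width):
    """Encode integer n as a fixed-width base-62 string."""
    chars = []
    for _ in range(width):
        n, rem = divmod(n, 62)
        chars.append(ALPHABET[rem])
    return "".join(reversed(chars))

def make_change_id(timestamp, task_id):
    return "A" + encode_base62(timestamp, 7) + format(task_id, "04x")

def format_log(change_id, attributes):
    # changeid<TAB>key=value<TAB>key=value
    return "\t".join([change_id] + [f"{k}={v}" for k, v in attributes.items()])

change_id = make_change_id(1650000000, 3)
line = format_log(change_id, {"status": "ok"})
print(change_id)
print(line)
```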

Runtime module

This module defines the core model and implements the runtime operations of the model.

The state of the system is represented as a collection of Instances. Each instance has a status; attributes that describe its state; and a TOSCA template which describes its capabilities, relationships and available interfaces for configuring and interacting with it.

class unfurl.runtime.Operational

This is an abstract base class for Jobs, Resources, and Configurations. They all have a Status associated with them and all use the same algorithm to compute their status from their dependent resources, tasks, and configurations.

static aggregate_status(statuses, seen)

Returns: ok, degraded, pending or None

If there are no instances, return None. If any required are not operational, return pending or error. If any other are not operational or degraded, return degraded. Otherwise return ok. (Instances with priority set to “ignore” are ignored.)
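The aggregation rule just described can be sketched standalone. The status names follow the Status and Priority enums documented below; the Dep dependency objects are hypothetical:

```python
# Sketch of aggregate_status's documented rule: no instances -> None;
# any required not operational -> pending or error; any other not fully
# ok -> degraded; otherwise ok. "Operational" means ok or degraded.
from dataclasses import dataclass

@dataclass
class Dep:
    status: str          # "ok", "degraded", "error", "pending", ...
    required: bool = True
    ignore: bool = False  # Priority.ignore

def aggregate_status(deps):
    deps = [d for d in deps if not d.ignore]  # ignored instances drop out
    if not deps:
        return None
    for d in deps:
        if d.required and d.status not in ("ok", "degraded"):
            return "error" if d.status == "error" else "pending"
    for d in deps:
        if d.status != "ok":
            return "degraded"
    return "ok"

print(aggregate_status([Dep("ok"), Dep("degraded", required=False)]))  # degraded
```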

get_operational_dependencies()

Return an iterator of the Operational objects that this instance depends on to be operational.

has_changed(changeset)

Whether or not this object changed since the given ChangeRecord.

class unfurl.runtime.OperationalInstance(status=None, priority=None, manualOveride=None, lastStateChange=None, lastConfigChange=None, state=None)

A concrete implementation of Operational

get_operational_dependencies()

Return an iterator of the Operational objects that this instance depends on to be operational.

property local_status

The local_status property.

property manual_overide_status

The manualOverideStatus property.

property priority

The priority property.

property state

The state property.

APIs for controlling Unfurl

Localenv module

Classes for managing the local environment.

Repositories can optionally be organized into projects that have a local configuration.

By convention, the “home” project defines a localhost instance and adds it to its context.

class unfurl.localenv.LocalEnv(manifestPath=None, homePath=None, parent=None, project=None, can_be_empty=False, override_context=None)

This class represents the local environment that an ensemble runs in, including the local project it is part of and the home project.

find_git_repo(repoURL, revision=None)
find_or_create_working_dir(repoURL, revision=None, basepath=None)
find_path_in_repos(path, importLoader=None)

If the given path is part of the working directory of a git repository, return that repository and a path relative to it.

find_project(testPath)

Walk parents looking for unfurl.yaml

get_context(context=None)

Return a new context that merges the given context with the local context.

get_external_manifest(location)
get_local_instance(name, context)
get_manifest(path=None, skip_validation=False)
get_paths()

If asdf is installed, build a PATH list from the .tool-versions files found in the current project and the home project.

get_project(path, homeProject)
get_runtime()
get_vault_password(vaultId='default')
map_value(val)

Evaluate using project home as a base dir.

parent = None
project = None
class unfurl.localenv.Project(path, homeProject=None)

An Unfurl project is a folder that contains a local configuration file (unfurl.yaml) and one or more ensemble.yaml files, which may optionally be organized into one or more git repositories.

create_working_dir(gitUrl, ref=None)
find_ensemble_by_name(name)
find_ensemble_by_path(path)
find_git_repo(repoURL, revision=None)
find_git_repo_from_repository(repoSpec)
find_or_clone(repo)
find_or_create_working_dir(repoURL, revision=None)
static find_path(testPath, stopPath=None)

Walk parents looking for unfurl.yaml

find_path_in_repos(path, importLoader=None)

If the given path is part of the working directory of a git repository, return that repository and a path relative to it.

get_asdf_paths(asdfDataDir, toolVersions={})
get_context(contextName, context=None)
get_default_context()
get_default_project_path(context_name)
get_managed_project(location, localEnv)
get_manifest_path(localEnv, manifestPath, can_be_empty)
static get_name_from_dir(projectRoot)
get_relative_path(path)
get_unique_path(name)
get_vault_password(contextName=None, vaultId='default')
get_vault_passwords(contextName=None)
is_path_in_project(path)
make_vault_lib(contextName=None)
property name
static normalize_path(path)
register_ensemble(manifestPath, *, project=None, managedBy=None, context=None)
search_for_manifest(can_be_empty)
property venv

Job module

A Job is generated by comparing a list of specs with the last known state of the system. A Job runs tasks, each of which has a configuration spec that is executed on the running system. Each task tracks and records its modifications to the system’s state.

class unfurl.job.ConfigChange(parentJob=None, startTime=None, status=None, previousId=None, **kw)

Represents a configuration change made to the system. It has an operational status and a list of dependencies that contribute to its status. There are two kinds of dependencies:

  1. Live resource attributes that the configuration’s inputs depend on.

  2. Other configurations and resources it relies on to function properly.

get_operational_dependencies()

Return an iterator of the Operational objects that this instance depends on to be operational.

class unfurl.job.Job(manifest, rootResource, jobOptions, previousId=None)

runs ConfigTasks and child Jobs

can_run_task(task)

Checked at runtime right before each task is run

  • validate inputs

  • check pre-conditions to see if it can be run

  • check task if it can be run

get_operational_dependencies()

Return an iterator of the Operational objects that this instance depends on to be operational.

run_task(task, depth=0)

During each task run:

  • Notification of metadata changes that reflect changes made to resources

  • Notification of adding or removing a dependency on a resource or properties of a resource

  • Notification of creation or deletion of a resource

  • Requests a resource with requested metadata; if it doesn’t exist, a task is run to make it so (e.g. add a DNS entry, install a package).

Returns a task.

should_run_task(task)

Checked at runtime right before each task is run

class unfurl.job.JobOptions(**kw)

Options available to select which tasks are run, e.g. read-only

Does the config apply to the operation? Is it out of date? Is it in an ok state?

unfurl.job.run_job(manifestPath=None, _opts=None)

Loads the given Ensemble and creates and runs a job.

Parameters
  • manifestPath (str, optional) – If None, it will look for an ensemble in the current working directory.

  • _opts (dict, optional) – the names of the command line options for creating jobs.

Returns

The job that just ran or None if it couldn’t be created.

Return type

(Job)

Init module

This module implements creating and cloning projects and ensembles as well as Unfurl runtimes.

unfurl.init.clone(source, dest, ensemble_name='ensemble', **options)

Clone the source ensemble to dest. If dest isn’t in a project, create one. source can point to an ensemble_template, a service_template, an existing ensemble, or a folder containing one of those. If it points to a project, its default ensemble will be cloned.

Referenced repositories will be cloned if they are git repositories, or copied if they are regular file folders. If the folders already exist, they will be copied to a new folder unless the git repositories have the same HEAD; the local repository names will remain the same.

dest                       result
Inside source project      new ensemble
missing or empty folder    clone project, new or cloned ensemble
another project            new or cloned ensemble with source as spec
non-empty folder           error

Utility classes and functions

class unfurl.logs.sensitive

Base class for marking a value as sensitive. Depending on the context, sensitive values will either be encrypted or redacted when output.

exception unfurl.util.UnfurlError(message, saveStack=False, log=False)
exception unfurl.util.UnfurlTaskError(task, message, log=40)
unfurl.util.filter_env(rules, env=None, addOnly=False)

Applies the given list of rules to a dictionary of environment variables and returns a new dictionary.

Parameters
  • rules (dict) – A dictionary of rules for adding, removing and filtering environment variables.

  • env (dict, optional) – The environment to apply the given rules to. If env is None it will be set to os.environ. Defaults to None.

  • addOnly (bool, optional) – If addOnly is False (the default), all variables in env will be included in the returned dict; otherwise only variables added by rules will be included.

Rules are applied in the order they are declared in the rules dictionary. The following examples show the different patterns for the rules:

foo : bar

Add foo=bar

+foo

Copy foo from the current environment

+foo : bar

Copy foo, or add foo=bar if it is not present

+!foo*

Copy all names from the current environment except those matching foo*

-!foo

Remove all names except for foo

^foo : /bar/bin

Treat foo like PATH and prepend /bar/bin:$foo
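These rule patterns can be sketched as a simplified reimplementation. It covers only the forms shown above, using fnmatch-style wildcards; unfurl's actual implementation may differ in details:

```python
# Simplified sketch of filter_env's rule patterns:
# "foo: bar", "+foo", "+foo: bar", "+!pat", "-!name", "^foo: dir".
from fnmatch import fnmatchcase

def filter_env(rules, env, add_only=False):
    result = {} if add_only else dict(env)
    for key, value in rules.items():
        if key.startswith("+!"):        # copy everything except matches
            pat = key[2:]
            for name, val in env.items():
                if not fnmatchcase(name, pat):
                    result[name] = val
        elif key.startswith("+"):       # copy from env, with optional default
            name = key[1:]
            if name in env:
                result[name] = env[name]
            elif value is not None:
                result[name] = value
        elif key.startswith("-!"):      # remove all names except this one
            keep = key[2:]
            result = {n: v for n, v in result.items() if n == keep}
        elif key.startswith("^"):       # PATH-like prepend
            name = key[1:]
            old = env.get(name, "")
            result[name] = f"{value}:{old}" if old else value
        else:                           # plain add
            result[key] = value
    return result

env = {"FOO": "1", "FOOBAR": "2", "PATH": "/usr/bin"}
print(filter_env({"+FOO": None, "^PATH": "/bar/bin", "BAZ": "3"}, env, add_only=True))
```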

class unfurl.util.sensitive_bytes

Transparent wrapper class to mark bytes as sensitive

decode(*args, **kwargs)

Wrapper method to ensure type conversions maintain sensitive context

class unfurl.util.sensitive_dict

Transparent wrapper class to mark a dict as sensitive

class unfurl.util.sensitive_list(iterable=(), /)

Transparent wrapper class to mark a list as sensitive

class unfurl.util.sensitive_str

Transparent wrapper class to mark a str as sensitive

encode(*args, **kwargs)

Wrapper method to ensure type conversions maintain sensitive context
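The transparent-wrapper idea behind these classes can be sketched in a few lines. This is a simplified illustration of how encode/decode can preserve the sensitive marking, not unfurl's implementation:

```python
# Simplified sketch of transparent sensitive wrappers: subclasses of str
# and bytes whose type conversions keep the sensitive marking.
class sensitive:
    """Marker base class for sensitive values."""

class sensitive_bytes(bytes, sensitive):
    def decode(self, *args, **kwargs):
        # Decoding sensitive bytes yields a sensitive str.
        return sensitive_str(super().decode(*args, **kwargs))

class sensitive_str(str, sensitive):
    def encode(self, *args, **kwargs):
        # Encoding a sensitive str yields sensitive bytes.
        return sensitive_bytes(super().encode(*args, **kwargs))

secret = sensitive_str("hunter2")
encoded = secret.encode("utf-8")
print(isinstance(encoded, sensitive), encoded.decode("utf-8"))  # True hunter2
```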

Eval module

Public API:

map_value - returns a copy of the given value resolving any embedded queries or template strings

Ref.resolve - given an expression, returns a ResultList
Ref.resolve_one - given an expression, returns a value, None, or a (regular) list
Ref.is_ref - returns true if the given dictionary looks like a Ref

Internal:

eval_ref() - given an expression (string or dictionary), returns a list of Result
Expr.resolve() - given an expression string, returns a list of Result
Results._map_value - same as map_value but with lazy evaluation

class unfurl.eval.Ref(exp, vars=None, trace=None)

A Ref object describes a path to metadata associated with a resource.

resolve(ctx, wantList=True, strict=True)

If wantList=True (the default), returns a ResultList of matches. Note that values in the list can be a list or None. If wantList=False, uses resolve_one semantics. If wantList=’result’, returns a Result.

resolve_one(ctx, strict=True)

If there is no match, return None. If there is more than one match, return a list of matches. Otherwise return the single match.

Note: If you want to distinguish between None values and no match, or between a single match that is a list and a list of matches, use resolve(), which always returns a (possibly empty) list of matches.
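The difference between the two return shapes can be illustrated over a plain list of matches (toy code, not unfurl's evaluator):

```python
# Toy illustration of the documented return shapes:
# resolve() always returns a (possibly empty) list of matches, while
# resolve_one() returns None, a single match, or a list of matches.
def resolve(matches):
    return list(matches)

def resolve_one(matches):
    if not matches:
        return None
    if len(matches) == 1:
        return matches[0]
    return list(matches)

print(resolve([]), resolve_one([]))  # [] None
print(resolve_one([42]))             # 42
print(resolve_one([1, 2]))           # [1, 2]
# resolve() lets you tell a matched None apart from "no match":
print(resolve([None]), resolve_one([None]))  # [None] None
```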

class unfurl.eval.RefContext(currentResource, vars=None, wantList=False, resolveExternal=False, trace=None, strict=True, task=None)

The context of the expression being evaluated.

unfurl.eval.eval_ref(val, ctx, top=False)

val is assumed to be an expression, evaluate and return a list of Result