Continuous Improvements in Ansible and Kubernetes Automation


Ansible is an ideal tool for managing many different types of Kubernetes resources. There are four key features that really help:

  • Modules and plugins for creating, updating, removing and obtaining information about Kubernetes resources
  • Templating of Kubernetes resource definitions
  • Powerful inventory system
  • Secrets management 

Together, these features enable repeatable deployment and management of applications across multiple Kubernetes clusters, potentially with a single role handling every resource.

Since the last blog post on Kubernetes features for Ansible Engine 2.6, there have been a number of improvements to Ansible’s Kubernetes capabilities. Let’s go over some of the improvements to the modules and libraries and other new features that have been added in the last year, and also highlight what is in the works.

New features

Better change management through apply mode

The k8s module now accepts an apply parameter, which approximates the behavior of kubectl apply. When apply is set to True, the k8s module will store the last applied configuration in an annotation on the object. When the object already exists, instead of just sending the new manifest to the API server, the module will now do a 3-way merge, combining the existing cluster state, the last-applied configuration, and the new requested configuration. This will allow the module to detect intentional deletions, without detecting changes due to server-side defaulting, and without overriding non-conflicting changes made to the resource. This option is available in Ansible Engine 2.9.
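As a minimal sketch (the file name and namespace are illustrative), using apply mode looks like this:

```yaml
# Hypothetical task: apply a manifest with 3-way merge semantics.
# The src path and namespace are placeholders, not from the original post.
- name: Apply a Deployment definition
  k8s:
    state: present
    apply: yes
    namespace: testing
    src: deployment.yml
```

On the first run this stores the last-applied configuration in an annotation; subsequent runs merge cluster state, last-applied state, and the new manifest.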

Immutable configuration through append_hash

The k8s module now accepts an append_hash parameter, which is only used when adding and updating ConfigMaps and Secrets.

When the append_hash parameter is set, the module takes a hash of the ConfigMap or Secret, and appends that hash to the resource’s name. By ensuring that the names of these resources change when their contents change, and updating the Deployment to reference the new ConfigMap or Secret, a change of config can now be used to enforce a change to a Deployment. This eliminates the problem of a ConfigMap or Secret being updated, but Pods not picking up the new configuration.

A helper plugin, k8s_config_resource_name, can take a resource definition and output the hashed name.
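A sketch of how these pieces fit together (the ConfigMap name and data are illustrative): create the ConfigMap with append_hash, then compute the hashed name for the Deployment to reference.

```yaml
# Illustrative ConfigMap definition; name, namespace, and data are hypothetical.
- set_fact:
    configmap_definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: example-config
        namespace: testing
      data:
        greeting: hello

# Create the ConfigMap with a content hash appended to its name.
- name: Create hashed ConfigMap
  k8s:
    definition: "{{ configmap_definition }}"
    append_hash: yes

# The helper plugin yields the hashed name (e.g. example-config-<hash>),
# which the Deployment's volumes or envFrom can then reference.
- set_fact:
    hashed_name: "{{ configmap_definition | k8s_config_resource_name }}"
```

Because the Deployment now references a new resource name whenever the contents change, updating the configuration forces a rollout.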

For further information, there’s an AnsibleFest 2018 video and an accompanying blog post showing how these processes work.

Waiting for resources

The k8s module now accepts the wait, wait_timeout, and wait_condition parameters. Using wait allows you to fail resource updates if a resource doesn't reach the expected state before wait_timeout elapses (default 120 seconds, i.e. two minutes). There are default wait conditions for Deployments, Pods, and DaemonSets, and you can also pass your own wait_condition parameter to wait for arbitrary conditions on any resource.
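For the built-in conditions, a minimal sketch (the file name is illustrative) is simply:

```yaml
# Hypothetical task: create a Deployment and wait up to 5 minutes
# for the default readiness condition to be met.
- name: Create a Deployment and wait for rollout
  k8s:
    src: deployment.yml
    wait: yes
    wait_timeout: 300
```

If the Deployment does not become ready within wait_timeout, the task fails.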


If you want to wait for a resource other than Deployments, Pods, and DaemonSets, or to wait for conditions other than readiness, you can specify your own conditions with the wait_condition parameter. This causes the task to wait until the condition you specify appears in the status.conditions of the resource you are interacting with. The conditions you can wait for vary significantly from resource to resource, not all resources implement conditions, and wait_condition blocks will almost certainly not be portable across resources. For these reasons this feature should be considered advanced, and it requires familiarity with the schema of the Kubernetes resource you are interacting with. The following example pauses a Deployment and waits for the Deployment to report that the operation has completed:

- name: Pause a Deployment
  k8s:
    definition:
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example
        namespace: testing
      spec:
        paused: True
    wait: yes
    wait_condition:
      type: Progressing
      status: Unknown
      reason: DeploymentPaused

Improved client-side validation with kubernetes-validate

Will Thames has published the kubernetes-validate python library to validate Kubernetes resources. The library is heavily based on work by Gareth Rushgrove to provide JSON Schema validation of Kubernetes schemas.

This library allows the Ansible k8s module to validate resource definitions against the schema and warn or fail if problems are detected. The main options are fail_on_error, which must be set if using validation, and strict, which flags unexpected parameters.


- k8s:
    definition: …
    validate:
      fail_on_error: yes
      strict: yes

You can also pass version as an option to validation; it defaults to the version of the Kubernetes service you're talking to. This can be useful to ensure your definitions will pass against a future version of Kubernetes.
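As a sketch, pinning the schema version might look like this (the version string and file name are illustrative):

```yaml
# Hypothetical task: validate the definition against a pinned schema
# version rather than the version reported by the cluster.
- k8s:
    src: deployment.yml
    validate:
      fail_on_error: yes
      version: "1.15"
```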

k8s_info, k8s_auth, k8s_service Modules 

k8s_info was added in version 2.7 as k8s_facts but renamed in version 2.9 due to a newly accepted module naming convention. k8s_info provides Kubernetes resource information. It’s an alternative to the k8s lookup plugin, and runs on the target host rather than the controller host. (Admittedly, these are often the same hosts.)
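A minimal sketch of querying resources with k8s_info (the label selector and namespace are illustrative):

```yaml
# Hypothetical task: look up Pods matching a label selector and
# report how many were found.
- name: Get Pods for the example app
  k8s_info:
    kind: Pod
    namespace: testing
    label_selectors:
      - app=example
  register: pod_info

- debug:
    msg: "Found {{ pod_info.resources | length }} matching Pods"
```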

k8s_auth was introduced in version 2.8 and provides the ability to authenticate to Kubernetes clusters that require explicit authentication procedures, i.e., where the client obtains a token, performs API operations with that token, and then revokes that token. An example of a Kubernetes distribution requiring this module is OpenShift, which supports logging in with a username and password through the oc utility, but until now required an already-valid token in order to interact with your cluster via Ansible.
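The obtain-use-revoke flow can be sketched as follows; the host URL and credentials are placeholders:

```yaml
# Hypothetical host and credentials. Obtain a token, use it, then revoke it.
- name: Log in to the cluster
  k8s_auth:
    host: https://k8s.example.com:8443
    username: admin
    password: "{{ cluster_password }}"
  register: auth_results

- name: Perform API operations with the obtained token
  k8s_info:
    host: https://k8s.example.com:8443
    api_key: "{{ auth_results.k8s_auth.api_key }}"
    kind: Namespace

- name: Revoke the token when finished
  k8s_auth:
    state: absent
    host: https://k8s.example.com:8443
    api_key: "{{ auth_results.k8s_auth.api_key }}"
```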

k8s_service was also introduced in version 2.8 and is a higher level interface to the v1.Service resource. It is still possible to use the k8s module and provide a definition in order to interact with these services, but this module provides a more Ansible-native interface and is able to more intelligently handle updates.
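A sketch of the more Ansible-native interface (the service name, ports, and selector are illustrative):

```yaml
# Hypothetical Service: expose port 80, routing to Pods labeled app=example.
- name: Expose the example app
  k8s_service:
    name: example
    namespace: testing
    type: ClusterIP
    ports:
      - port: 80
        targetPort: 8080
        protocol: TCP
    selector:
      app: example
```

This is equivalent to passing a full v1.Service definition to the k8s module, but with flatter, task-level parameters.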

Ansible Module | Introduced in Ansible Engine
k8s_facts **   | 2.7
k8s_auth       | 2.8
k8s_service    | 2.8

** – k8s_facts is now an alias to the k8s_info module beginning with Ansible Engine 2.9.

Custom Resource Definition support

Version 2.7 of Ansible Engine saw the addition of the merge_type parameter, while also making it largely unnecessary. Previously merge_type defaulted to strategic-merge, which was a problem for Custom Resource Definitions, which can't use strategic merges since the merge strategy is only defined for built-in resources. The initial fix was to allow merge_type to be set to merge, but now the default is the list ['strategic-merge', 'merge'], which means the module tries strategic-merge and, if that fails, tries merge. The parameter can still be overridden for other situations, including if you wish to use the json merge type.
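For older Ansible versions, or when you want to be explicit, merge_type can be set directly; the custom resource kind and fields below are hypothetical:

```yaml
# Hypothetical custom resource: force a JSON merge patch, since
# strategic merges are only defined for built-in resource types.
- name: Update a custom resource
  k8s:
    merge_type: merge
    definition:
      apiVersion: example.com/v1
      kind: ExampleWidget
      metadata:
        name: widget1
        namespace: testing
      spec:
        replicas: 3
```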

kube-resource role

Will Thames has published a kube-resource role on github which includes examples of many of the features mentioned here and is a working implementation of the ideas presented at AnsibleFest 2018.

There are tag bases that correspond to Ansible Engine versions 2.7, 2.8 and 2.9. Use the latest of the tag bases for the version of Ansible you’re using (the v2.7 tag base contains modules and module_utils that are not needed with Ansible Engine versions 2.8 or 2.9 as those fixes are included).

About the authors

Will Thames is an Infrastructure Engineer at Skedulo and has been an Ansible contributor since 2012. Fabian von Feilitzsch is a Senior Software Engineer at Red Hat focusing on automating Kubernetes with Ansible.

Originally posted on Ansible Blog
