Thursday, March 1, 2018

Configuration Management

  • Ansible is a powerful automation engine that makes systems and apps simple to deploy.
http://www.ansible.com

  • Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
http://docs.ansible.com/ansible/latest/index.html
  • Every team in your organization will benefit from Ansible Tower. For Network teams, Ansible Tower enables:

    Security: Store Network Credentials
    Delegation: Using Role-Based Access Control (RBAC)
    Power: Leverage the Ansible Tower API
    Control: Schedule Jobs for Automated Playbook Runs
    Flexibility: Launch Job Templates Using Surveys
    Integrations: Leverage Ansible Tower Integrations like Version Control
    Compliance: Run Jobs in Check Mode for Audits
https://www.ansible.com/integrations/networks
  • AWX provides a web-based user interface, REST API, and task engine built on top of Ansible. It is the upstream project for Tower, a commercial derivative of AWX.
https://github.com/ansible/awx

  • When Ansible, Inc. was acquired by Red Hat, we told our users that we would open the source code for Ansible Tower. The AWX Project is a fulfillment of that intent.
https://www.ansible.com/products/awx-project/faq


  • What’s the difference between AWX and Ansible Tower?
AWX is designed to be a frequently released, fast-moving project where all new development happens.
Ansible Tower is produced by taking selected releases of AWX, hardening them for long-term supportability, and making them available to customers as the Ansible Tower offering.
https://www.ansible.com/products/awx-project/faq
  • The result of running the Task was "changed": false. This shows that there were no changes; I had already installed Nginx. I can run this command over and over without worrying about it affecting the desired result.
From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request. Note that while idempotent operations produce the same result on the server (no side effects), the response itself may not be the same (e.g. a resource's state may change between requests)
https://serversforhackers.com/c/an-ansible-tutorial
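The Nginx example above can be sketched as a task like the following (module and names are illustrative; the tutorial targets an Ubuntu host, hence apt):

```yaml
# A sketch of an idempotent task: running it repeatedly reports
# "changed": false once nginx is already present.
- name: Install Nginx
  apt:
    name: nginx
    state: present
```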

  • Why Ansible?
It’s agentless.  Unlike Puppet, Chef, Salt, etc., Ansible operates only over SSH (or optionally ZeroMQ), so there’s none of that crap PKI that you have to deal with when using Puppet.
It’s Python. I like Python.  I’ve been using it far longer than any other language.
It’s self-documenting.  Simple YAML files describe the playbooks and roles.
It’s feature-rich.  Some call this batteries included, but there’s over 150 modules provided out of the box, and new ones are pretty easy to write.
http://tomoconnor.eu/blogish/getting-started-ansible/#.WpUNBudRWUk

  • First of all, Ansible needs to know the hosts it’s going to manage. They can be managed on the central server in /etc/ansible/hosts or in a file configured in the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and entries can contain additional information like user names, SSH port and so on.

https://liquidat.wordpress.com/2014/02/17/howto-first-steps-with-ansible/
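A minimal /etc/ansible/hosts sketch (group names, addresses and options are made up for illustration):

```ini
[webservers]
web1.example.com
192.0.2.50 ansible_user=deploy ansible_port=2222

[dbservers]
db[01:02].example.com
```

Entries can mix host names, IP addresses and numeric ranges, with per-host variables such as the remote user or SSH port.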

  • Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible greatly expands the number of filters and tests available, as well as adding a new plugin type: lookups.
Please note that all templating happens on the Ansible controller before the task is sent and executed on the target machine.
http://docs.ansible.com/ansible/latest/playbooks_templating.html


  • Ansible has two modes of operation for reusable content: dynamic and static.

    Ansible pre-processes all static imports during Playbook parsing time.
    Dynamic includes are processed during runtime at the point in which that task is encountered.

When it comes to Ansible task options like tags and conditional statements (when:):

    For static imports, the parent task options will be copied to all child tasks contained within the import.
    For dynamic includes, the task options will only apply to the dynamic task as it is evaluated, and will not be copied to child tasks.


http://docs.ansible.com/ansible/latest/playbooks_reuse.html
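The difference can be sketched in a task list (file names are hypothetical):

```yaml
# Static: pre-processed at playbook parse time; the tag is copied
# to every task inside common.yml.
- import_tasks: common.yml
  tags: common

# Dynamic: evaluated at runtime; the tag applies to the include task
# itself and is not copied to the tasks inside extra.yml.
- include_tasks: extra.yml
  tags: extra
```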


  • Filters in Ansible are from Jinja2, and are used for transforming data inside a template expression
Take into account that templating happens on the Ansible controller, not on the task’s target host, so filters also execute on the controller as they manipulate local data.

http://docs.ansible.com/ansible/latest/playbooks_filters.html
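A couple of common filter uses, with hypothetical variables (remember the filters execute on the controller, as noted above):

```yaml
- debug:
    msg: "{{ http_port | default(80) }}"     # fall back to 80 if undefined

- debug:
    msg: "{{ users | join(', ') | upper }}"  # filters chain left to right
```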

  • Ansible works by configuring client machines from a computer with Ansible components installed and configured. It communicates over normal SSH channels in order to retrieve information from remote machines, issue commands, and copy files. Because of this, an Ansible system does not require any additional software to be installed on the client computers. This is one way that Ansible simplifies the administration of servers. Any server that has an SSH port exposed can be brought under Ansible's configuration umbrella, regardless of what stage it is at in its life cycle.
Ansible for CentOS 7

Step 1 — Installing Ansible
sudo yum install epel-release
sudo yum install ansible

Step 2 — Configuring Ansible Hosts
sudo vi /etc/ansible/hosts

Ansible will, by default, try to connect to remote hosts using your current username. If that user doesn't exist on the remote system, a connection attempt will result in an error.

Let's specifically tell Ansible that it should connect to servers in the "servers" group with the sammy user. Create a directory in the Ansible configuration structure called group_vars.
sudo mkdir /etc/ansible/group_vars

Within this folder, we can create YAML-formatted files for each group we want to configure:
sudo nano /etc/ansible/group_vars/servers

If you want to specify configuration details for every server, regardless of group association, you can put those details in a file at /etc/ansible/group_vars/all. Individual hosts can be configured by creating files under a directory at /etc/ansible/host_vars.
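The group file itself is plain YAML; a sketch of /etc/ansible/group_vars/servers for the example above:

```yaml
---
# Connect to hosts in the "servers" group as the sammy user
ansible_user: sammy
```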


Step 3 — Using Simple Ansible Commands
Ping all of the servers you configured
This is a basic test to make sure that Ansible has a connection to all of its hosts.

ansible -m ping all

You can also specify an individual host:
ansible -m ping host1
ansible -m ping host1:host2

The shell module lets us send a terminal command to the remote host and retrieve the results. For instance, to find out the memory usage on our host1 machine
ansible -m shell -a 'free -m' host1

Usage: ansible <host-pattern> [options]
-m MODULE_NAME, --module-name=MODULE_NAME

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-centos-7

  • Tags
If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook.
http://docs.ansible.com/ansible/latest/playbooks_tags.html
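A sketch of tagged tasks (names illustrative):

```yaml
tasks:
  - name: Install nginx
    yum:
      name: nginx
      state: present
    tags: packages

  - name: Deploy nginx configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    tags: configuration
```

Running `ansible-playbook site.yml --tags configuration` then executes only the second task; `--skip-tags` works the other way around.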

  • Tests in Jinja2 are a way of evaluating template expressions and returning True or False
http://docs.ansible.com/ansible/latest/playbooks_tests.html
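Tests typically appear in `when:` conditions; a sketch with hypothetical variables:

```yaml
- debug:
    msg: "http_port is set"
  when: http_port is defined            # 'defined' is a Jinja2 test

- debug:
    msg: "even worker count"
  when: (workers | int) is divisibleby 2
```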

  • Lookup plugins allow access of data in Ansible from outside sources. Like all templating, these plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services
http://docs.ansible.com/ansible/latest/playbooks_lookups.html
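Two common lookups; both read data on the controller, not on the target host:

```yaml
- debug:
    msg: "{{ lookup('file', '/etc/hostname') }}"   # read a local file

- debug:
    msg: "{{ lookup('env', 'HOME') }}"             # read a controller env var
```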

  • Let's say that we want to change the configuration of nginx. We will use the simplest way and just replace the whole nginx.conf file. Inside the template directory, create a file and name it, for instance, nginx.conf.j2. The .j2 extension indicates Jinja2, the templating language that Ansible uses.
https://blacksaildivision.com/ansible-tutorial-part-3
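The task that renders and installs such a template could be sketched like this (paths illustrative); inside nginx.conf.j2, variables are written as Jinja2 expressions such as `worker_processes {{ nginx_workers }};`:

```yaml
- name: Replace nginx.conf from a Jinja2 template
  template:
    src: templates/nginx.conf.j2
    dest: /etc/nginx/nginx.conf
```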

  • Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the template formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/).

http://docs.ansible.com/ansible/latest/template_module.html

  • Jinja2 is a modern and designer-friendly templating language for Python, modelled after Django’s templates. It is fast, widely used and secure with the optional sandboxed template execution environment:
Features:
    sandboxed execution
    powerful automatic HTML escaping system for XSS prevention
    template inheritance
    compiles down to the optimal python code just in time
    optional ahead-of-time template compilation
    easy to debug. Line numbers of exceptions directly point to the correct line in the template.
    configurable syntax
http://jinja.pocoo.org/docs/2.10/

  • A Jinja template is simply a text file. Jinja can generate any text-based format (HTML, XML, CSV, LaTeX, etc.). A Jinja template doesn’t need to have a specific extension: .html, .xml, or any other extension is just fine.
 The template syntax is heavily inspired by Django and Python.
http://jinja.pocoo.org/docs/2.10/templates/

  • You can point Ansible's configuration (ansible.cfg) to a vault-password-file that is outside of your repository.
vault_password_file = ~/.ansible_vault
https://opensource.com/article/16/12/devops-security-ansible-vault
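A sketch of the relevant ansible.cfg section:

```ini
[defaults]
vault_password_file = ~/.ansible_vault
```

With this in place, commands such as `ansible-vault encrypt`, `ansible-vault edit` and `ansible-playbook` read the vault password from that file instead of prompting.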


  • A typical use of Ansible Vault is to encrypt variable files

    Files within the group_vars directory
    A role's defaults/main.yml file
    A role's vars/main.yml file
    Any other file used to store variables.
https://serversforhackers.com/c/how-ansible-vault-works



  • Ansible is a configuration management and provisioning tool, similar to Chef, Puppet or Salt.

Ansible uses "Facts", which is system and environment information it gathers ("context") before running Tasks.

Modules
Ansible uses "modules" to accomplish most of its Tasks. Modules can do things like install software, copy files, use templates and much more.

A Handler is exactly the same as a Task (it can do anything a Task can), but it will only run when called by another Task. You can think of it as part of an Event system; A Handler will take an action when called by an event it listens for.

Roles are good for organizing multiple, related Tasks and encapsulating data needed to accomplish those Tasks.

https://serversforhackers.com/c/an-ansible2-tutorial
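The Task/Handler relationship described above can be sketched as follows (names illustrative):

```yaml
tasks:
  - name: Deploy nginx configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: restart nginx   # fires the handler only if this task changed something

handlers:
  - name: restart nginx
    service:
      name: nginx
      state: restarted
```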

  • Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier).
http://docs.ansible.com/ansible/latest/intro_configuration.html#ssh-args

  • Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks.
Describe your application in a single YAML file and, rather than using a Dockerfile, list the Ansible roles that make up your container images.
With Ansible Container, you no longer have to build and configure containers differently than you do traditional virtual machines or bare-metal systems.
You can now apply the power of Ansible and reuse your existing Ansible content for your containerized ecosystem.
http://docs.ansible.com/ansible-container/


  • It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.

Shortcomings of Dockerfiles:
1. Shell scripts embedded in Dockerfiles.
2. You can't parse Dockerfiles easily.
The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful.
https://opensource.com/article/17/10/dockerfiles-ansible-container


  • Getting Ansible Container

Prerequisites:

    Python 2.7 or Python 3.5
    pip
    setuptools 20.0.0+
http://docs.ansible.com/ansible-container/installation.html

  • Ansible Container needs to communicate with the docker service through its local socket. To allow this, change the socket's ownership or add yourself to a docker group that can access the socket.

Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks. The application is described in a single YAML file, and instead of using a Dockerfile, lists Ansible roles that make up the container images.
To install it, use the python3 virtual environment module.
Ansible Container provides three engines: Docker, Kubernetes and Openshift.
https://fedoramagazine.org/build-test-applications-ansible-container/

  • The main advantage of the Ansible Local provisioner in comparison to the Ansible (remote) provisioner is that it does not require any additional software on your Vagrant host.
On the other hand, Ansible must obviously be installed on your guest machine(s).

install (boolean) - Try to automatically install Ansible on the guest system. This option is enabled by default.
https://www.vagrantup.com/docs/provisioning/ansible_local.html

  • What’s an ad-hoc command?
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
http://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html

  • If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html
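Fact gathering is disabled per play; a minimal sketch:

```yaml
- hosts: all
  gather_facts: no    # skip the implicit fact-collection (setup) step
  tasks:
    - name: Run a quick command without facts
      command: uptime
```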


  • Galaxy is a free site for finding, downloading, and sharing community-developed roles.
Downloading roles from Galaxy is a great way to jumpstart your automation projects.
http://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html


  • Simply put, roles are a further level of abstraction that can be useful for organizing playbooks. As you add more and more functionality and flexibility to your playbooks, they can become unwieldy and difficult to maintain as a single file. Roles allow you to create very minimal playbooks that then look to a directory structure to determine the actual configuration steps they need to perform.


Organizing things into roles also allows you to reuse common configuration steps between different types of servers. This is already possible by "including" other files within a playbook, but with roles, these types of links between files are automatic based on a specific directory hierarchy.

In general, the idea behind roles is to allow you to define what a server is supposed to do, instead of having to specify the exact steps needed to get a server to act a certain way.
https://www.digitalocean.com/community/tutorials/how-to-use-ansible-roles-to-abstract-your-infrastructure-environment
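A sketch of the directory hierarchy the article refers to, and the minimal playbook that results (paths conventional, names illustrative):

```yaml
# Conventional role layout:
#   site.yml
#   roles/nginx/tasks/main.yml        <- the role's task list
#   roles/nginx/handlers/main.yml
#   roles/nginx/templates/nginx.conf.j2
#   roles/nginx/defaults/main.yml     <- default variables
#
# site.yml then only states what the server is supposed to be:
- hosts: webservers
  roles:
    - nginx
```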

  • Provisioning

Your apps have to live somewhere. Whether you’re PXE-booting and kickstarting bare-metal servers, creating virtual machines (VMs), or deploying public, private, or hybrid cloud instances from templates, Red Hat® Ansible® Automation helps streamline the process.

Configuration management
Centralizing configuration file management and deployment is a common use case

Application deployment
From development to production, playbooks make app installation, upgrades, and management repeatable and reliable

Continuous delivery
Updates without downtime
Orchestrate server configuration in batches—including load balancing, monitoring, and the availability of network or cloud services—to roll changes across your environments without disrupting business.

Security & compliance
When you define your security policy in Ansible, scanning and remediation sitewide can be integrated into other automated processes.
Scan jobs and system tracking help you immediately see any systems that deviate from defined parameters.
https://www.redhat.com/en/technologies/management/ansible/use-cases


  • Out-of-the-box, Travis-CI doesn’t support CentOS, as its test environment is Ubuntu-based. However, Travis-CI allows you to set up a Docker container and this opens up all kinds of possibilities.

It is mainly used for running tests on applications, but it has been used for infrastructure testing as well.
Travis-CI: it’s free for open source projects and it integrates nicely with Github, so that on every push and submitted pull request a test run is triggered.
During a test run, a VM is booted and the steps described in a configuration file called .travis.yml are executed. This file contains the necessary steps to configure the system, install dependencies and run the actual test code.


http://bertvv.github.io/notes-to-self/2015/12/11/testing-ansible-roles-with-travis-ci-part-1-centos/

  • Saltstack

Automation for enterprise IT ops, event-driven data center orchestration and the most flexible configuration management for DevOps at scale
https://saltstack.com

Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities.
This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.
The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems.
On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States
https://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html

  • Terminology
SaltStack uses a few keywords which represent a particular device or configuration, as explained below:
Master
This is the master instance which connects to all servers added to your SaltStack "cluster", thus also running any commands / communication to your servers.
Minion
The servers which are added to your SaltStack are called minions. Any actions are either performed on one, a group, or all of your minions.
Formula
A formula represents a file or a set of files that tells the minions which commands should be performed. This can be the installation of a single application such as nginx, or rolling out configuration files, etc.
Pillar
A pillar is a file which stores information related to a group of minions or a single minion. As an example, you would use this sort of file for storing the "Virtual-Hosts" for Nginx for a particular minion.

SaltStack is based on Python; you can easily add your own modules too, if you are fluent with the language.
https://www.vultr.com/docs/getting-started-with-saltstack-on-ubuntu-17-04
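A minimal formula/state sketch tying the terms together (file path and names illustrative):

```yaml
# /srv/salt/nginx.sls -- install nginx and keep its service running
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

The master applies it to minions with e.g. `salt '*' state.apply nginx`.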

  • Install the salt-ssh and salt-cloud packages for resource control, and the salt-doc package for documentation.
https://www.unixmen.com/install-and-configure-saltstack-server-in-ubuntu-14-04-x64/

  • You can use Salt agentless to run Salt commands on a system without installing a Salt minion. The only requirements on the remote system are SSH and Python.

When running in agentless mode, Salt:

    Connects to the remote system over SSH.
    Deploys a thin version of Salt to a temp directory, including any required files.
    Runs the specified command(s).
    (Optional) Cleans up the temp directory.

You can use Salt agentless in conjunction with a master-minion environment, or you can manage all of your systems agentless.
SaltStack is a revolutionary approach to infrastructure management that replaces complexity with speed. SaltStack is simple enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with each system in seconds.
https://docs.saltstack.com/en/getstarted/ssh/index.html
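Agentless targets are listed in a roster file; a sketch (hosts and users made up):

```yaml
# /etc/salt/roster
web1:
  host: 192.0.2.50
  user: deploy
  sudo: True
db1:
  host: 192.0.2.60
  user: root
```

After that, `salt-ssh 'web*' test.ping` works much like the standard salt command.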

  • Saltstack, a strong configuration management tool written in Python, using ZeroMQ to communicate with servers (called minions).
https://opsnotice.xyz/docker-with-saltstack/


  • We need to tell Salt that it will use the standalone mode (and not the master-client mode). For this, edit the /etc/salt/minion file:

file_client: local
https://opsnotice.xyz/docker-with-saltstack/

  • Standalone Minion
Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
    Use salt-call commands on a system without connectivity to a master
    Masterless States, run states entirely from files local to the minion
    When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail.
https://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html#tutorial-standalone-minion

  • Salt Proxy Minion

Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
https://docs.saltstack.com/en/latest/topics/proxyminion/index.html

  • Network Automation
Most network devices can be controlled only remotely, via proxy minions or Salt SSH. However, there are also vendors producing white-box equipment (e.g. Arista, Cumulus) or others that have moved the operating system into a container (e.g. Cisco NX-OS, Cisco IOS-XR), allowing the salt-minion to be installed directly on the platform.

NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open source Python library that implements a set of functions to interact with different router vendors' devices using a unified API. Being vendor-agnostic simplifies operations, as the configuration and the interaction with the network device do not rely on a particular vendor.
https://docs.saltstack.com/en/latest/topics/network_automation/index.html


  • Salt SSH
Execute salt commands and states over ssh without installing a salt-minion.
Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect to, and run salt-ssh commands in a similar way to standard salt commands.
https://docs.saltstack.com/en/latest/topics/ssh/index.html

  • For example, if you want to set up a load balancer in front of a cluster of web servers you can ensure the load balancer is set up first, and then the same matching configuration is applied consistently across the whole cluster.
Orchestration is the way to do this.
https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html
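The load balancer example can be sketched as an orchestration state (targets and SLS names are hypothetical):

```yaml
# /srv/salt/orch/deploy.sls
configure_lb:
  salt.state:
    - tgt: 'lb*'
    - sls: haproxy

configure_web:
  salt.state:
    - tgt: 'web*'
    - sls: nginx
    - require:
      - salt: configure_lb   # web servers only after the LB is set up
```

Run from the master with `salt-run state.orchestrate orch.deploy`.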

  • Beacons
Beacons let you use the Salt event system to monitor non-Salt processes. The beacon system allows the minion to hook into a variety of system processes and continually monitor these processes. When monitored activity occurs in a system process, an event is sent on the Salt event bus that can be used to trigger a reactor.
https://docs.saltstack.com/en/latest/topics/beacons/index.html
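A beacon is configured on the minion; a sketch watching a file for changes (the inotify beacon's exact schema varies between Salt releases):

```yaml
# /etc/salt/minion (or a file under minion.d/)
beacons:
  inotify:
    - files:
        /etc/important.conf:
          mask:
            - modify   # emit an event on the bus when the file is modified
```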

  • Reactor System
Salt's Reactor system gives Salt the ability to trigger actions in response to an event. It is a simple interface to watching Salt's event bus for event tags that match a given pattern and then running one or more commands in response.
https://docs.saltstack.com/en/latest/topics/reactor/index.html
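Reactions are mapped in the master configuration from event tags to SLS files; a sketch:

```yaml
# /etc/salt/master -- run a reactor SLS whenever a minion starts
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/start.sls
```

The referenced /srv/reactor/start.sls then uses the event's data (for example `{{ data['id'] }}`) to decide what to run and where.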

  • Event System
The Salt Event System is used to fire off events enabling third-party applications or external processes to react to behavior within Salt. The event system uses a publish-subscribe pattern, otherwise known as pub/sub.
https://docs.saltstack.com/en/latest/topics/event/events.html

  • Configuration Management
Salt contains a robust and flexible configuration management framework, which is built on the remote execution core.
https://docs.saltstack.com/en/latest/topics/states/index.html

  • States
    Express the state of a host using small, easy to read, easy to understand configuration files. No programming required.
    A full list of states
        Contains: a list of states that install packages, create users, transfer files, start services, and so on.
    Pillar System
        Contains: description of Salt's Pillar system.
    Highstate data structure
        Contains: a dry vocabulary and technical representation of the configuration format that states represent.
    Writing states
        Contains: a guide on how to write Salt state modules, easily extending Salt to directly manage more software.

Renderers

    Renderers use state configuration files written in a variety of languages, templating engines, or files. Salt's configuration management system is, under the hood, language agnostic
https://docs.saltstack.com/en/latest/topics/states/index.html


  • Storing Static Data in the Pillar
Pillar is an interface for Salt designed to offer global values that can be distributed to minions. Pillar data is managed in a similar way as the Salt State Tree.
https://docs.saltstack.com/en/latest/topics/pillar/index.html#pillar
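Pillar data is assigned through its own top file, mirroring the state tree; a sketch with made-up values:

```yaml
# /srv/pillar/top.sls
base:
  'web*':
    - nginx
```

```yaml
# /srv/pillar/nginx.sls
nginx:
  vhost: example.com
```

States and templates can then read the value as `{{ pillar['nginx']['vhost'] }}`, and `salt '*' pillar.items` shows what each minion received.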

  • The Salt State Tree
A state tree is a collection of SLS files and directories that live under the directory specified in file_roots.
The top.sls file is the main state file; it instructs minions what environment and modules to use during state execution.
https://docs.saltstack.com/en/latest/ref/states/highstate.html#states-highstate

  • Grains
Salt comes with an interface to derive information about the underlying system. This is called the grains interface because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties.
https://docs.saltstack.com/en/latest/topics/grains/index.html

  • Understanding Jinja
Jinja is the default templating language in SLS files.
https://docs.saltstack.com/en/latest/topics/jinja/index.html
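Jinja is rendered before the YAML is parsed, so SLS files can branch on grains; a sketch:

```yaml
# Pick the package name based on the os_family grain
{% set web_pkg = 'apache2' if grains['os_family'] == 'Debian' else 'httpd' %}

{{ web_pkg }}:
  pkg.installed
```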

  • Setting up the Salt State Tree
States are stored in text files on the master and transferred to the minions on demand via the master's File Server.
Running the Salt States and Commands in Docker Containers
Salt introduces the ability to execute the Salt States and Salt remote execution commands directly inside of Docker containers.
This makes it possible not only to deploy fresh containers using Salt States, but also to audit and modify running containers using Salt, without running a Salt Minion inside the container.
Some of the applications include security audits of running containers as well as gathering operating data from containers.
This new feature is simple and straightforward and can be used via a running Salt Minion, the Salt Call command, or via Salt SSH
https://docs.saltstack.com/en/latest/topics/tutorials/docker_sls.html#docker-sls


  • Both Salt and Ansible are originally built as execution engines. That is, they allow executing commands on one or more remote systems, in parallel if you want.
Ansible supports executing arbitrary command line commands on multiple machines. It also supports executing modules. An Ansible module is basically a Python module written in a certain Ansible friendly way. Most standard Ansible modules are idempotent. This means you tell them the state you'd want your system to be in, and the module tries to make the system look like that.
A playbook can vary the hosts' modules are executed on. This makes it possible to orchestrate multiple machines, such as take them out of load balancers before upgrading an application.


Salt has two types of modules: execution modules and state modules. Execution modules simply execute something; it could be a command line invocation, or downloading a file. A state module is more like an Ansible module, where the arguments define a state and the module tries to fulfill that end state. In general, state modules use execution modules to do most of their work.

The state system also supports defining states in files, called SLS files. Which states to apply to which hosts is defined in a top.sls file.

Both playbooks and SLS files (usually) are written in YAML.

Salt is built around a Salt master and multiple Salt minions that are connecting to the master when they boot. Generally, commands are issued on the master command line. The master then dispatches those commands out to minions.
Initially, minions initiate a handshake consisting of a cryptographic key exchange; after that, they keep a persistent encrypted TCP connection.
The minions also cache various data to make execution faster.

Ansible is masterless and uses SSH as its primary communication layer.
This means it is slower, but being masterless might make it slightly easier to set up and test Ansible playbooks.

Salt also supports using SSH instead of ZeroMQ using Salt SSH.

Ansible always uses SSH for initiating connections. This is slow. Its ZeroMQ implementation (mentioned earlier) does help, but initialization is still slow. Salt uses ZeroMQ by default, and it is _fast_.

While talking about testing... DevOps people love Vagrant. Until recently I had not worked with it. Vagrant comes with provisioning modules both for Salt and Ansible. This makes it a breeze to get up and running with a master+minion in Vagrant, or executing a playbook on startup.

Salt can run in masterless mode. This makes it easier to get it up and running. However, for production (and stability) I recommend getting an actual master up and running.
To me, Ansible was a great introduction to automated server configuration and deployment. It was easy to get up and running and has great documentation.
Moving forward, Salt has scalability, speed and architecture going for it. For cloud deployments I find the Salt architecture to be a better fit.

http://jensrantil.github.io/salt-vs-ansible.html
  • Puppet

Puppet Enterprise provides a unified approach to automation. With a single solution, you can manage heterogeneous environments—physical, virtual, or cloud—and automate the management of computing, storage, and network resources. Here are some of the integrations we provide to help you automate all aspects of your infrastructure.
http://puppetlabs.com/solutions


  • Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.

https://puppetlabs.com/puppet/what-is-puppet/

  • Chef

Chef is built to address the hardest infrastructure challenges on the planet. By modeling IT infrastructure and application delivery as code, Chef provides the power and flexibility to compete in the digital economy.
http://www.opscode.com/chef/


  • Flywaydb
Evolve your Database Schema easily and reliably across all your instances
https://flywaydb.org

  • CFEngine
Automate large-scale, complex and mission critical IT infrastructure
https://cfengine.com/
