- Ansible is a powerful automation engine that makes systems and apps simple to deploy.
- Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
- Every team in your organization will benefit from Ansible Tower. For Network teams, Ansible Tower enables:
Security: Store Network Credentials
Delegation: Using Role-Based Access Control (RBAC)
Power: Leverage the Ansible Tower API
Control: Schedule Jobs for Automated Playbook Runs
Flexibility: Launch Job Templates Using Surveys
Integrations: Leverage Ansible Tower Integrations like Version Control
Compliance: Run Jobs in Check Mode for Audits
https://www.ansible.com/integrations/networks
- AWX provides a web-based user interface, REST API, and task engine built on top of Ansible. It is the upstream project for Tower, a commercial derivative of AWX.
- When Ansible, Inc. was acquired by Red Hat, we told our users that we would open the source code for Ansible Tower. The AWX Project is a fulfillment of that intent.
- What’s the difference between AWX and Ansible Tower?
Ansible Tower is produced by taking selected releases of AWX, hardening them for long-term supportability, and making them available to customers as the Ansible Tower offering.
https://www.ansible.com/products/awx-project/faq
- The result of running the Task was "changed": false. This shows that there were no changes; I had already installed Nginx. I can run this command over and over without worrying about it affecting the desired result.
https://serversforhackers.com/c/an-ansible-tutorial
- Why Ansible?
It’s Python. I like Python. I’ve been using it far longer than any other language.
It’s self-documenting: simple YAML files describe the playbooks and roles.
It’s feature-rich. Some call this batteries included, but there are over 150 modules provided out of the box, and new ones are pretty easy to write.
http://tomoconnor.eu/blogish/getting-started-ansible/#.WpUNBudRWUk
- First of all, Ansible needs to know the hosts it’s going to manage. They can be listed on the central server in /etc/ansible/hosts or in a file named by the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and entries can carry additional information like user names, SSH port, and so on.
https://liquidat.wordpress.com/2014/02/17/howto-first-steps-with-ansible/
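As a sketch of what such a hosts file can look like (the group name, hostnames, and addresses below are made up):

```ini
# /etc/ansible/hosts -- illustrative inventory entries
[servers]
host1 ansible_host=203.0.113.10
host2 ansible_host=203.0.113.11 ansible_port=2222 ansible_user=sammy
```

The per-host variables after each name (connection address, SSH port, remote user) are optional; a bare hostname or IP per line is enough to start.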
- Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible greatly expands the number of filters and tests available, as well as adding a new plugin type: lookups.
http://docs.ansible.com/ansible/latest/playbooks_templating.html
- Ansible has two modes of operation for reusable content: dynamic and static.
Ansible pre-processes all static imports during Playbook parsing time.
Dynamic includes are processed during runtime at the point in which that task is encountered.
When it comes to Ansible task options like tags and conditional statements (when:):
For static imports, the parent task options will be copied to all child tasks contained within the import.
For dynamic includes, the task options will only apply to the dynamic task as it is evaluated, and will not be copied to child tasks.
http://docs.ansible.com/ansible/latest/playbooks_reuse.html
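A minimal playbook fragment contrasting the two forms (the task file names and the tag are assumptions):

```yaml
- hosts: all
  tasks:
    # Static: pre-processed at playbook parse time; the tag is copied
    # to every task inside common.yml.
    - import_tasks: common.yml
      tags: [common]

    # Dynamic: evaluated at runtime when this task is reached; per the
    # docs above, the option applies to the include itself rather than
    # being copied to its child tasks.
    - include_tasks: debian.yml
      when: ansible_os_family == "Debian"
```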
- Filters in Ansible are from Jinja2, and are used for transforming data inside a template expression
http://docs.ansible.com/ansible/latest/playbooks_filters.html
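A couple of common filters in action (the variable names and values here are illustrative):

```yaml
- debug:
    msg: "{{ app_port | default(8080) }}"    # fall back when the variable is undefined
- debug:
    msg: "{{ users | join(', ') | upper }}"  # chain filters to transform a list
```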
- Ansible works by configuring client machines from a computer with Ansible components installed and configured. It communicates over normal SSH channels in order to retrieve information from remote machines, issue commands, and copy files. Because of this, an Ansible system does not require any additional software to be installed on the client computers. This is one way that Ansible simplifies the administration of servers. Any server that has an SSH port exposed can be brought under Ansible's configuration umbrella, regardless of what stage it is at in its life cycle.
Step 1 — Installing Ansible
sudo yum install epel-release
sudo yum install ansible
Step 2 — Configuring Ansible Hosts
sudo vi /etc/ansible/hosts
Ansible will, by default, try to connect to remote hosts using your current username. If that user doesn't exist on the remote system, a connection attempt will result in an authentication error.
Let's specifically tell Ansible that it should connect to servers in the "servers" group with the sammy user. Create a directory in the Ansible configuration structure called group_vars.
sudo mkdir /etc/ansible/group_vars
Within this folder, we can create YAML-formatted files for each group we want to configure:
sudo nano /etc/ansible/group_vars/servers
If you want to specify configuration details for every server, regardless of group association, you can put those details in a file at /etc/ansible/group_vars/all. Individual hosts can be configured by creating files under a directory at /etc/ansible/host_vars.
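For the example above, a minimal group_vars file that makes Ansible connect to the "servers" group as sammy might look like this (newer Ansible releases use ansible_user; older ones used ansible_ssh_user):

```yaml
# /etc/ansible/group_vars/servers
---
ansible_user: sammy
```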
Step 3 — Using Simple Ansible Commands
Ping all of the servers you configured
This is a basic test to make sure that Ansible has a connection to all of its hosts.
ansible -m ping all
You can also specify an individual host:
ansible -m ping host1
ansible -m ping host1:host2
The shell module lets us send a terminal command to the remote host and retrieve the results. For instance, to find out the memory usage on our host1 machine
ansible -m shell -a 'free -m' host1
Usage: ansible <host-pattern> [options]
-m MODULE_NAME, --module-name=MODULE_NAME
https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-centos-7
- Tags
- Tests in Jinja2 are a way of evaluating template expressions and returning True or False
- Lookup plugins allow access of data in Ansible from outside sources. Like all templating, these plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services
- Let's say that we want to change the configuration of Nginx. The simplest way is to just replace the whole nginx.conf file. Inside the templates directory, create a file and name it, for instance, nginx.conf.j2. The .j2 extension denotes the Jinja2 templating language that Ansible uses.
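A sketch of what such a nginx.conf.j2 could contain (the variable names here are assumptions, each with a fallback via the default filter; ansible_processor_vcpus is a gathered fact):

```jinja
# templates/nginx.conf.j2
user              {{ nginx_user | default('nginx') }};
worker_processes  {{ ansible_processor_vcpus | default(1) }};

events {
    worker_connections  {{ nginx_worker_connections | default(1024) }};
}
```

The template module then renders this on the control machine and copies the result to, say, /etc/nginx/nginx.conf on the target.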
- Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the template formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/).
http://docs.ansible.com/ansible/latest/template_module.html
- Jinja2 is a modern and designer-friendly templating language for Python, modelled after Django’s templates. It is fast, widely used and secure with the optional sandboxed template execution environment:
sandboxed execution
powerful automatic HTML escaping system for XSS prevention
template inheritance
compiles down to the optimal python code just in time
optional ahead-of-time template compilation
easy to debug. Line numbers of exceptions directly point to the correct line in the template.
configurable syntax
http://jinja.pocoo.org/docs/2.10/
- A Jinja template is simply a text file. Jinja can generate any text-based format (HTML, XML, CSV, LaTeX, etc.). A Jinja template doesn’t need to have a specific extension: .html, .xml, or any other extension is just fine.
http://jinja.pocoo.org/docs/2.10/templates/
- You can point Ansible's configuration (ansible.cfg) to a vault-password-file that is outside of your repository.
https://opensource.com/article/16/12/devops-security-ansible-vault
- A typical use of Ansible Vault is to encrypt variable files
Files within the group_vars directory
A role's defaults/main.yml file
A role's vars/main.yml file
Any other file used to store variables.
https://serversforhackers.com/c/how-ansible-vault-works
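An illustrative Vault workflow over such a variables file (the paths are examples):

```shell
ansible-vault create group_vars/servers/vault.yml   # create a new encrypted file
ansible-vault edit group_vars/servers/vault.yml     # decrypt, edit, re-encrypt
ansible-playbook site.yml --ask-vault-pass          # supply the password at run time
```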
- Ansible is a configuration management and provisioning tool, similar to Chef, Puppet or Salt.
Ansible uses "Facts", which is system and environment information it gathers ("context") before running Tasks.
Modules
Ansible uses "modules" to accomplish most of its Tasks. Modules can do things like install software, copy files, use templates and much more.
A Handler is exactly the same as a Task (it can do anything a Task can), but it will only run when called by another Task. You can think of it as part of an Event system; A Handler will take an action when called by an event it listens for.
Roles are good for organizing multiple, related Tasks and encapsulating data needed to accomplish those Tasks.
https://serversforhackers.com/c/an-ansible2-tutorial
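Putting those pieces together, a sketch of a Task that notifies a Handler (the names are illustrative):

```yaml
tasks:
  - name: Deploy nginx configuration
    template:
      src: nginx.conf.j2
      dest: /etc/nginx/nginx.conf
    notify: Restart nginx        # fires only when this task reports "changed"

handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted
```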
- Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier).
- Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks.
With Ansible Container, you no longer have to build and configure containers differently than you do traditional virtual machines or bare-metal systems
You can now apply the power of Ansible and re-use your existing Ansible content for your containerized ecosystem.
http://docs.ansible.com/ansible-container/
- It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.
1. Shell scripts embedded in Dockerfiles.
2. You can't parse Dockerfiles easily.
The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful.
https://opensource.com/article/17/10/dockerfiles-ansible-container
- Getting Ansible Container
Prerequisites:
Python 2.7 or Python 3.5
pip
setuptools 20.0.0+
http://docs.ansible.com/ansible-container/installation.html
- Ansible Container needs to communicate with the docker service through its local socket. The following commands change the socket ownership, and add you to a docker group that can access the socket:
Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks. The application is described in a single YAML file, and instead of using a Dockerfile, lists Ansible roles that make up the container images.
To install it, use the python3 virtual environment module.
Ansible Container provides three engines: Docker, Kubernetes and Openshift.
https://fedoramagazine.org/build-test-applications-ansible-container/
- The main advantage of the Ansible Local provisioner in comparison to the Ansible (remote) provisioner is that it does not require any additional software on your Vagrant host.
install (boolean) - Try to automatically install Ansible on the guest system. This option is enabled by default.
https://www.vagrantup.com/docs/provisioning/ansible_local.html
- What’s an ad-hoc command?
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
http://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html
- If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms
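Turning fact gathering off is a one-line change at the play level; a minimal sketch:

```yaml
- hosts: all
  gather_facts: false    # skip the setup step; facts such as
  tasks:                 # ansible_os_family will be undefined in this play
    - ping:
```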
- Galaxy, is a free site for finding, downloading, and sharing community developed roles.
http://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html
- Simply put, roles are a further level of abstraction that can be useful for organizing playbooks. As you add more and more functionality and flexibility to your playbooks, they can become unwieldy and difficult to maintain as a single file. Roles allow you to create very minimal playbooks that then look to a directory structure to determine the actual configuration steps they need to perform.
Organizing things into roles also allows you to reuse common configuration steps between different types of servers. This is already possible by "including" other files within a playbook, but with roles, these types of links between files are automatic based on a specific directory hierarchy.
In general, the idea behind roles is to allow you to define what a server is supposed to do, instead of having to specify the exact steps needed to get a server to act a certain way.
https://www.digitalocean.com/community/tutorials/how-to-use-ansible-roles-to-abstract-your-infrastructure-environment
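A sketch of the directory hierarchy the article describes (the role and file names are illustrative); Ansible finds roles/webserver/tasks/main.yml automatically when the playbook names the role:

```
site.yml                    # minimal playbook: maps hosts to roles
roles/
  webserver/
    tasks/main.yml          # the steps the role performs
    handlers/main.yml       # handlers the tasks can notify
    templates/              # Jinja2 templates used by the tasks
    defaults/main.yml       # overridable default variables
```

site.yml then shrinks to a couple of lines: a hosts pattern plus a roles list naming webserver.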
- Provisioning
Your apps have to live somewhere. Whether you’re PXE-booting and kickstarting bare-metal servers, creating virtual machines (VMs), or deploying public, private, or hybrid cloud instances from templates, Red Hat® Ansible® Automation helps streamline the process.
Configuration management
Centralizing configuration file management and deployment is a common use case
Application deployment
From development to production, playbooks make app installation, upgrades, and management repeatable and reliable
Continuous delivery
Updates without downtime
Orchestrate server configuration in batches—including load balancing, monitoring, and the availability of network or cloud services—to roll changes across your environments without disrupting business.
Security & compliance
When you define your security policy in Ansible, sitewide scanning and remediation can be integrated into other automated processes
Scan jobs and system tracking help you immediately see any systems that deviate from defined parameters.
https://www.redhat.com/en/technologies/management/ansible/use-cases
- Out-of-the-box, Travis-CI doesn’t support CentOS, as its test environment is Ubuntu-based. However, Travis-CI allows you to set up a Docker container and this opens up all kinds of possibilities.
It is mainly used for running tests on applications, but it has also been used for infrastructure testing.
Travis-CI: it’s free for open source projects and it integrates nicely with GitHub, so that on every push and submitted pull request a test run is triggered
During a test run, a VM is booted and the steps defined in a file called .travis.yml are executed. This contains the necessary steps to configure the system, install dependencies, and run the actual test code.
http://bertvv.github.io/notes-to-self/2015/12/11/testing-ansible-roles-with-travis-ci-part-1-centos/
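A sketch of such a .travis.yml for exercising a role inside a CentOS container, assuming the role ships its own test script (the image tag and paths are made up):

```yaml
sudo: required
services:
  - docker
before_install:
  - docker pull centos:7
script:
  # run the role's test script inside a CentOS container
  - docker run --rm -v "$PWD:/role" centos:7 /role/tests/run.sh
```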
- Saltstack
Automation for enterprise IT ops, event-driven data center orchestration and the most flexible configuration management for DevOps at scale
https://saltstack.com
Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities.
This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.
The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems.
On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States
https://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html
- Terminology
Master
This is the master instance, which connects to all servers added to your SaltStack "cluster" and handles any commands and communication to your servers.
Minion
The servers which are added to your SaltStack are called minions. Any actions are either performed on one, a group, or all of your minions.
Formula
A formula is a file or a set of files that tells the minions which commands should be performed. This can be the installation of a single application such as Nginx, rolling out configuration files, etc.
Pillar
A pillar is a file which stores information related to a group of minions or a single minion. As an example, you would use this sort of file for storing the "Virtual-Hosts" for Nginx for a particular minion.
SaltStack is based on Python, so you can easily add your own modules if you are fluent in the language.
https://www.vultr.com/docs/getting-started-with-saltstack-on-ubuntu-17-04
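A minimal formula illustrating the terminology above (the file path and names are examples); it installs Nginx and keeps the service running:

```yaml
# /srv/salt/nginx/init.sls
nginx:
  pkg.installed: []
  service.running:
    - require:
      - pkg: nginx
```

Applied to minions from the master with something like `salt 'web*' state.apply nginx`.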
- Install the salt-ssh and salt-cloud packages for resource control; for documentation, install the salt-doc package.
- You can use Salt agentless to run Salt commands on a system without installing a Salt minion. The only requirements on the remote system are SSH and Python.
When running in agentless mode, Salt:
Connects to the remote system over SSH.
Deploys a thin version of Salt to a temp directory, including any required files.
Runs the specified command(s).
(Optional) Cleans up the temp directory.
You can use Salt agentless in conjunction with a master-minion environment, or you can manage all of your systems agentless.
SaltStack is a revolutionary approach to infrastructure management that replaces complexity with speed. SaltStack is simple enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with each system in seconds.
https://docs.saltstack.com/en/getstarted/ssh/index.html
- Saltstack, a strong configuration management tool written in Python, uses ZeroMQ to communicate with its servers (called minions).
- We need to tell the minion to use standalone mode (and not the master-client mode). To do this, edit the /etc/salt/minion file:
file_client: local
https://opsnotice.xyz/docker-with-saltstack/
- Standalone Minion
Use salt-call commands on a system without connectivity to a master
Masterless States, run states entirely from files local to the minion
When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail.
https://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html#tutorial-standalone-minion
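Sketch of the masterless setup: point the minion's file client at local files, then apply states with salt-call (the state tree contents are up to you):

```yaml
# /etc/salt/minion
file_client: local

# Then, with the salt-minion daemon NOT running, apply states locally:
#   sudo salt-call --local state.apply
```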
- Salt Proxy Minion
Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
https://docs.saltstack.com/en/latest/topics/proxyminion/index.html
- Network Automation
NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open source Python library that implements a set of functions to interact with router devices from different vendors using a unified API. Being vendor-agnostic simplifies operations, as the configuration and the interaction with the network device do not rely on a particular vendor
https://docs.saltstack.com/en/latest/topics/network_automation/index.html
- Salt SSH
Salt SSH is very easy to use, simply set up a basic roster file of the systems to connect to and run salt-ssh commands in a similar way as standard salt command
https://docs.saltstack.com/en/latest/topics/ssh/index.html
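A minimal roster file sketch (the host ID, address, and user are made up):

```yaml
# /etc/salt/roster
web1:
  host: 203.0.113.20
  user: admin
  sudo: True
```

After which `salt-ssh 'web1' test.ping` behaves much like the standard salt command, with no minion installed on the target.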
- For example, if you want to set up a load balancer in front of a cluster of web servers you can ensure the load balancer is set up first, and then the same matching configuration is applied consistently across the whole cluster.
https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html
- Beacons
https://docs.saltstack.com/en/latest/topics/beacons/index.html
- Reactor System
https://docs.saltstack.com/en/latest/topics/reactor/index.html
- Event System
https://docs.saltstack.com/en/latest/topics/event/events.html
- Configuration Management
https://docs.saltstack.com/en/latest/topics/states/index.html
- States
A full list of states
Contains: states that install packages, create users, transfer files, start services, and so on.
Pillar System
Contains: description of Salt's Pillar system.
Highstate data structure
Contains: a dry vocabulary and technical representation of the configuration format that states represent.
Writing states
Contains: a guide on how to write Salt state modules, easily extending Salt to directly manage more software.
Renderers
Renderers use state configuration files written in a variety of languages, templating engines, or files. Salt's configuration management system is, under the hood, language agnostic
https://docs.saltstack.com/en/latest/topics/states/index.html
- Storing Static Data in the Pillar
https://docs.saltstack.com/en/latest/topics/pillar/index.html#pillar
- The Salt State Tree
The main state file that instructs minions what environment and modules to use during state execution.
https://docs.saltstack.com/en/latest/ref/states/highstate.html#states-highstate
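The file in question is top.sls at the root of the state tree; a small sketch (the state names are illustrative):

```yaml
# /srv/salt/top.sls
base:              # the environment
  '*':             # target: every minion
    - common
  'web*':          # minions whose ID matches web*
    - nginx
```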
- Grains
https://docs.saltstack.com/en/latest/topics/grains/index.html
- Understanding Jinja
https://docs.saltstack.com/en/latest/topics/jinja/index.html
- Setting up the Salt State Tree
Running the Salt States and Commands in Docker Containers
Salt introduces the ability to execute the Salt States and Salt remote execution commands directly inside of Docker containers.
This addition makes it possible not only to deploy fresh containers using Salt States, but also to audit and modify running containers using Salt, all without running a Salt Minion inside the container
Some of the applications include security audits of running containers as well as gathering operating data from containers.
This new feature is simple and straightforward and can be used via a running Salt Minion, the Salt Call command, or via Salt SSH
https://docs.saltstack.com/en/latest/topics/tutorials/docker_sls.html#docker-sls
- Both Salt and Ansible are originally built as execution engines. That is, they allow executing commands on one or more remote systems, in parallel if you want.
A playbook can vary the hosts' modules are executed on. This makes it possible to orchestrate multiple machines, such as take them out of load balancers before upgrading an application.
Salt has two types of modules: execution modules and state modules. Execution modules simply execute something, such as a command-line invocation or downloading a file. A state module is more like an Ansible module, where the arguments define a state and the module tries to fulfill that end state. In general, state modules use execution modules to do most of their work
The state module also supports defining states in files, called SLS files. Which states to apply to which hosts is defined in a top.sls file.
Both playbooks and SLS files (usually) are written in YAML.
Salt is built around a Salt master and multiple Salt minions that are connecting to the master when they boot. Generally, commands are issued on the master command line. The master then dispatches those commands out to minions.
Initially, minions initiate a handshake consisting of a cryptographic key exchange and after that, they have a persistent encrypted TCP connection
The minions also cache various data to make execution faster.
Ansible is masterless and it uses SSH as its primary communication layer
This means it is slower, but being masterless might make it slightly easier to set up and test Ansible playbooks
Salt also supports using SSH instead of ZeroMQ using Salt SSH.
Ansible always uses SSH to initiate connections. This is slow. Its ZeroMQ implementation (mentioned earlier) does help, but initialization is still slow. Salt uses ZeroMQ by default, and it is _fast_.
While talking about testing... DevOps people love Vagrant. Until recently I had not worked with it. Vagrant comes with provisioning modules for both Salt and Ansible. This makes it a breeze to get up and running with a master+minion in Vagrant, or to execute a playbook on startup.
Salt can run in masterless mode. This makes it easier to get it up and running. However, for production (and stability) I recommend getting an actual master up and running.
To me, Ansible was a great introduction to automated server configuration and deployment. It was easy to get up and running and has great documentation.
Moving forward, the scalability, speed and architecture of Salt has it going for it. For cloud deployments I find the Salt architecture to be a better fit.
http://jensrantil.github.io/salt-vs-ansible.html
- Puppet
Puppet Enterprise provides a unified approach to automation. With a single solution, you can manage heterogeneous environments—physical, virtual, or cloud—and automate the management of computing, storage, and network resources. Here are some of the integrations we provide to help you automate all aspects of your infrastructure.
http://puppetlabs.com/solutions
- Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.
https://puppetlabs.com/puppet/what-is-puppet/
- Chef
Chef is built to address the hardest infrastructure challenges on the planet. By modeling IT infrastructure and application delivery as code, Chef provides the power and flexibility to compete in the digital economy.
http://www.opscode.com/chef/
- Flywaydb
Evolve your Database Schema easily and reliably across all your instances
https://flywaydb.org
- CFEngine
https://cfengine.com/