Thursday, March 1, 2018

Linux Releases

  • Management of releases
Support length
    Regular releases are supported for 9 months.
    Long term support (LTS) releases are supported for 5 years.
    Older releases may have different support lengths.
https://wiki.ubuntu.com/Releases
  • Like Arch Linux, Gentoo is an open-source meta-distribution built from sources, based on the Linux kernel and embracing the same rolling-release model. It is aimed at speed and complete customizability for different hardware architectures, compiling software sources locally for best performance using an advanced package manager, Portage.
https://www.tecmint.com/gentoo-linux-installation-guide/

  • How do I get updates?
SL publishes updates via two yum repositories, ‘fastbugs’ and ‘security’.
https://www.scientificlinux.org/documentation/faq/faq-updates/

  • What are the ‘security’ and ‘fastbugs’ repos?
The ‘security’ yum repo contains the packages necessary to mitigate any resolved security issues. Some ‘non-security’ packages may be published into the security repo if they are required for dependency resolution. This repo also contains the latest ‘tzdata’ and ‘selinux-policy’ to ensure fixes to these packages help protect your system security. This repo is enabled by default.
The ‘fastbugs’ repo contains package updates which are not security-related (bugfixes and enhancements).
https://www.scientificlinux.org/documentation/faq/faq-updates/


  • Then Red Hat Inc. did something amazing. They published the entire source of the distribution for anyone to download, review, or rebuild. They were under no obligation to give this code to non-customers. For components under BSD or MIT licences they were not under obligation to provide this code at all. The significance of this action cannot be overstated.
At HEPiX 2003, Connie Sieh, from Fermilab, announced the Scientific Linux project. Later that year CERN joined Scientific Linux and sponsored the Itanium build. Right away, Scientific Linux started providing solutions to problems faced by the whole research community.

In 2014, Red Hat Inc. directly embraced the rebuild community by acquiring the CentOS project.
https://www.scientificlinux.org/about/why-make-scientific-linux

  • So what is Scientific Linux good for? Desktop? Server? Laptop? High-demand server? Yes to all. It is a faithful copy of RHEL, plus some useful additions of its own.
https://www.linux.com/learn/scientific-linux-great-distro-wrong-name

  • Scientific Linux is a rebuild of Red Hat Enterprise Linux (property of Red Hat Inc, NYSE:RHT).  We informally call them “The Upstream Vendor” or “TUV”.  Our references to TUV are intended to make it clear that Scientific Linux is in no way affiliated, supported, or sanctioned by upstream.  By not using their name we hope to make this distinction as clear as possible.
https://www.scientificlinux.org/about/


  • What is Windows Subsystem for Linux (WSL)?
The Windows Subsystem for Linux (WSL) is a new Windows 10 feature that enables you to run native Linux command-line tools directly on Windows, alongside your traditional Windows desktop and modern store apps.

Who is this for?
This is primarily a tool for developers -- especially web developers and those who work on or with open source projects. This allows those who want/need to use Bash, common Linux tools (sed, awk, etc.) and many Linux-first tools (Ruby, Python, etc.) to use their toolchain on Windows.

You can also access your local machine’s filesystem from within the Linux Bash shell – you’ll find your local drives mounted under the /mnt folder.
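For example, the Windows C: drive appears as /mnt/c, so from Bash you can list your Windows user profiles with something like:

ls /mnt/c/Users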

Why would I use WSL rather than Linux in a VM?
WSL requires fewer resources (CPU, memory, and storage) than a full virtual machine. WSL also allows you to run Linux command-line tools and apps alongside your Windows command-line, desktop and store apps, and to access your Windows files from within Linux. This enables you to use Windows apps and Linux command-line tools on the same set of files if you wish.


Can I run ALL Linux apps in WSL?
No! WSL is a tool aimed at enabling users who need them to run Bash and core Linux command-line tools on Windows.
WSL does not aim to support GUI desktops or applications (e.g. Gnome, KDE, etc.)

https://docs.microsoft.com/en-us/windows/wsl/faq


  • The minimal iso image will download packages from online archives at installation time instead of providing them on the install media itself. Downloading packages at install time reduces the size of the iso image to approximately 40 MB, depending on architecture.
https://help.ubuntu.com/community/Installation/MinimalCD

  • Test Websites In Internet Explorer 9, 8 and 7 Under Linux / Mac OSX
Microsoft has created some customized Windows VHDs with the purpose of allowing web designers to test websites in Internet Explorer 9, 8 and 7, for free. The biggest disadvantage is the disk space required by these VHDs (as well as a large download size: 2.6 GB for IE7 and 4.1 GB for IE8 and IE9). If you want to run all 3 Internet Explorer versions supported (7, 8, 9), you'll need almost 45 GB of disk space.
http://www.webupd8.org/2011/09/test-websites-in-internet-explorer-9-8.html

  • Virtualizing Internet Explorer 7 or 8 with ThinApp (1026674)
You cannot virtualize Internet Explorer 7 or Internet Explorer 8 in the way that you can virtualize Internet Explorer 6 in ThinApp 4.6 and later. ThinApp 4.6 introduced a special template only for Internet Explorer 6; IE7 and IE8 must be virtualized without it.
https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1026674


  • How to Run Internet Explorer 7, 8 and 9 in Linux with or without Wine

IEs4Linux
IE's popularity has exasperated professional web developers ever since the internet became more than an academic curio in the late 90s. Other than simple dual booting, the first option for those who wanted to test their sites on IE was tatanka's IEs4Linux, which uses wine to run the IE web installer.

winetricks
winetricks can install IE 6, 7 and 8 using the Windows installer in much the same way as IEs4Linux does above.

VirtualBox
This leaves the final and most satisfactory option - running Windows in a Virtual Machine with an image containing the IE version you're targeting. You may think this also requires a paid-for licence from Redmond, but this is not the case. Microsoft itself has made virtual machine images freely available for each of IE 6, 7, 8 and 9 to try and encourage web designers and developers to support their tatty software.
http://www.rdeeson.com/weblog/126/how-to-run-internet-explorer-7-8-and-9-in-linux-with-or-without-wine.html

  • How to Install IEs4Linux under Wine in Ubuntu 12.04
I decided to choose IEs4Linux. Before you can install IEs4Linux, you have to make sure that you’ve already installed the Wine and cabextract packages on your system.
http://el.web.id/how-to-install-ies4linux-under-wine-in-ubuntu-12-04-207

  • The result of this would be two distributions, our rolling release (Tumbleweed) which is continually updated and a stable release (Leap) which is upgraded with new versions.
https://en.opensuse.org/openSUSE:Leap

  • The Tumbleweed distribution is a pure rolling release version of openSUSE containing the latest stable versions of all software instead of relying on rigid periodic release cycles. The project does this for users that want the newest, but stable software.
https://en.opensuse.org/Portal:Tumbleweed

Cyberinfrastructure

  • United States federal research funders use the term cyberinfrastructure to describe research environments that support advanced data acquisition, data storage, data management, data integration, data mining, data visualization and other computing and information processing services distributed over the Internet beyond the scope of a single institution. In scientific usage, cyberinfrastructure is a technological and sociological solution to the problem of efficiently connecting laboratories, data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge.
https://en.wikipedia.org/wiki/Cyberinfrastructure


  • Cyberinfrastructure (CI) is not a new technology, per se, or merely a better, faster Internet. CI merges technology, data, and human resources into a seamless whole. While processors, storage devices, sensors, and other physical assets are part of CI, it is more than connecting people with advanced networks and sophisticated applications running on powerful computer systems—it is involving those people as participants in the generation of knowledge, giving them the opportunity to share expertise, tools, and facilities.

CI—which is known as e-research, e-science, and e-infrastructure in Europe, Australia, and Asia—brings together high-performance computing, remote sensors, large data sets, middleware, and sophisticated applications (modeling, simulation, visualization).

CI depends on a technical infrastructure that knits together high-speed networks with high-performance, high-availability, and high-reliability computational resources.
http://www.sc.edu/about/offices_and_divisions/division_of_information_technology/docs/ci_7_things.pdf

Information Exchange Gateway

  • An IEG is a system designed to facilitate secure communication between different security and management domains.
An IEG is a solution for effecting information sharing between different security and information domains by providing a managed set of information exchange services.
https://cybermatters.info/2015/01/19/introduction-information-exchange-gateways


  • An Information Exchange Gateway (IEG) is a system designed to enable the flow of information between networks whilst at the same time protecting an internal domain from both inbound malware threats and outbound leakage of sensitive information. An IEG consists of a number of components implemented within a De-Militarised Zone (DMZ). The IEG hides the internal domain from the outside world, only exposing interfaces for the required information exchange.
Where IEGs are implemented between two networks with a mutual distrust, a pair of IEGs is required, each one protecting its own network and connected to the other via a Wide Area Network (WAN).
https://www.deep-secure.com/wp-content/uploads/2015/08/IEG-Paper-Implementing-Deep-Secure-guards-in-NATO-IEGs.pdf

  • The Guard terminates network connections and extracts the XML requests and responses from them. It verifies the content is acceptable before using a new connection to deliver the data.
https://www.deep-secure.com/wp-content/uploads/2014/06/xml-guard-brochure1.pdf

  • The Deep-Secure XML Guard, a deceptively simple-looking little red box, offers businesses unparalleled security and protection from known and unknown forms of cyber attack. The software in the new Deep-Secure XML Guard protects companies not by trying to look for viruses or removing malicious code that can be hidden within normal business data, but by extracting the business data and sending it on its way in a safe and trusted format. An example of the application of a Deep-Secure XML Guard is to protect a log management system such as Assuria’s Assuria Log Manager.
https://www.deep-secure.com/wp-content/uploads/2014/04/deep-secure-guard-assuria-log-manager.pdf

Configuration Management

  • Ansible is a powerful automation engine that makes systems and apps simple to deploy.
http://www.ansible.com

  • Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.
http://docs.ansible.com/ansible/latest/index.html
  • Every team in your organization will benefit from Ansible Tower. For Network teams, Ansible Tower enables:

    Security: Store Network Credentials
    Delegation: Using Role-Based Access Control (RBAC)
    Power: Leverage the Ansible Tower API
    Control: Schedule Jobs for Automated Playbook Runs
    Flexibility: Launch Job Templates Using Surveys
    Integrations: Leverage Ansible Tower Integrations like Version Control
    Compliance: Run Jobs in Check Mode for Audits
https://www.ansible.com/integrations/networks
  • AWX provides a web-based user interface, REST API, and task engine built on top of Ansible. It is the upstream project for Tower, a commercial derivative of AWX.
https://github.com/ansible/awx

  • When Ansible, Inc. was acquired by Red Hat, we told our users that we would open the source code for Ansible Tower. The AWX Project is a fulfillment of that intent.
https://www.ansible.com/products/awx-project/faq


  • What’s the difference between AWX and Ansible Tower?
AWX is designed to be a frequently released, fast-moving project where all new development happens.
Ansible Tower is produced by taking selected releases of AWX, hardening them for long-term supportability, and making them available to customers as the Ansible Tower offering.
https://www.ansible.com/products/awx-project/faq
  • The result of running the Task was "changed": false. This shows that there were no changes; I had already installed Nginx. I can run this command over and over without worrying about it affecting the desired result.
From a RESTful service standpoint, for an operation (or service call) to be idempotent, clients can make that same call repeatedly while producing the same result. In other words, making multiple identical requests has the same effect as making a single request. Note that while idempotent operations produce the same result on the server (no side effects), the response itself may not be the same (e.g. a resource's state may change between requests)
https://serversforhackers.com/c/an-ansible-tutorial
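As an illustrative sketch (not the tutorial's exact playbook), a task like the following can be run repeatedly; after the first run the package is already present, so Ansible reports "changed": false:

---
- hosts: all
  become: yes
  tasks:
    - name: Ensure nginx is installed (idempotent)
      yum:
        name: nginx
        state: present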

  • Why Ansible?
It’s agentless. Unlike Puppet, Chef, Salt, etc., Ansible operates only over SSH (or optionally ZeroMQ), so there’s none of that crap PKI that you have to deal with using Puppet.
It’s Python. I like Python. I’ve been using it far longer than any other language.
It’s self-documenting: simple YAML files describing the playbooks and roles.
It’s feature-rich. Some call this batteries included, but there are over 150 modules provided out of the box, and new ones are pretty easy to write.
http://tomoconnor.eu/blogish/getting-started-ansible/#.WpUNBudRWUk

  • First of all, Ansible needs to know the hosts it’s going to manage. They can be managed on the central server in /etc/ansible/hosts or in a file configured in the shell variable ANSIBLE_HOSTS. The hosts can be listed as IP addresses or host names, and can contain additional information like user names, SSH port and so on, as in the sketch below.

https://liquidat.wordpress.com/2014/02/17/howto-first-steps-with-ansible/
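A minimal sketch of such a hosts file (group names, host names, and addresses below are made up for illustration):

[webservers]
web1 ansible_host=192.0.2.10 ansible_user=deploy ansible_port=2222
web2 ansible_host=192.0.2.11

[dbservers]
db1.example.com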

  • Ansible uses Jinja2 templating to enable dynamic expressions and access to variables. Ansible greatly expands the number of filters and tests available, as well as adding a new plugin type: lookups.
Please note that all templating happens on the Ansible controller before the task is sent and executed on the target machine.
http://docs.ansible.com/ansible/latest/playbooks_templating.html
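A small sketch of controller-side templating inside a task (the variables app_name and app_version are hypothetical):

- name: Show a templated message
  debug:
    msg: "Deploying {{ app_name }} version {{ app_version | default('1.0') }}"
  vars:
    app_name: myapp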


  • Ansible has two modes of operation for reusable content: dynamic and static.

    Ansible pre-processes all static imports during Playbook parsing time.
    Dynamic includes are processed during runtime at the point in which that task is encountered.

When it comes to Ansible task options like tags and conditional statements (when:):

    For static imports, the parent task options will be copied to all child tasks contained within the import.
    For dynamic includes, the task options will only apply to the dynamic task as it is evaluated, and will not be copied to child tasks.


http://docs.ansible.com/ansible/latest/playbooks_reuse.html
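A brief sketch of the two forms, assuming Ansible 2.4+ where import_tasks/include_tasks are available (the file names and the configure_os_specifics variable are illustrative):

# Static: pre-processed at parse time; the tag is copied to every task in common.yml
- import_tasks: common.yml
  tags: [base]

# Dynamic: evaluated at runtime; the tag/condition applies only to the include statement itself
- include_tasks: "{{ ansible_os_family }}.yml"
  when: configure_os_specifics | default(true)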


  • Filters in Ansible are from Jinja2, and are used for transforming data inside a template expression
Take into account that templating happens on the Ansible controller, not on the task’s target host, so filters also execute on the controller as they manipulate local data.

http://docs.ansible.com/ansible/latest/playbooks_filters.html
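A small example of chaining filters inside a template expression (the package_list variable is made up):

- debug:
    msg: "{{ package_list | unique | join(', ') }}"
  vars:
    package_list: [nginx, git, git, vim]
# prints "nginx, git, vim"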

  • Ansible works by configuring client machines from a computer with Ansible components installed and configured. It communicates over normal SSH channels in order to retrieve information from remote machines, issue commands, and copy files. Because of this, an Ansible system does not require any additional software to be installed on the client computers. This is one way that Ansible simplifies the administration of servers. Any server that has an SSH port exposed can be brought under Ansible's configuration umbrella, regardless of what stage it is at in its life cycle.
Ansible for CentOS 7

Step 1 — Installing Ansible
sudo yum install epel-release
sudo yum install ansible

Step 2 — Configuring Ansible Hosts
sudo vi /etc/ansible/hosts

Ansible will, by default, try to connect to remote hosts using your current username. If that user doesn't exist on the remote system, a connection attempt will result in an error.

Let's specifically tell Ansible that it should connect to servers in the "servers" group with the sammy user. Create a directory in the Ansible configuration structure called group_vars.
sudo mkdir /etc/ansible/group_vars

Within this folder, we can create YAML-formatted files for each group we want to configure:
sudo nano /etc/ansible/group_vars/servers
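The file holds plain YAML variables; in the tutorial's scenario it would contain something like the following (ansible_ssh_user and the sammy user follow the tutorial's example):

---
ansible_ssh_user: sammy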

If you want to specify configuration details for every server, regardless of group association, you can put those details in a file at /etc/ansible/group_vars/all. Individual hosts can be configured by creating files under a directory at /etc/ansible/host_vars.


Step 3 — Using Simple Ansible Commands
Ping all of the servers you configured
This is a basic test to make sure that Ansible has a connection to all of its hosts.

ansible -m ping all

You can also specify an individual host:
ansible -m ping host1
ansible -m ping host1:host2

The shell module lets us send a terminal command to the remote host and retrieve the results. For instance, to find out the memory usage on our host1 machine
ansible -m shell -a 'free -m' host1

Usage: ansible <host-pattern> [options]
-m MODULE_NAME, --module-name=MODULE_NAME

https://www.digitalocean.com/community/tutorials/how-to-install-and-configure-ansible-on-centos-7

  • Tags
If you have a large playbook it may become useful to be able to run a specific part of the configuration without running the whole playbook.
http://docs.ansible.com/ansible/latest/playbooks_tags.html
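A quick sketch: tag a task, then limit a run to that tag (the playbook and tag names are illustrative):

tasks:
  - name: Install nginx
    yum:
      name: nginx
      state: present
    tags:
      - packages

# Run only tasks tagged "packages":
#   ansible-playbook site.yml --tags "packages"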

  • Tests in Jinja2 are a way of evaluating template expressions and returning True or False
http://docs.ansible.com/ansible/latest/playbooks_tests.html
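For example, assuming a recent Ansible where the "is failed" test form is available (result, url, and the command are illustrative names):

- command: /usr/bin/some-check      # illustrative command
  register: result
  ignore_errors: true

- debug:
    msg: "the check failed"
  when: result is failed            # Jinja2 test returning True or False

- debug:
    msg: "url is {{ url }}"
  when: url is defined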

  • Lookup plugins allow access of data in Ansible from outside sources. Like all templating, these plugins are evaluated on the Ansible control machine, and can include reading the filesystem but also contacting external datastores and services
http://docs.ansible.com/ansible/latest/playbooks_lookups.html
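Two common lookups, both evaluated on the control machine (the paths shown are illustrative):

- debug:
    msg: "Controller HOME is {{ lookup('env', 'HOME') }}"

- debug:
    msg: "Controller motd: {{ lookup('file', '/etc/motd') }}"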

  • Let's say that we want to change the configuration of nginx. We will use the simplest way and just replace the whole nginx.conf file. Inside the template directory create a file and name it, for instance, nginx.conf.j2; .j2 is the extension for the Jinja2 templating language that Ansible uses.
https://blacksaildivision.com/ansible-tutorial-part-3
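A sketch of the corresponding task (the paths and handler name are illustrative):

- name: Deploy nginx.conf from the Jinja2 template
  template:
    src: nginx.conf.j2
    dest: /etc/nginx/nginx.conf
  notify:
    - restart nginx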

  • Templates are processed by the Jinja2 templating language (http://jinja.pocoo.org/docs/) - documentation on the template formatting can be found in the Template Designer Documentation (http://jinja.pocoo.org/docs/templates/).

http://docs.ansible.com/ansible/latest/template_module.html

  • Jinja2 is a modern and designer-friendly templating language for Python, modelled after Django’s templates. It is fast, widely used and secure with the optional sandboxed template execution environment:
Features:
    sandboxed execution
    powerful automatic HTML escaping system for XSS prevention
    template inheritance
    compiles down to the optimal python code just in time
    optional ahead-of-time template compilation
    easy to debug. Line numbers of exceptions directly point to the correct line in the template.
    configurable syntax
http://jinja.pocoo.org/docs/2.10/

  • A Jinja template is simply a text file. Jinja can generate any text-based format (HTML, XML, CSV, LaTeX, etc.). A Jinja template doesn’t need to have a specific extension: .html, .xml, or any other extension is just fine.
 The template syntax is heavily inspired by Django and Python.
http://jinja.pocoo.org/docs/2.10/templates/

  • You can point Ansible's configuration (ansible.cfg) to a vault-password-file that is outside of your repository.
vault_password_file = ~/.ansible_vault
https://opensource.com/article/16/12/devops-security-ansible-vault


  • A typical use of Ansible Vault is to encrypt variable files

    Files within the group_vars directory
    A role's defaults/main.yml file
    A role's vars/main.yml file
    Any other file used to store variables.
https://serversforhackers.com/c/how-ansible-vault-works
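For reference, encrypting and maintaining such a file is done with the ansible-vault command (the file path is illustrative):

ansible-vault encrypt group_vars/servers    # encrypt an existing variable file
ansible-vault edit group_vars/servers       # decrypt, open in the editor, re-encrypt on save
ansible-vault view group_vars/servers       # print the decrypted content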



  • Ansible is a configuration management and provisioning tool, similar to Chef, Puppet or Salt.

Ansible uses "Facts", which is system and environment information it gathers ("context") before running Tasks.

Modules
Ansible uses "modules" to accomplish most of its Tasks. Modules can do things like install software, copy files, use templates and much more.

A Handler is exactly the same as a Task (it can do anything a Task can), but it will only run when called by another Task. You can think of it as part of an Event system; A Handler will take an action when called by an event it listens for.

Roles are good for organizing multiple, related Tasks and encapsulating data needed to accomplish those Tasks.

https://serversforhackers.com/c/an-ansible2-tutorial
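A minimal sketch tying these pieces together (the host group, package, and handler name are illustrative):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install nginx                  # a Task using the yum module
      yum:
        name: nginx
        state: present
      notify:
        - restart nginx                    # the Handler runs only if this Task reports a change
  handlers:
    - name: restart nginx
      service:
        name: nginx
        state: restarted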

  • Under the [ssh_connection] header, the following settings are tunable for SSH connections. OpenSSH is the default connection type for Ansible on OSes that are new enough to support ControlPersist. (This means basically all operating systems except Enterprise Linux 6 or earlier).
http://docs.ansible.com/ansible/latest/intro_configuration.html#ssh-args
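A hedged ansible.cfg sketch with commonly tuned settings (values shown are illustrative, not recommendations):

[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
pipelining = True    # speeds up execution, but requires 'requiretty' to be disabled in sudoers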

  • Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks.
You describe your application in a single YAML file and, rather than writing a Dockerfile, list the Ansible roles that make up the container images.
With Ansible Container, you no longer have to build and configure containers differently than you do traditional virtual machines or bare-metal systems.
You can now apply the power of Ansible and re-use your existing Ansible content for your containerized ecosystem.
http://docs.ansible.com/ansible-container/


  • It utilizes existing Ansible roles that can be turned into container images and can even be used for the complete application lifecycle, from build to deploy in production.

Shortcomings of Dockerfiles:
1. Shell scripts embedded in Dockerfiles.
2. You can't parse Dockerfiles easily.
The biggest shortcoming of Dockerfiles in comparison to Ansible is that Ansible, as a language, is much more powerful.
https://opensource.com/article/17/10/dockerfiles-ansible-container


  • Getting Ansible Container

Prerequisites:

    Python 2.7 or Python 3.5
    pip
    setuptools 20.0.0+
http://docs.ansible.com/ansible-container/installation.html

  • Ansible Container needs to communicate with the docker service through its local socket. The following commands change the socket ownership, and add you to a docker group that can access the socket:

Ansible Container enables you to build container images and orchestrate them using only Ansible playbooks. The application is described in a single YAML file, and instead of using a Dockerfile, lists Ansible roles that make up the container images.
To install it, use the python3 virtual environment module.
Ansible Container provides three engines: Docker, Kubernetes and Openshift.
https://fedoramagazine.org/build-test-applications-ansible-container/
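A rough sketch of what such a container.yml might look like; treat the exact keys as an assumption based on the Ansible Container 0.9-era format, and the service and role names as made up:

# container.yml (illustrative, not from the article)
version: "2"
settings:
  conductor_base: centos:7
services:
  web:
    from: centos:7
    roles:
      - my-nginx-role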

  • The main advantage of the Ansible Local provisioner in comparison to the Ansible (remote) provisioner is that it does not require any additional software on your Vagrant host.
On the other hand, Ansible must obviously be installed on your guest machine(s).

install (boolean) - Try to automatically install Ansible on the guest system. This option is enabled by default.
https://www.vagrantup.com/docs/provisioning/ansible_local.html
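A minimal Vagrantfile sketch using the ansible_local provisioner (box and playbook names are illustrative):

Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.provision "ansible_local" do |ansible|
    ansible.playbook = "playbook.yml"
    ansible.install  = true   # let Vagrant try to install Ansible on the guest (the default)
  end
end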

  • What’s an ad-hoc command?
An ad-hoc command is something that you might type in to do something really quick, but don’t want to save for later.
This is a good place to start to understand the basics of what Ansible can do prior to learning the playbooks language – ad-hoc commands can also be used to do quick things that you might not necessarily want to write a full playbook for.
http://docs.ansible.com/ansible/latest/user_guide/intro_adhoc.html
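For instance (the group and service names are illustrative):

ansible webservers -m service -a "name=httpd state=restarted" -b
ansible all -m setup        # gather and display facts ad hoc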

  • If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages in scaling Ansible in push mode with very large numbers of systems, mainly, or if you are using Ansible on experimental platforms
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html
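Fact gathering is disabled per play:

- hosts: all
  gather_facts: no     # skip the fact-gathering step entirely
  tasks:
    - ping: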


  • Galaxy is a free site for finding, downloading, and sharing community-developed roles.
Downloading roles from Galaxy is a great way to jumpstart your automation projects.
http://docs.ansible.com/ansible/latest/reference_appendices/galaxy.html
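Typical usage (the role name below is just an example of the namespace.role format):

ansible-galaxy install geerlingguy.nginx    # download a community role from Galaxy
ansible-galaxy init my_new_role             # scaffold an empty role skeleton locally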


  • Simply put, roles are a further level of abstraction that can be useful for organizing playbooks. As you add more and more functionality and flexibility to your playbooks, they can become unwieldy and difficult to maintain as a single file. Roles allow you to create very minimal playbooks that then look to a directory structure to determine the actual configuration steps they need to perform.


Organizing things into roles also allows you to reuse common configuration steps between different types of servers. This is already possible by "including" other files within a playbook, but with roles, these types of links between files are automatic based on a specific directory hierarchy.

In general, the idea behind roles is to allow you to define what a server is supposed to do, instead of having to specify the exact steps needed to get a server to act a certain way.
https://www.digitalocean.com/community/tutorials/how-to-use-ansible-roles-to-abstract-your-infrastructure-environment
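The conventional directory hierarchy a role follows (the role name webserver is illustrative):

roles/
  webserver/
    tasks/main.yml       # the tasks the role runs
    handlers/main.yml    # handlers, e.g. service restarts
    templates/           # Jinja2 templates (*.j2)
    files/               # static files to copy
    vars/main.yml        # role variables
    defaults/main.yml    # default (lowest-priority) variables
    meta/main.yml        # role metadata and dependencies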

  • Provisioning

Your apps have to live somewhere. Whether you’re PXE-booting and kickstarting bare-metal servers, creating virtual machines (VMs), or deploying public, private, or hybrid cloud instances from templates, Red Hat® Ansible® Automation helps streamline the process.

Configuration management
Centralizing configuration file management and deployment is a common use case

Application deployment
From development to production, playbooks make app installation, upgrades, and management repeatable and reliable

Continuous delivery
Updates without downtime
Orchestrate server configuration in batches—including load balancing, monitoring, and the availability of network or cloud services—to roll changes across your environments without disrupting business.

Security & compliance
When you define your security policy in Ansible, scanning and remediation sitewide can be integrated into other automated processes.
Scan jobs and system tracking help you immediately see any systems that deviate from defined parameters.
https://www.redhat.com/en/technologies/management/ansible/use-cases


  • Out-of-the-box, Travis-CI doesn’t support CentOS, as its test environment is Ubuntu-based. However, Travis-CI allows you to set up a Docker container and this opens up all kinds of possibilities.

It is mainly used for running tests on applications, but it has been used for infrastructure testing as well.
Travis-CI: it’s free for open source projects and it integrates nicely with GitHub, so that on every push and submitted pull request a test run is triggered.
During a test run, a VM is booted and the steps described in .travis.yml are executed. This file contains the necessary steps to configure the system, install dependencies and run the actual test code.


http://bertvv.github.io/notes-to-self/2015/12/11/testing-ansible-roles-with-travis-ci-part-1-centos/
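A hedged sketch of a .travis.yml for this kind of setup; the image and script names are made up and the article's real file will differ:

---
language: python
services:
  - docker
before_install:
  - docker pull centos:7
script:
  - ./tests/run-tests.sh   # e.g. build a CentOS container and run the role's test playbook inside it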

  • Saltstack

Automation for enterprise IT ops, event-driven data center orchestration and the most flexible configuration management for DevOps at scale
https://saltstack.com

Salt is a different approach to infrastructure management, founded on the idea that high-speed communication with large numbers of systems can open up new capabilities.
This approach makes Salt a powerful multitasking system that can solve many specific problems in an infrastructure.
The backbone of Salt is the remote execution engine, which creates a high-speed, secure and bi-directional communication net for groups of systems.
On top of this communication system, Salt provides an extremely fast, flexible, and easy-to-use configuration management system called Salt States
https://docs.saltstack.com/en/latest/topics/tutorials/walkthrough.html

  • Terminology
SaltStack uses a few keywords which represent a particular device or configuration, as explained below:
Master
This is the master instance which connects to all servers added to your SaltStack "cluster", thus also running any commands / communication to your servers.
Minion
The servers which are added to your SaltStack are called minions. Any actions are either performed on one, a group, or all of your minions.
Formula
A formula is a file or a set of files that tells the minions which commands should be performed. This can be the installation of a single application such as nginx or rolling out configuration files, etc.
Pillar
A pillar is a file which stores information related to a group of minions or a single minion. As an example, you would use this sort of file for storing the "Virtual-Hosts" for Nginx for a particular minion.

SaltStack is based on Python, so you can easily add your own modules too, if you are fluent in the language.
https://www.vultr.com/docs/getting-started-with-saltstack-on-ubuntu-17-04
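A tiny formula sketch, e.g. /srv/salt/nginx/init.sls (the nginx package/service is illustrative):

nginx:
  pkg.installed: []
  service.running:
    - enable: True
    - require:
      - pkg: nginx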

  • We will install the salt-ssh and salt-cloud packages for resource control, and the salt-doc package for documentation.
https://www.unixmen.com/install-and-configure-saltstack-server-in-ubuntu-14-04-x64/

  • You can use Salt agentless to run Salt commands on a system without installing a Salt minion. The only requirements on the remote system are SSH and Python.

When running in agentless mode, Salt:

    Connects to the remote system over SSH.
    Deploys a thin version of Salt to a temp directory, including any required files.
    Runs the specified command(s).
    (Optional) Cleans up the temp directory.

You can use Salt agentless in conjunction with a master-minion environment, or you can manage all of your systems agentless.
SaltStack is a revolutionary approach to infrastructure management that replaces complexity with speed. SaltStack is simple enough to get running in minutes, scalable enough to manage tens of thousands of servers, and fast enough to communicate with each system in seconds.
https://docs.saltstack.com/en/getstarted/ssh/index.html

  • Saltstack, a strong configuration management tool written in Python, uses ZeroMQ to communicate with servers (called minions).
https://opsnotice.xyz/docker-with-saltstack/


  • We need to tell the minion to use standalone mode (and not the master-client mode). To do this, edit the /etc/salt/minion file:
file_client: local
https://opsnotice.xyz/docker-with-saltstack/

  • Standalone Minion
Since the Salt minion contains such extensive functionality it can be useful to run it standalone. A standalone minion can be used to do a number of things:
    Use salt-call commands on a system without connectivity to a master
    Masterless States, run states entirely from files local to the minion
    When running Salt in masterless mode, do not run the salt-minion daemon. Otherwise, it will attempt to connect to a master and fail.
https://docs.saltstack.com/en/latest/topics/tutorials/standalone_minion.html#tutorial-standalone-minion
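With file_client: local set, states and modules are run on the minion itself with salt-call:

salt-call --local state.apply          # apply states from files local to the minion
salt-call --local pkg.install nginx    # run a single execution module locally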

  • Salt Proxy Minion

Proxy minions are a developing Salt feature that enables controlling devices that, for whatever reason, cannot run a standard salt-minion. Examples include network gear that has an API but runs a proprietary OS, devices with limited CPU or memory, or devices that could run a minion, but for security reasons, will not.
https://docs.saltstack.com/en/latest/topics/proxyminion/index.html

  • Network Automation
For these reasons, most network devices can be controlled only remotely via proxy minions or using Salt SSH. However, there are also vendors producing white-box equipment (e.g. Arista, Cumulus) or others that have moved the operating system into a container (e.g. Cisco NX-OS, Cisco IOS-XR), allowing the salt-minion to be installed directly on the platform.

NAPALM (Network Automation and Programmability Abstraction Layer with Multivendor support) is an open source Python library that implements a set of functions to interact with different router vendor devices using a unified API. Being vendor-agnostic simplifies operations, as the configuration and the interaction with the network device do not rely on a particular vendor.
https://docs.saltstack.com/en/latest/topics/network_automation/index.html


  • Salt SSH
Execute salt commands and states over SSH without installing a salt-minion.
Salt SSH is very easy to use: simply set up a basic roster file of the systems to connect to and run salt-ssh commands in a similar way to standard salt commands.
https://docs.saltstack.com/en/latest/topics/ssh/index.html
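A minimal roster sketch plus a test command (the host details are made up):

# /etc/salt/roster
web1:
  host: 192.0.2.10
  user: root

# then:
#   salt-ssh 'web1' test.ping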

  • For example, if you want to set up a load balancer in front of a cluster of web servers you can ensure the load balancer is set up first, and then the same matching configuration is applied consistently across the whole cluster.
Orchestration is the way to do this.
https://docs.saltstack.com/en/latest/topics/orchestrate/orchestrate_runner.html
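A hedged orchestration sketch, run with the state.orchestrate runner (state names and targets are illustrative):

# /srv/salt/orch/site.sls
setup_loadbalancer:
  salt.state:
    - tgt: 'lb*'
    - sls: haproxy

setup_webservers:
  salt.state:
    - tgt: 'web*'
    - sls: apache
    - require:
      - salt: setup_loadbalancer

# salt-run state.orchestrate orch.site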

  • Beacons
Beacons let you use the Salt event system to monitor non-Salt processes. The beacon system allows the minion to hook into a variety of system processes and continually monitor these processes. When monitored activity occurs in a system process, an event is sent on the Salt event bus that can be used to trigger a reactor.
https://docs.saltstack.com/en/latest/topics/beacons/index.html

  • Reactor System
Salt's Reactor system gives Salt the ability to trigger actions in response to an event. It is a simple interface to watching Salt's event bus for event tags that match a given pattern and then running one or more commands in response.
https://docs.saltstack.com/en/latest/topics/reactor/index.html
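A hedged sketch of the master-side mapping from an event tag pattern to a reactor SLS file (the file name is illustrative):

# in the master configuration:
reactor:
  - 'salt/minion/*/start':
    - /srv/reactor/minion_start.sls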

  • Event System
The Salt Event System is used to fire off events enabling third-party applications or external processes to react to behavior within Salt. The event system uses a publish-subscribe pattern, otherwise known as pub/sub.
https://docs.saltstack.com/en/latest/topics/event/events.html

  • Configuration Management
Salt contains a robust and flexible configuration management framework, which is built on the remote execution core.
https://docs.saltstack.com/en/latest/topics/states/index.html

  • States
    Express the state of a host using small, easy to read, easy to understand configuration files. No programming required.
    A full list of states
        Contains: a list of states to install packages, create users, transfer files, start services, and so on.
    Pillar System
        Contains: description of Salt's Pillar system.
    Highstate data structure
        Contains: a dry vocabulary and technical representation of the configuration format that states represent.
    Writing states
        Contains: a guide on how to write Salt state modules, easily extending Salt to directly manage more software.

Renderers

    Renderers use state configuration files written in a variety of languages, templating engines, or files. Salt's configuration management system is, under the hood, language agnostic
https://docs.saltstack.com/en/latest/topics/states/index.html


  • Storing Static Data in the Pillar
Pillar is an interface for Salt designed to offer global values that can be distributed to minions. Pillar data is managed in a similar way as the Salt State Tree.
https://docs.saltstack.com/en/latest/topics/pillar/index.html#pillar
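A small pillar sketch (file names and values are illustrative); minions can then read it via pillar.items or {{ pillar['...'] }} in states:

# /srv/pillar/top.sls
base:
  '*':
    - packages

# /srv/pillar/packages.sls
packages:
  - nginx
  - git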

  • The Salt State Tree
A state tree is a collection of SLS files and directories that live under the directory specified in file_roots.
The top file is the main state file; it instructs minions which environment and modules to use during state execution.
https://docs.saltstack.com/en/latest/ref/states/highstate.html#states-highstate
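A minimal top file sketch (the targets and state names are illustrative):

# /srv/salt/top.sls
base:
  '*':
    - common
  'web*':
    - nginx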

  • Grains
Salt comes with an interface to derive information about the underlying system. This is called the grains interface because it presents salt with grains of information. Grains are collected for the operating system, domain name, IP address, kernel, OS type, memory, and many other system properties.
https://docs.saltstack.com/en/latest/topics/grains/index.html
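Grains can be inspected and used for targeting from the master:

salt '*' grains.items              # list all grains on every minion
salt '*' grains.get os             # read a single grain
salt -G 'os:CentOS' test.ping      # target minions by grain value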

  • Understanding Jinja
Jinja is the default templating language in SLS files.
https://docs.saltstack.com/en/latest/topics/jinja/index.html
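For example, Jinja can pick a value from grains while the SLS is rendered (the state ID is illustrative):

apache:
  pkg.installed:
    - name: {{ 'apache2' if grains['os_family'] == 'Debian' else 'httpd' }}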

  • Setting up the Salt State Tree
States are stored in text files on the master and transferred to the minions on demand via the master's File Server.
Running the Salt States and Commands in Docker Containers
Salt introduces the ability to execute the Salt States and Salt remote execution commands directly inside of Docker containers.
This addition makes it possible not only to deploy fresh containers using Salt States, but also to audit and modify running containers using Salt, without running a Salt Minion inside the container.
Some of the applications include security audits of running containers as well as gathering operating data from containers.
This new feature is simple and straightforward and can be used via a running Salt Minion, the Salt Call command, or via Salt SSH
https://docs.saltstack.com/en/latest/topics/tutorials/docker_sls.html#docker-sls


  • Both Salt and Ansible are originally built as execution engines. That is, they allow executing commands on one or more remote systems, in parallel if you want.
Ansible supports executing arbitrary command line commands on multiple machines. It also supports executing modules. An Ansible module is basically a Python module written in a certain Ansible friendly way. Most standard Ansible modules are idempotent. This means you tell them the state you'd want your system to be in, and the module tries to make the system look like that.
A playbook can vary the hosts' modules are executed on. This makes it possible to orchestrate multiple machines, such as take them out of load balancers before upgrading an application.


Salt has two types of modules: execution modules and state modules. Execution modules simply execute something; it could be a command-line execution or downloading a file. A state module is more like an Ansible module, where the arguments define a state and the module tries to fulfill that end state. In general, state modules use execution modules to do most of their work.

The state module also supports defining states in files, called SLS files. Which states to apply to which hosts is defined in a top.sls file.

Both playbooks and SLS files (usually) are written in YAML.

Salt is built around a Salt master and multiple Salt minions that are connecting to the master when they boot. Generally, commands are issued on the master command line. The master then dispatches those commands out to minions.
Initially, minions initiate a handshake consisting of a cryptographic key exchange and after that, they have a persistent encrypted TCP connection
The minions also cache various data to make execution faster.

Ansible is masterless and it uses SSH as its primary communication layer
This means it is slower, but being masterless might make it slightly easier to set up and test Ansible playbooks.

Salt also supports using SSH instead of ZeroMQ using Salt SSH.

Ansible is always using SSH for initiating connections. This is slow. Its ZeroMQ implementation (mentioned earlier) does help, but initialization is still slow. Salt uses ZeroMQ by default, and it is _fast_.

While talking about testing... DevOps people love Vagrant. Until recently I had not worked with it. Vagrant comes with provisioning modules both for Salt and Ansible. This makes it a breeze to get up and running with a master+minion in Vagrant, or executing a playbook on startup.

Salt can run in masterless mode. This makes it easier to get it up and running. However, for production (and stability) I recommend getting an actual master up and running.
To me, Ansible was a great introduction to automated server configuration and deployment. It was easy to get up and running and has great documentation.
Moving forward, the scalability, speed and architecture of Salt work in its favor. For cloud deployments I find the Salt architecture to be a better fit.

http://jensrantil.github.io/salt-vs-ansible.html
  • Puppet

Puppet Enterprise provides a unified approach to automation. With a single solution, you can manage heterogeneous environments—physical, virtual, or cloud—and automate the management of computing, storage, and network resources. Here are some of the integrations we provide to help you automate all aspects of your infrastructure.
http://puppetlabs.com/solutions


  • Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.

https://puppetlabs.com/puppet/what-is-puppet/

  • Chef

Chef is built to address the hardest infrastructure challenges on the planet. By modeling IT infrastructure and application delivery as code, Chef provides the power and flexibility to compete in the digital economy.
http://www.opscode.com/chef/


  • Flywaydb
Evolve your Database Schema easily and reliably across all your instances
https://flywaydb.org

  • CFEngine
Automate large-scale, complex and mission critical IT infrastructure
https://cfengine.com/