- In computer programming and software testing, smoke testing is preliminary testing to reveal simple failures severe enough to reject a prospective software release.
For example, a smoke test may ask basic questions like "Does the program run?", "Does it open a window?", or "Does clicking the main button do anything?"
The purpose is to determine whether the application is so badly broken that further testing is unnecessary.
As the book "Lessons Learned in Software Testing" puts it, if key features don't work or if
http://en.wikipedia.org/wiki/Smoke_testing
- Smoke Testing, also known as “Build Verification Testing”, is a type of software testing that comprises a non-exhaustive set of tests aimed at ensuring that the most important functions work. The results of this testing are used to decide if a build is stable enough to proceed with further testing.
http://softwaretestingfundamentals.com/smoke-testing/
- A new build is checked mainly for two things:
Build acceptance
Some BVT basics:
It is a subset of tests that verify main functionalities.
The advantage of BVT is that it saves a test team the effort of setting up and testing a build when major functionality
Design BVTs carefully enough to cover basic functionality.
Typically BVT should not run
BVT is a
http://www.softwaretestinghelp.com/bvt-build-verification-testing-process/
Molecule is designed to aid in the development and testing of Ansible roles. Molecule provides support for testing with multiple instances, operating systems and distributions, virtualization providers, test frameworks and testing scenarios.
https://molecule.readthedocs.io/en/latest/
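As a sketch of how this is configured in practice: each scenario carries a molecule.yml that wires together a driver, platforms, provisioner, and verifier. The values below (instance name, image) are illustrative assumptions, not taken from the docs:
# molecule/default/molecule.yml - a minimal illustrative scenario
driver:
  name: docker              # provider that creates test instances
platforms:
  - name: instance          # test instance (here, a container)
    image: centos:7         # OS image to converge the role against
provisioner:
  name: ansible             # Molecule converges the role with Ansible
verifier:
  name: testinfra           # post-converge checks written with Testinfra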
- Today, we have over 98 Ansible roles, of which 40 are "common" roles that are reused by other roles. One problem we wound up hitting along the way: a role isn't touched for months, and when someone finally dusts it off to make some changes, they discover it is woefully out of date. Or even worse, a common role may be changed and a role that depends on it isn't updated to reflect the new changes.
Untested code that
It integrates with Docker, Vagrant, and OpenStack to run roles in a virtualized environment and works with popular infra testing tools like
When adhering to TDD, one major recommended practice is to follow a cycle of “red, green, refactor”.
https://zapier.com/engineering/ansible-molecule/
Molecule is generally used to test roles in isolation. However, it can also test roles from a monolith repo.
https://molecule.readthedocs.io/en/latest/examples.html#docker
- This project has 2 Linux instances and the rest are solely Windows Server, so it was a bit different to what I was used to. What follows is one example of how to get Molecule running for a Linux instance and also one for a Windows instance.
this would
https://medium.com/@elgallego/ansible-role-testing-molecule-7a64f43d95cb
- Within the molecule directory you have a directory named default; that default directory is a scenario. You can create as many scenarios as you like, and also share scenarios between different roles.
https://medium.com/@elgallego/molecule-2-x-tutorial-5dc1ee6b29e3
Therefore Molecule introduces a consecutive list of steps that run one by one. Three
Molecule supports a wide range of infrastructure providers & platforms
Molecule also supports different verifiers: right now these are Goss,
As we use Vagrant as the infrastructure provider for Molecule, we also need to install the python-vagrant pip package
The crucial directories inside our role named docker are tasks and molecule/default/tests. The first will contain our Ansible role we want to develop
With
We could destroy the machine with molecule destroy, but we want to write and execute a test case in the next section, so we leave it in the created state.
- But using Vagrant also has its downsides. Although we get the best experience in terms of replication of our staging and production environments, the preparation step takes relatively long to complete, since a whole virtual machine has to be downloaded and booted. And that's nothing new. Docker has been here for a long time now to overcome this downside.
And
But we also don't want to abandon Vagrant completely, since there may be situations where we want to test our Ansible roles against full-blown virtual machines. Luckily Molecule can help us. 🙂 We
a new Ansible role with Molecule support built in using the molecule init role command. Besides the standard Ansible role directories, this places a molecule folder inside the role skeleton. This
Molecule scenarios enable us to split the parts of our test suites into two kinds. The first one is scenario-specific and
As Docker is the default infrastructure provider in Molecule, I
As you may note,
https://blog.codecentric.de/en/2018/12/continuous-infrastructure-ansible-molecule-travisci/
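For the Vagrant-based setup this post describes, the driver section of molecule.yml would look roughly like the sketch below; the VM name and box are invented for illustration:
driver:
  name: vagrant
  provider:
    name: virtualbox        # hypervisor Vagrant should drive
platforms:
  - name: docker-vm         # name of the Vagrant-managed machine
    box: ubuntu/bionic64    # Vagrant box to download and boot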
- A systemd-enabled Docker image must be used for Ubuntu and CentOS (a sketch of such a platform entry follows this entry's link)
You might have to write a
You may
Writing tests first means you’re taking (business) requirements and mapping them into business tests
You can have a CI/CD process automatically run the tests and halt any delivery pipelines on failure, preventing faulty code reaching production systems
https://blog.opsfactory.rocks/testing-ansible-roles-with-molecule-97ceca46736a
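A commonly seen molecule.yml platform entry for such systemd-enabled containers is sketched below; the image name is an example and the exact flags are assumptions to verify against your image:
platforms:
  - name: instance
    image: centos/systemd                  # example systemd-enabled image
    command: /usr/sbin/init                # boot systemd as PID 1 instead of a shell
    privileged: true                       # systemd needs extended privileges
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro   # systemd expects the cgroup filesystem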
- Testing Ansible roles with Molecule,
Testinfra and Docker
a Python clone of Test Kitchen, but more specific to Ansible: Molecule
Unlike Test kitchen with the
With Molecule you can make use of
These were the basics for testing an Ansible role with Molecule, Docker and Test Infra.
https://werner-dijkerman.nl/2016/07/10/testing-ansible-roles-with-molecule-testinfra-and-docker/
- I then discovered Ansible, which I was attracted to for its simplicity in comparison to writing Ruby code with Chef.
I immediately re-wrote
I've since discovered Molecule which allows me to write Ansible in a very similar way to how I used to write Chef.
I write tests first in
In this post I hope to show how you can use Molecule to TDD a simple Ansible role which installs Ruby from source.
https://hashbangwallop.com/tdd-ansible.html
- With Testinfra you can write unit tests in Python to test the actual state of your servers configured by management tools like Salt, Ansible, Puppet, Chef and so on.
https://github.com/philpep/testinfra
- Testing Ansible roles in a cluster setup with Docker and Molecule
I have a Jenkins job that validates a role that
https://werner-dijkerman.nl/2016/07/31/testing-ansible-roles-in-a-cluster-setup-with-docker-and-molecule/
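Molecule handles such a cluster setup by listing several platforms in a single scenario, so Ansible sees them all in one inventory. A sketch with invented names and groups:
platforms:
  - name: node1             # first container in the cluster
    image: centos:7
    groups:
      - server              # inventory group the role can target
  - name: node2             # second container, created in the same scenario
    image: centos:7
    groups:
      - agent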
Molecule is designed to aid in the development and testing of Ansible roles.
If Ansible can use it, Molecule can test it. Molecule
https://molecule.readthedocs.io/en/latest/index.html
- Continuous deployment of Ansible Roles
There are a lot of articles about Ansible with continuous deployment, but these are only about using Ansible as a tool to do continuous deployment
Application developers write unit tests on their code and the application
A
I also have 1 git repository that contains all
I have a Jenkins running with the Docker plugin and once
Jenkins Jobs
All my Ansible roles have 3 jobs:
Molecule Tests
Staging deployment
Production deployment
Molecule Tests
The first job is that
We get the latest
I use separate stages with single commands so I can quickly see on which part the job fails and focus on that immediately without going to the console output and scrolling down to see where it fails.
After the Molecule verify stage,
Staging deployment
The goal for this job is to deploy the role to
The first stage is to checkout 2 git repositories: The Ansible Role
The 2nd Stage is to install the required applications, so not very interesting. The 3rd stage is to execute the playbook. In my “environment” repository (That holds all Ansible data)
The 4th stage is to execute the
In this job we create a new tag ${
Production deployment
This is the job that deploys the Ansible role to the rest of the servers.
With the 3rd stage “Execute role on host
This deployment will fail or succeed depending on the quality of your tests.
https://werner-dijkerman.nl/author/wdijkerman/
- Testing Ansible roles with Molecule
When you're developing an automation, a typical workflow would start with a new virtual machine. I will use Vagrant to illustrate this idea, but you could use
What can
Infrastructure is up and running from the user's point of view (e.g., HTTPD or Nginx is answering requests, and MariaDB or
OS service
A process is listening on a specific port
A process is answering requests
Configuration files
Virtually anything you do to ensure that your server state is correct
What safeties do these automated tests provide?
Perform complex changes or introduce new features without breaking existing behavior (e.g., it still works in RHEL-based distributions after adding support for
Molecule helps develop roles using tests. The tool can even initialize a new role with test cases: molecule init role
The
These tools prevent dependency conflicts between Molecule and other Python packages on your machine.
https://opensource.com/article/18/12/testing-ansible-roles-molecule
- How To Test Ansible Roles with Molecule on Ubuntu 16.04
molecule: This is the main Molecule package
docker: This Python library
source
python -
The -
molecule init role -
cd
Test the default role to check if Molecule has
molecule test
pay attention to the PLAY_RECAP for each
Using
The
flake8: This Python code linter checks tests created for
In this article you created an Ansible role to install and configure Apache and
You then wrote unit tests with
automate testing using a CI pipeline
https://www.digitalocean.com/community/tutorials/how-to-test-ansible-roles-with-molecule-on-ubuntu-16-04
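In the Molecule 2.x-era configuration this generation of tutorials used, the verifier and its linter are declared in molecule.yml; a hedged sketch:
verifier:
  name: testinfra           # unit tests run against the converged instance
  lint:
    name: flake8            # lint the Python test files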
- Testing Ansible Roles with Molecule
Molecule supports multiple virtualization providers, including Vagrant, Docker, EC2, GCE, Azure, LXC, LXD, and OpenStack.
molecule init role -d
This will not only create files and directories needed for testing, but the whole Ansible role tree, including all directories and files to get started with a new role.
I
With the init command you can also set the verifier to
The default is
Molecule uses Ansible to provision the containers for testing.
It automatically creates playbooks to prepare, create, and delete those containers.
By default centos:7 is the only platform used to test the role.
https://blog.netways.de/2018/08/16/testing-ansible-roles-with-molecule/
- The first one is serverspec, which allows running BDD-like tests against local or remote servers or containers.
Like many tools nowadays,
What's cool is that I can actually do that semi-automatically by asking
To check if that’s
both
https://codeblog.dotsandbrackets.com/unit-test-server-goss/
- Goss files are YAML or JSON files describing the tests you want to run to validate your system.
https://velenux.
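For illustration, a small goss.yaml along those lines; the service and file names are examples only:
# goss.yaml - declarative checks, run with `goss validate`
port:
  tcp:80:
    listening: true         # something must listen on TCP 80
service:
  httpd:
    enabled: true           # service starts at boot
    running: true           # service is currently running
file:
  /etc/httpd/conf/httpd.conf:
    exists: true            # config file is present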
- Goss is a YAML based serverspec alternative tool for validating a server’s configuration.
The project also includes
Pre-requisites
You’ll also need a Docker image under development.
https://circleci.com/blog/testing-docker-images-with-circleci-and-goss/
- Goss is a YAML based serverspec alternative tool for validating a server’s configuration. It eases the process of writing tests by allowing the user to generate tests from the current system state. Once the test suite is written, it can be executed, waited on, or served as a health endpoint
Docker 1.12 introduced the concept of HEALTHCHECK, but it only allows you to run one command. By letting Goss manage your checks, the
https://medium.com/@aelsabbahy/docker-1-12-kubernetes-simplified-health-checks-and-container-ordering-with-goss-fa8debbe676c
- Once inside the running Docker image, you can explore different tests, which the Goss command will automatically append to a
https://circleci.com/blog/testing-docker-images-with-circleci-and-goss/
Originated in Chef community
Very pluggable on all levels
Implemented in Ruby
Configurable through simple single
"Your infrastructure deserves tests too."
What is Kitchen?
Kitchen provides a test harness to execute infrastructure code on one or more platforms in isolation.
http://kitchen.ci/
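The harness is described in a .kitchen.yml file; a minimal illustrative sketch (platform and suite names are examples):
# .kitchen.yml - driver creates instances, provisioner converges them,
# verifier tests them; suites x platforms = instances
driver:
  name: vagrant
provisioner:
  name: chef_zero
verifier:
  name: inspec
platforms:
  - name: ubuntu-16.04
suites:
  - name: default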
- We've just installed ChefDK, VirtualBox, and Vagrant. The reason we have done so is that the default driver for test-kitchen is kitchen-vagrant, which uses Vagrant to create, manage, and destroy local virtual machines. Vagrant itself supports many different hypervisors and clouds, but for the purposes of this exercise we are interested in the default local virtualization provided by VirtualBox.
Kitchen is modular so that one may use a variety of different drivers (Vagrant, EC2, Docker), provisioners (Chef, Ansible, Puppet), or verifiers (InSpec,
https://kitchen.ci/docs/getting-started/installing
- Test Kitchen is Chef's integrated testing framework. It enables writing test recipes, which will run on the VMs once they are instantiated and converged using the cookbook. The test recipes run on that VM and can verify if everything works as expected.
Provisioner – It provides the specification of how Chef runs. We are using
https://www.tutorialspoint.com/chef/chef_test_kitchen_setup.htm
- Chef - Test Kitchen and Docker
Let’s create a simple application cookbook for a simple
Switch to the user
$ cd ~/chef-repo
$ chef generate app c2b2_website
This generates the following folder structure which includes a top level Test Kitchen instance for testing the cookbook.
https://www.c2b2.co.uk/middleware-blog/chef-test-kitchen-and-docker.php
- Detect and correct compliance failures with Test Kitchen
Knowing where your systems are potentially out of compliance
The second phase, correct, involves remediating the compliance failures you've identified in the first phase. Manual correction doesn't scale. It's also tedious and error-prone.
To help ensure that the automation code you write will place their systems in the desired state, most teams apply their code to test systems before they apply that code to their production systems. We call this process local development.
you'll use Test Kitchen to detect and correct issues using temporary infrastructure
You'll use Test Kitchen to run
You'll continue to use the dev-sec/
By creating local test infrastructure that resembles your production systems, you can detect and correct compliance failures before you deploy changes to production.
For this module, you need:
Docker.
the Chef Development Kit, which includes Test Kitchen.
Bring up a text editor
Atom
Visual Studio Code
Sublime Text
The initial configuration uses Vagrant to manage local VMs on
The driver section specifies the software that manages test instances. Here, we specify kitchen-
The platforms section specifies to use the
2. Detect potential
Run kitchen verify now to run the dev-sec/
Limit testing to just the
In
The control tests whether the
Write the output as JSON
let's format the results as JSON and write the report to disk.
Run kitchen verify to generate the report.
Remember, if you don't have the
3. Correct the failures
In this part, you write Chef code to correct the failures. You do so by installing the
4. Specify the
In this part, you
Create the Chef template
A template enables you to generate files.
you can include placeholders in a template file that
run the following chef generate template command to create a template named
chef generate template
Update the default recipe to use the template
To do a complete Test Kitchen run, you can use the kitchen test command. This command:
destroys any existing instances (kitchen destroy).
brings up a new instance (kitchen create).
runs Chef on your instance (kitchen converge).
runs automated tests (kitchen verify).
destroys the instance (kitchen destroy).
https://learn.chef.io/modules/detect-correct-kitchen#/
- Testing Ansible Roles With KitchenCI
What is
kitchen verify
Docker API and so on. So you
http://www.briancarpio.com/2017/08/24/testing-ansible-roles-with-kitchenci/
- Ansible Galaxy – Testing Roles with Test Kitchen
To get started we will need a handful of dependencies:
A working Python install with Ansible installed
A working Ruby install with bundler installed
Docker installed and running. Please see install instructions.
The most important thing at the moment is the
Some shortcomings
The spec pattern is being used here to workaround a path issue with where the verifier is looking for spec files. This means the spec files matching spec/*_spec
Using the
https://blog.el-chavez.me/2016/02/16/ansible-galaxy-test-kitchen/
- add Test Kitchen to your Gemfile inside your Chef Cookbook
kitchen-vagrant is a "Test Kitchen driver" - it tells test-kitchen how to interact with an appliance (machine), such as Vagrant, EC2, Rackspace,
run the bundle command to install
bundle install
verify
bundle exec kitchen help
By default, Test Kitchen will look in the test/integration directory for your test files. Inside there will be a folder, inside the folder a
https://github.com/test-kitchen/test-kitchen/wiki/Getting-Started
- Using Test Kitchen with Docker and serverspec to test Ansible roles
we are using the test kitchen framework and
With test kitchen we can start
When the Ansible role
Ideally you want to execute this every time when
gem install test-kitchen
kitchen init
We update the
gem "kitchen-
bundle install
set some version restrictions in this file
install test-kitchen with version 1.4.0 or higher
gem 'test-kitchen', '>= 1.4.0'
gem 'kitchen-docker', '>= 2.3.0'
gem 'kitchen-
We remove the
The following example is the “suites” part of the dj-wasabi
suites:
  - name:
    provisioner:
      name:
      playbook: test/integration/
  - name:
    provisioner:
      name:
      playbook: test/integration/
There are 2 suites, each with their own playbook. In the above case, there is
We now create our only playbook
---
- hosts:
  roles:
    - role:
We have configured the playbook
But we are not there yet, we will
create the following directory: test/integration/default/
create the
you’ll create
Install bundler and test-kitchen and run bundle install
Execute kitchen test
Installation of the gem
1st build step:
gem install bundler
gem install test-kitchen
bundle install
And 2nd build step:
kitchen test
https://werner-dijkerman.nl/2015/08/20/using-test-kitchen-with-docker-and-serverspec-to-test-ansible-roles/
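Putting the pieces above together, a .kitchen.yml for testing an Ansible role via the kitchen-ansible gem would look roughly like this sketch (paths and names are assumptions based on the post's layout):
driver:
  name: docker                             # from the kitchen-docker gem
provisioner:
  name: ansible_playbook                   # from the kitchen-ansible gem
  playbook: test/integration/default.yml   # playbook that applies the role
platforms:
  - name: ubuntu-16.04
suites:
  - name: default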
- Testing Ansible with Kitchen
Using kitchen we can automate the testing of our configuration management code across a variety of platforms using Docker as a driver
https://www.tikalk.com/posts/2017/02/21/testing-ansible-with-kitchen/
- Prerequisite Tools/Packages
Install Vagrant
Initializing an Ansible Role
Initializing kitchen-
A great tool for this is
Using Vagrant
The benefit of using Vagrant as the provider is that
Using Docker
Docker has the benefit of being significantly more lightweight than Vagrant. Docker doesn't build an entire virtual machine but
Another draw-back with Docker development is that some functionality
Initialize a Docker-based test-kitchen project
kitchen init
Initialize a Vagrant-based test-kitchen project
kitchen init
Initializing an Ansible Role
Initializing kitchen-
kitchen init
https://tech.superk.org/home/ansible-role-development
ServerSpec is a framework that gives you RSpec tests for your infrastructure. Test-kitchen's busser plugin utilizes busser-serverspec for executing ServerSpec tests.
https://kitchen.ci/docs/verifiers/serverspec/
- With Serverspec, you can write RSpec tests for checking your servers are configured correctly.
the true aim of
http://serverspec.org/
- What is Serverspec?
https://damyanon.net/post/getting-started-serverspec/
- Testing infrastructure with serverspec
Advanced use
Out of the box,
By default,
Parallel execution
By default,
This does not scale well if you have dozens or hundreds of hosts to test.
For each task, set
Add a task to collect the generated JSON files into a single report.
https://vincent.bernat.ch/en/blog/2014-serverspec-test-infrastructure
- Writing Efficient Infrastructure Tests with Serverspec
One of the core tenets of infrastructure as code is testability; your infra code should
https://www.singlestoneconsulting.com/articles/writing-efficient-infrastructure-tests-with-serverspec
- Monitoring vs. Spec
Keep your system up &
https://www.netways.de/fileadmin/images/Events_Trainings/Events/OSDC/2014/Slides_2014/Andreas_Schmidt_Testing_server_infrastructure_with_serverspec.pdf
- What is Serverspec?
https://stelligent.com/2016/08/17/introduction-to-serverspec-what-is-serverspec-and-how-do-we-use-it-at-stelligent-part1/
- RSpec is a unit test framework for the Ruby programming language. RSpec is different from traditional xUnit frameworks like JUnit because RSpec is a behavior-driven development tool. What this means is that tests written in RSpec focus on the "behavior" of the application being tested. RSpec does not put emphasis on how the application works, but instead on how it behaves; in other words, on what the application actually does.
https://www.tutorialspoint.com/rspec/index.htm
Serverspec is the name of a Ruby tool which allows you to write simple tests to validate that a server is correctly configured.
the
With this
a top-level
https://debian-administration.org/article/703/A_brief_introduction_to_server-testing_with_serverspec
- Automated server testing with Serverspec, output for Logstash, results in Kibana
Configure the VMs with some config management tool (Puppet, Chef, etc)
Perform functional testing of VMs with
Output logs that
Visualise output in Kibana
Install and set up a
$ gem install
$
$ cd /opt/
$
This will have created you a basic directory structure with some files to get you started. Right now we have:
$
spec/
www
http://annaken.github.io/automated-testing-serverspec-output-logstash-results-kibana
- RAKE – Ruby Make
Rake is a Make-like program implemented in Ruby.
https://ruby.github.io/rake/
- Getting Started with Rake
Rake is an Embedded Domain Specific Language (EDSL) because, beyond the walls of Ruby, it has no existence. The term EDSL suggests that Rake is a domain-specific language that
Rake extends Ruby, so you can use all the features and extensions that come with Ruby.
You can take advantage of Rake by using it to automate some tasks that have been continually challenging you.
https://www.sitepoint.com/rake-automate-things/
- Using Rake to run ServerSpec tests
If you don't like kitchen, or your team is using Rake, you may
The directory layout is simpler than Kitchen, but requires more configuration as you need to create the following files;
Provisioning script for the Vagrant VMs
http://steveshilling.blogspot.com/2017/05/puppet-rake-serverspec-testing.html
You may want to run maintenance tasks, periodic calculations, or reporting in your production environment, while in development you may want to trigger your full test suite to run.
The rake gem is Ruby’s most widely accepted solution for performing these types of tasks.
Rake is a ‘ruby build
http://tutorials.jumpstartlab.com/topics/systems/automation.html
- This log can now be collected by Logstash, indexed by Elasticsearch, and visualised with Kibana.
Extending
I needed a resource provider that could check
Our
Without custom resource types, this is not possible, as you sometimes cannot expect
http://arlimus.github.io/articles/custom.resource.types.in.serverspec/
- Resource Types
In these examples, I'm using should syntax instead of expect syntax because I think should syntax is more readable than expect syntax and I like it.
Using expect
http://burtlo.github.io/serverspec.github.io/resource_types.html
- Writing Sensu Plugin Tests with Test-Kitchen and Serverspec
My simple heuristic is unit tests for libraries, integration tests for plugins.
Install Dependencies
For plugins we have standardized our integration testing around the following tools:
Test-kitchen: Provides a framework for developing and testing infrastructure code and software on isolated platforms.
Kitchen-docker: Docker driver for test-kitchen to allow use in a more lightweight fashion than traditional virtualization such as vagrant +
Setting Up Your Framework
What platforms do I want to test? For example
What versions of languages do I want to test? For example ruby2.1, 2.2, 2.3.0, 2.4.0
What driver will you use for test-kitchen?
For
kitchen-vagrant (
kitchen-
kitchen-ec2 (convenient but costs money).
https://dzone.com/articles/writing-sensu-plugin-tests-with-test-kitchen-and-s
- Integration Testing Infrastructure As Code With Chef, Puppet, And KitchenCI
Integration Testing
Basically, we want to automate the manual approach we used to verify if
One very popular approach is
RSpec is a testing tool for the Ruby programming language. Born under the banner of Behaviour-Driven Development, it is designed to make Test-Driven Development a productive and enjoyable experience
Rspec is a Domain Specific Language for Testing. And there is an even better matching candidate: Serverspec
With serverspec, you can write RSpec tests for checking your servers are configured correctly.
Serverspec supports a lot of resource types out of the box. Have a look at Resource Types.
This is agnostic to the method we provisioned our server! Manual, Chef, Puppet, Saltstack, Ansible, … you name it.
To be able to support multiple test suites, let's organize them in directories and use a Rakefile to choose which suite to run.
Converge The Nodes
Now it's time to provide some infrastructure-as-code to be able to converge any node to our specification
You can find this in the repo tests-kitchen-example
https://github.com/ehaselwanter/tests-kitchen-example
The Puppet Implementation
https://forge.puppet.com/puppetlabs/ntp
The Chef Implementation
https://supermarket.chef.io/cookbooks/ntp
Don’t Repeat Yourself In Integration Testing
Now we are able to converge our node with Chef or Puppet, but we still have to run every step manually. It’s time to bring everything together. Have Puppet as well as Chef converge our node and verify it automatically.
Test-Kitchen must be made aware that we already have our tests somewhere, and that we want to use them in our Puppet as well as Chef integration test scenario.
http://ehaselwanter.com/en/blog/2014/06/03/integration-testing-infrastructure-as-code-with-chef-puppet-and-kitchenci/
- Test Kitchen and Jenkins
The most recent thing I’ve done is set up a Jenkins build server to run test-kitchen on cookbooks.
The cookbook, kitchen-jenkins, is available on the Chef Community site
https://supermarket.chef.io/cookbooks/kitchen-jenkins
http://jtimberman.housepub.org/blog/2013/05/08/test-kitchen-and-jenkins/
- We started out by running kitchen tests locally, on our development machines, agreeing to make sure the tests passed every time we made changes to our cookbooks.
we had decided to build our test suite on a very interesting tool called Leibniz,
This is basically a glue layer between cucumber and test kitchen, and it enabled us to develop our infrastructure using the Behavior Driven Development approach that we are growingly familiar with.
a Jenkins build that automatically runs all of our infrastructure tests and is mainly based on the following tools:
Test Kitchen, automating the creation of test machines for different platforms
Vagrant, used as a driver for Test Kitchen, is in charge of actually instantiating and managing the machine’s state
Chef, used to provision the test machine bringing it into the desired state, so it can be tested as necessary
Libvirt, the virtualization solution that we adopted for the test machines
how to setup a Jenkins build to run Kitchen tests using Vagrant and libvirt.
In our setup we used two separate machines: one is a VM running Jenkins and one is a host machine in charge of hosting the test machines.
Install Vagrant on the host machine
Vagrant plugins
In order to use libvirt as virtualization solution for the test VMs, a few Vagrant plugins are necessary
vagrant-libvirt adds a Libvirt provider to Vagrant
vagrant-mutate: Given the scarce availability of Vagrant boxes for Libvirt, this plugin allows you to adapt boxes originally prepared for other providers (e.g. Virtualbox) to Libvirt
Ruby environment
This is an optional step, but it is highly recommended as it isolates the ruby installation used by this build from the system ruby and simplifies maintenance as well as troubleshooting.
Install the rbenv Jenkins plugin
It can be used to instruct the Jenkins build to use a specific rbenv instead of the system’s one. This plugin can be easily installed using Jenkins’ plugin management interface.
Configure the Jenkins build
add a build step of type ‘Execute shell’ to install the ruby dependencies:
cd /path/to/cookbook_s_Gemfile;
bundle install;
Prepare Vagrant boxes
you can download a Debian 8.1 box originally prepared for virtualbox with the following command
vagrant box add ospcode-debian-8.1 box_url
where box_url should be updated to point to a valid box url (the boxes normally used by Test Kitchen can be found here)
https://github.com/chef/bento
it can be adapted for Libvirt like this
vagrant mutate ospcode-debian-8.1 libvirt
Configure Kitchen to use Libvirt
By default, Test Kitchen will try to use virtualbox as provider and will bail out if it does not find it
The actual tests
we started out test suite using Leibniz.
we eventually decided to switch to something else. Our choice was first BATs tests and then Serverspec.
Serverspec is, as of today, our framework of choice for testing our infrastructure with its expressive and comprehensive set of resource types.
Troubleshooting
two environment variables can be used that instruct respectively test kitchen and Vagrant to be more verbose about their output:
export KITCHEN_LOG='debug'
export VAGRANT_LOG='debug'
we will have to find a way to instruct kitchen to, in turn, instruct serverspec to produce JUnit-style test reports that Jenkins can easily parse.
An additional improvement can take advantage of the possibility of creating custom packer boxes that have basic common software and configuration already pre-installed.
This can noticeably speed up the time to prepare the test machines during our builds.
Furthermore, a possible performance bump can be obtained by caching as much as possible of the resources that each test machine downloads every single time the tests run, like software update packages, gems and so on.
the vagrant-cachier plugin for Vagrant looks like the perfect tool for the job.
http://www.agilosoftware.com/blog/configuring-test-kitchen-on-jenkins/
- Getting Started Writing Chef Cookbooks the Berkshelf Way, Part 3
Test Kitchen is built on top of vagrant and supplements the Vagrantfile file you have been using so far in this series to do local automated testing.
Iteration #13 - Install Test Kitchen
Edit myface/Gemfile and add the following lines to load the Test Kitchen gems
gem 'test-kitchen'
gem 'kitchen-vagrant'
After you have updated the Gemfile run bundle install to download the test-kitchen gem and all its dependencies
Iteration #14 - Create a Kitchen YAML file
In order to use Test Kitchen on a cookbook, first you need to add a few more dependencies and create a template Kitchen YAML file. Test Kitchen makes this easy by providing the kitchen init command to perform all these initialization steps automatically
$ kitchen init
create .kitchen.yml
append Thorfile
create test/integration/default
append .gitignore
append .gitignore
append Gemfile
append Gemfile
You must run 'bundle install' to fetch any new gems.
Since kitchen init modified your Gemfile, you need to re-run bundle install (as suggested above) to pick up the new gem dependencies:
Most importantly, this new bundle install pass installed the kitchen-vagrant vagrant driver for Test Kitchen.
Everything in the YAML file should be straightforward to understand, except perhaps the attributes item in the suites stanza. These values came from the Vagrantfile we used in the previous installments of this series
For example, you can assign a host-only network IP so you can look at the MyFace website with a browser on your host. Add the following network: block to a platform's driver_config:
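The network block itself is elided here; based on kitchen-vagrant's documented syntax it would look something like the following, where the IP is an example:
driver_config:
  network:
    - ["private_network", {ip: "192.168.33.33"}]   # host-only IP for the test VM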
Testing Iteration #14 - Provision with Test Kitchen
The Test Kitchen equivalent of the vagrant up command is kitchen converge.
Try running the kitchen converge command now to verify that your .kitchen.yml file is valid. When you run kitchen converge it will spin up a CentOS 6.5 vagrant test node instance and use Chef Solo to provision the MyFace cookbook on the test node:
Iteration #16 - Writing your first Serverspec test
It’s helpful to know that Test Kitchen was designed as a framework for post-convergence system testing.
You are supposed to set up a bunch of test instances, perform a Chef run to apply your cookbook's changes to them, and then when this process is complete your tests can inspect the state of each test instance after the Chef run is finished.
http://misheska.com/blog/2013/08/06/getting-started-writing-chef-cookbooks-the-berkshelf-way-part3/
- A Test Kitchen Driver for Vagrant.
This driver works by generating a single Vagrantfile for each instance in a sandboxed directory. Since the Vagrantfile is written out on disk, Vagrant needs absolutely no knowledge of Test Kitchen. So no Vagrant plugins are required.
https://github.com/opscode/kitchen-vagrant/blob/master/README.md
- Docker Driver (kitchen-docker)
Chef Training Environment Setup
you’ll need to spin up a virtual machine with Docker installed in order to play around with a container environment.
We’ve created a Chef training environment that has Docker and the Chef Development Kit used in this book preinstalled on a Linux virtual machine.
It’s also a handy environment for playing around with containers using Test Kitchen.
make sure you install Vagrant and VirtualBox or Vagrant and VMware.
Create a directory for the Chef training environment project called chef and make it the current directory.
mkdir chef
cd chef
Add Test Kitchen support to the project using the default kitchen-vagrant driver by running kitchen init
kitchen init --create-gemfile
Then run bundle install to install the necessary gems for the Test Kitchen driver.
bundle install
Run kitchen create to spin up the image:
Then run kitchen login to use Docker
You will be running the Test Kitchen Docker driver inside this virtual machine
kitchen-docker Setup
Run the following kitchen init command to add Test Kitchen support to your project using the kitchen-docker driver:
$ kitchen init --driver=kitchen-docker --create-gemfile
Physical Machine Drivers
Until Test Kitchen supports chef-metal, the only way to use Test Kitchen with physical machines currently (other than your local host) is to use the kitchen-ssh driver. This is actually a generic way to integrate any kind of machine with Test Kitchen, not just physical machines. As long as the machine accepts ssh connections, it will work.
http://misheska.com/blog/2014/09/21/survey-of-test-kitchen-providers/#physical-machine-drivers
- InSpec is a framework for testing and auditing your applications and infrastructure. It can be utilized for validating a test-kitchen instance via the kitchen-inspec plugin.
https://kitchen.ci/docs/verifiers/inspec/
- InSpec Tutorial: Day 1 - Hello World
I want to start a little series on InSpec to gain a fuller understanding, appreciation for, and greater flexibility with Compliance.
It’s possible that you’re part of a company, perhaps without a dedicated security team, that uses Chef Compliance from within Chef Automate. And it’s possible that you’re totally content to run scans off of the premade CIS profiles and call it a day. That’s a huge selling point of Compliance.
In reality, however, the built-in Compliance profiles will get you to 80% of what you need, and then you’ll want to add or modify a bunch of other specific tests from the profiles to meet the other 20% of your needs.
If you already have the updated versions of Homebrew, Ruby, and InSpec, then skip ahead
It’s preferable to use the InSpec that comes with the ChefDK, but if you’re not using ChefDK otherwise, feel free to use the standalone version of InSpec.
http://www.anniehedgie.com/inspec-basics-1
- InSpec: Inspect Your Infrastructure
InSpec is an open-source testing framework for infrastructure with a human- and machine-readable language for specifying compliance, security and policy requirements
https://github.com/inspec/inspec#installation
- Compliance as Code: An Introduction to InSpec
Another aspect of its accessibility is that, while InSpec is owned by Chef, it’s completely platform agnostic,
Now, imagine that you can put the link to a stored InSpec profile where it says test.rb. If you have InSpec installed on your machine, then you can run either of these commands right now using a profile stored on the Chef Supermarket to verify that all updates have been installed on a Windows machine.
# run test stored on Github locally
inspec exec https://github.com/dev-sec/windows-patch-baseline
# run test stored on Github on remote windows host on WinRM
inspec exec https://github.com/dev-sec/windows-patch-baseline -t winrm://Administrator@windowshost --password 'your-password'
Now, imagine putting those commands in a CI/CD pipeline and using them across all of your environments.
https://www.10thmagnitude.com/compliance-code-introduction-inspec/
- kitchen-inspec
Use InSpec as a Kitchen verifier with kitchen-inspec.
Add the InSpec verifier to the .kitchen.yml file:
https://www.inspec.io/docs/reference/plugin_kitchen_inspec/
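The stanza the docs refer to is short; per the kitchen-inspec README it is:
verifier:
  name: inspec    # InSpec tests are picked up from test/integration/<suite>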
- Try InSpec
InSpec is an open-source testing framework by Chef that enables you to specify compliance, security, and other policy requirements.
InSpec is code. Built on the Ruby programming language
InSpec can run on Windows and many Linux distributions. Although you can use InSpec to scan almost any system
Detect and correct
You can think of meeting your compliance and security goals as a two-phase process. We often refer to this process as detect and correct.
The first phase, detect, is knowing where your systems are potentially out of compliance or have potential security vulnerabilities.
The second phase, correct, involves remediating the compliance failures you've identified in the first phase.
After the correct process completes, you can run the detect process a second time to verify that each of your systems meets your policy requirements.
Although remediation can happen manually, you can use Chef or some other continuous automation framework to correct compliance failures for you.
This module focuses on the detect phase.
With InSpec, you can generate reports that prove your systems are in compliance in much less time.
1. Install Docker and Docker Compose
The installation is a set of containers orchestrated with Docker Compose, a tool for defining and running multi-container Docker applications.
The setup includes two systems – one that acts as your workstation and a second that acts as the target
3. Detect and correct manually
Let's say you require auditd, a user-space tool for security auditing, to be installed on each of your systems.
You'll first verify that the package is not installed on your workstation container and then manually install the package
Phase 1: detect
the first phase of meeting your compliance goals is to detect potential issues. You can see that auditd is not installed.
Phase 2: correct
To correct this issue, run the following apt-get install command to install the auditd package.
Although this is an elementary example, you may notice some potential problems with this manual approach
It's not portable.
dpkg and apt-get are specific to Ubuntu and other Debian-based systems.
auditd is called audit on other Linux distributions.
It's not documented.
You need a way to document the requirements and processes in a way others on your team can use.
It's not verifiable.
You need a way to collect and report the results to your compliance officer consistently.
4. Detect using InSpec
An InSpec test is called a control. Controls are grouped into profiles. Shortly, you'll download a basic InSpec profile we've created for you.
4.1. Explore the InSpec CLI
you typically run InSpec remotely on your targets, or the systems you want to monitor.
InSpec works over the SSH protocol when scanning Linux systems, and the WinRM protocol when scanning Windows systems.
Your auditd profile checks whether the auditd package is installed. But there are also other aspects of this package you might want to check. For example, you might want to verify that its configuration:
specifies the correct location of the log file.
incrementally writes audit log data to disk.
writes a warning to the syslog if disk space becomes low.
suspends the daemon if the disk becomes full.
You can express these requirements in your profile.
5.1. Discover community profiles
You can browse InSpec profiles on Chef Supermarket, supermarket.chef.io/tools. You can also see what's available from the InSpec CLI.
you know that Chef Supermarket is a place for the community to share Chef cookbooks.
You can also use and contribute InSpec profiles through Chef Supermarket.
You can run inspec supermarket info to get more info about a profile. As you explore the available profiles, you might discover the dev-sec/linux-baseline profile.
If you browse the source code for the dev-sec/linux-baseline profile, you would see that this profile provides many other commonly accepted hardening and security tests for Linux.
Automating correction
In practice, you might use Chef or other continuous automation software to correct issues. For example, here's Chef code that installs the auditd package if the package is not installed
Chef Automate also comes with a number of compliance profiles, including profiles that implement many DevSec and CIS recommendations.
You detected issues both by running InSpec locally and by running InSpec remotely on a target system.
You downloaded a basic InSpec profile and used a community profile from Chef Supermarket.
You limited your InSpec runs to certain controls and formatted the output as JSON so you can generate reports.
You packaged your profile to make it easier to distribute.
https://learn.chef.io/modules/try-inspec#/
- A Test-Kitchen provisioner takes care of configuring the compute instance provided by the driver. This is most commonly a configuration management framework like Chef or the Shell provisioner, both of which are included in test-kitchen by default.
https://kitchen.ci/docs/provisioners/
- A Test-Kitchen driver is what supports configuring the compute instance that is used for isolated testing. This is typically a local hypervisor, hypervisor abstraction layer (Vagrant), or cloud service (EC2).
https://kitchen.ci/docs/drivers/
- A Kitchen Instance is a combination of a Suite and a Platform as laid out in your .kitchen.yml file.
Kitchen has auto-named our only instance by combining the Suite name ("default") and the Platform name ("ubuntu-16.04") into a form that is safe for DNS and hostname records, namely "default-ubuntu-1604"
https://kitchen.ci/docs/getting-started/instances/
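In .kitchen.yml terms, that combination is the cross product of two lists; with one entry each, Kitchen derives the single instance "default-ubuntu-1604":
suites:
  - name: default           # suite name
platforms:
  - name: ubuntu-16.04      # platform name
# one suite x one platform -> one instance, "default-ubuntu-1604"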
- Kitchen is modular so that one may use a variety of different drivers (Vagrant, EC2, Docker), provisioners (Chef, Ansible, Puppet), or verifiers (InSpec, Serverspec, BATS) but for the purposes of the guide we’re focusing on the default “happy path” of Vagrant with VirtualBox, Chef, and InSpec.
https://kitchen.ci/docs/getting-started/installing/
- Bats: Bash Automated Testing System
https://github.com/sstephenson/bats
- Extending the Ansible Test Kitchen tests with BATS tests
Sometimes this isn’t enough to validate your setup, BATS is the answer.
we created the directory structure test/integration/default, and in this directory we created a directory named serverspec.
In this “default” directory we also create a directory named bats
https://werner-dijkerman.nl/2016/01/15/extending-the-ansible-test-kitchen-tests-with-bats-tests/
- One of the advantages of kitchen-inspec is that the InSpec tests are executed from the host over the transport (SSH or WinRM) to the instance. No tests need to be uploaded to the instance itself.
https://kitchen.ci/docs/getting-started/running-verify/
- Each instance has a simple state machine that tracks where it is in its lifecycle. Given its current state and the desired state, the instance is smart enough to run all actions in between current and desired
https://kitchen.ci/docs/getting-started/adding-platform/
- Install Chef Development Kit
Create A Repo
Test Driven Workflow
https://devopscube.com/chef-cookbook-testing-tutorial/
- Test kitchen is Chef's integration testing framework. It enables writing tests, which run after a VM is instantiated and converged using the cookbook. The tests run on the VM and can verify that everything works as expected.
https://www.tutorialspoint.com/chef/chef_testing_cookbook_with_test_kitchen.htm
- First, Test-Kitchen is packaged as a RubyGem. You'll also need to install Git. To make VM provisioning easy, I'll be using Vagrant so we'll need that as well. And finally, use VirtualBox as a Test-Kitchen provider that will actually bring up the VMs. Once you've got each of these tools installed, you can proceed. Test-Kitchen can be run on Windows, Linux or MacOS
http://www.tomsitpro.com/articles/get-started-with-test-kitchen,1-3434.html
- Test Kitchen should be thought of as TDD for infrastructure
With the emergence of ‘infrastructure as code’, the responsibility for provisioning infrastructure is no longer the domain of a system administrator alone.
This is where Test Kitchen comes in – as the glue between provisioning tools e.g. Chef, Puppet or Ansible, the infrastructure being provisioned, e.g. AWS, Docker or VirtualBox and the tests to validate the setup is correct.
Historically, Test Kitchen is used to test single cookbooks or packages, but is easily adapted to test the group of cookbooks that make up your environment.
The concept behind Test Kitchen is that it allows you to provision (convergence testing) an environment on a given platform (or platforms) and then execute a suite of tests to verify the environment has been set up as expected.
This can be particularly useful if you want to verify a setup against different operating systems (OS) and/or OS package versions. This can even be set up as part of your Continuous Integration (CI) and/or delivery pipelines, and also feeds nicely into the concept of ‘immutable infrastructure’
The Provisioner section defines what you want to use to converge the environment. Chef Solo/Zero and shell provisioners are the easiest to get started with, but there are provisioners also available for Puppet and Ansible
the Test Suites section is where the actual value comes into play. This is where you define the tests to run against each platform
In more complex setups, it is possible to set the driver on a per-test or platform basis to allow you to run against different virtualisation platforms.
The easiest to get running is to write simple Bash script tests using Bats.
If you require more complex tests, then Serverspec might be a better fit since it has a far richer domain for writing platform-agnostic tests.
To make connecting to a provisioned environment easier, Test Kitchen provides a login command so you don’t have to worry about figuring out ports and SSH keys
After you have Test Kitchen in place, you could set up a tool such as Packer to generate your infrastructure VMs with confidence it will work. All this together would put you in a good position to implement a successful CD pipeline.
http://www.techinsight.io/review/devops-and-automation/testing-infrastructure-with-test-kitchen/
- ChefDK
First, install the ChefDK. This package includes Chef, Kitchen, Berkshelf, and a variety of useful tools for the Chef ecosystem.
Kitchen is modular so that one may use a variety of different drivers (Vagrant, EC2, Docker), provisioners (Chef, Ansible, Puppet), or verifiers (InSpec,
https://kitchen.ci/docs/getting-started/installing
- Integration Testing for Chef-Driven Infrastructure with Test Kitchen
To start writing Serverspec tests, you need to create a directory named integration inside the test directory, where all the integration tests live. This directory should contain a subdirectory for each testing framework we'll use, which means that you are able to use as many testing frameworks as you want on the same suite, without any collision.
We intend to use serverspec as the framework of choice, so let's create a corresponding directory structure for it:
The first thing we need is a Ruby helper script which loads Serverspec and sets the general configuration options, like the path used by the binary while executing tests.
With Serverspec, there is a much nicer way to check specific resources like commands.
Serverspec can also check the status of packages, which we can use in this case, since our example used packages provided by our distribution of choice.
In the RSpec manner, every test should include the standard describe / it structure for defining test cases
Because the instance is already converged in the step above, you can easily apply new changes to it and check test results.
Integration tests for servers managed by Chef
Test Kitchen as a test runner for virtual machines, backed by Vagrant and VirtualBox
The actual test will be written in Ruby using the Serverspec testing framework.
installing Vim from source
test two scenarios, one that includes the recipe for installing Vim from packages, and one that will install it from the source.
Test Kitchen is a test automatization tool distributed with ChefDK.
It manages virtual machines, internally called nodes
When running integration tests, you can and should use the same Chef configuration as the one you run on a real server - the same run list, recipes, roles and attributes.
Optionally, you can provide custom attributes used only in the test environment, like fake data for example.
A Node can be represented with any type of virtualization via Test Kitchen's plugins, called drivers.
https://semaphoreci.com/community/tutorials/integration-testing-for-chef-driven-infrastructure-with-test-kitchen
- InSpec is compliance as code
https://www.inspec.io/
- Why Use Packer?
Super fast infrastructure deployment. Packer images allow you to launch completely provisioned and configured machines in seconds, rather than several minutes or hours
Multi-provider portability. Because Packer creates identical images for multiple platforms, you can run production in AWS, staging/QA in a private cloud like OpenStack, and development in desktop virtualization solutions such as VMware or VirtualBox. Each environment is running an identical machine image, giving ultimate portability.
Improved stability. Packer installs and configures all the software for a machine at the time the image is built. If there are bugs in these scripts, they'll be caught early, rather than several minutes after a machine is launched.
Greater testability. After a machine image is built, that machine image can be quickly launched and smoke tested to verify that things appear to be working
https://www.packer.io/intro/why.html
- With this builder, you can repeatedly create Docker images without the use of a Dockerfile. You don't need to know the syntax or semantics of Dockerfiles.
- Testing Packer builds with Serverspec
Lately, I’ve been working on building base AMIs for our infrastructure using Packer, and verifying these images with Serverspec. In the opening stages my workflow looked like:
Build AMI with Packer
Launch instance based on AMI
Run Serverspec tests against an instance
This works fine, and could potentially be converted into a Jenkins pipeline
My preferred pipeline would look like:
Build AMI with Packer
As the final build step, run Serverspec
If the tests fail, abort the build
http://annaken.github.io/testing-packer-builds-with-serverspec
- Continuous Delivery
As part of this pipeline, the newly created images can then be launched and tested, verifying the infrastructure changes work. If the tests pass, you can be confident that the image will work when deployed. This brings a new level of stability and testability to infrastructure changes.
Dev/Prod Parity
Packer helps keep development, staging, and production as similar as possible. Packer can be used to generate images for multiple platforms at the same time. So if you use AWS for production and VMware (perhaps with Vagrant) for development, you can generate both an AMI and a VMware machine using Packer at the same time from the same template.
Appliance/Demo Creation
Since Packer creates consistent images for multiple platforms in parallel, it is perfect for creating appliances and disposable product demos. As your software changes, you can automatically create appliances with the software pre-installed
https://www.packer.io/intro/use-cases.html
- In the previous page of this guide, you created your first image with Packer. The image you just built, however, was basically just a repackaging of a previously existing base AMI. The real utility of Packer comes from being able to install and configure software into the images as well. This stage is also known as the provision step. Packer fully supports automated provisioning in order to install software onto the machines prior to turning them into images.
This way, the image we end up building actually contains Redis pre-installed.
https://www.packer.io/intro/getting-started/provision.html
- Parallel Builds
https://www.packer.io/intro/getting-started/parallel-builds.html
- Vagrant Boxes
Post-processors are a generally very useful concept. While the example on this getting-started page will be creating Vagrant images, post-processors have many interesting use cases. For example, you can write a post-processor to compress artifacts, upload them, test them, etc.
https://www.packer.io/intro/getting-started/vagrant.html
- Build, Test, and Automate Server Image Creation
Problem
Spin up an EC2 instance using a custom base image
Run an Ansible playbook to provision the instance
Do a manual sanity check for all the services running on the instance
building the server image is slow, and there are no checks in place to test the image before use
Additionally, there were no provisioning logs for debugging the server image, which made it difficult for our Operations Engineers to troubleshoot
Packer is a tool for creating server images for multiple platforms.
It is easy to use and automates the process of creating server images. It supports multiple provisioners, all built into Packer.
Challenges with Packer
as Packer uses the Ansible “local” provisioner, which requires the playbook to be run locally on the server being provisioned. Our playbooks always assume the servers are being provisioned remotely, which led to problems because some playbooks are shared across projects and require very specific path inclusions. These playbooks had to be rewritten in a way that can work with Packer.
While using the Ansible provisioner we found that the playbooks that were written using “roles” could be easily integrated with Packer. Some of our Ansible playbooks are shared across various projects
Provision Logs
During the provisioning process, the logs that are generated by Ansible are stored on the server image for debugging purposes.
Benefits of Using Packer
There are various benefits of using Packer in terms of performance, automation, and security:
Packer spins up an EC2 instance, creates temporary security groups for the instance, creates temporary keys, provisions the instance, creates an AMI, and terminates the instances – and it’s all completely automated
Packer uploads all the Ansible playbooks and associated variables to the remote server, and then runs the provisioner locally on that machine. It has a default staging directory (/tmp/packer-provisioner-ansible-local/) that it creates on the remote server, and this is the location where it stores all the playbooks, variables, and roles. Running the playbooks locally on the instance is much faster than running them remotely.
Packer implements parallelization of all the processes that it implements
With Packer we supply Amazon’s API keys locally. The temporary keys that are created when the instance is spun up are removed after the instance is provisioned, for increased security.
Testing Before Creating Server Images
What is ServerSpec?
ServerSpec offers RSpec tests for your provisioned server. RSpec is commonly used as a testing tool in Ruby programming; ServerSpec makes it easy to test your servers by executing a few commands locally.
Integrating ServerSpec with Packer
ServerSpec was integrated with Packer right after the provisioning was complete. This was done by using Packer’s “shell” and “file” provisioners.
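A plausible sketch of that integration, with hypothetical file names: the file provisioner uploads the spec directory, and a shell provisioner installs ServerSpec and runs the tests on the instance:

  "provisioners": [
    { "type": "file", "source": "spec", "destination": "/tmp/spec" },
    { "type": "shell", "inline": [
        "sudo gem install serverspec",
        "cd /tmp && rspec spec/*_spec.rb"
    ]}
  ]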
Automation Using Jenkins
Jenkins was used to automate the process of creating and testing images. A Jenkins job was parameterized to take inputs such as project name, username, Amazon’s API keys, test flags, etc., which allowed our engineers to build project-specific images rapidly without installing Packer and its CLI tools. Jenkins took care of the AMI tagging, CLI parameters for Packer, and notifications to the engineering team about the status of the job:
http://code.hootsuite.com/build-test-and-automate-server-image-creation/
- Vagrant
Packer
The typical workflow is for a developer to create the development environment in Vagrant; once it becomes stable, the production image can be built with Packer.
Since the provisioning part is baked into the image, the deployment of production images becomes much faster.
Terraform allows us to describe the whole data center as a configuration file, and it takes care of deploying the infrastructure on VMs, bare metal, or cloud. Once the configuration file is under source control, infrastructure can be treated as software.
it is provider agnostic and it can integrate with any provider.
https://sreeninet.wordpress.com/2016/02/06/hashicorp-atlas-workflow-with-vagrant-packer-and-terraform/
- Create and configure lightweight, reproducible, and portable development environments. https://www.vagrantup.com/
- Vagrant vs. Docker
Currently, Docker lacks support for certain operating systems (such as BSD). If your target deployment is one of these operating systems, Docker will not provide the same production parity as a tool like Vagrant. Vagrant will allow you to run a Windows development environment on Mac or Linux, as well.
https://www.vagrantup.com/intro/vs/docker.html
- Creating Vagrant Boxes with Packer
All boxes will be stored and served from Terraform Enterprise, keeping a history along the way.
https://www.terraform.io/docs/enterprise/packer/artifacts/creating-vagrant-boxes.html?_ga=2.89191771.1755382601.1512478494-690169036.1512478494
- Installing Vagrant and Virtual box on Ubuntu 14.04 LTS
Very often, a test environment is required for testing the latest release and new tools.
Also, it reduces the time spent in re-building your OS.
By default, Vagrant uses VirtualBox for managing the virtualization.
Vagrant acts as the central configuration for managing/deploying multiple reproducible virtual environments with the same configuration.
https://www.olindata.com/en/blog/2014/07/installing-vagrant-and-virtual-box-ubuntu-1404-lts
- Sharing a common development environment with everyone on your team is important.
You can use the same file and commands to build an image on AWS, DigitalOcean, or for VirtualBox and Vagrant.
You need VirtualBox and Packer installed.
Packer is even easier: just download the right zip for your system and unzip it into your PATH.
Packer uses builders, provisioners, and post-processors as the main configuration attributes.
A builder can, for example, be VirtualBox or AWS.
A provisioner can be used to run different scripts.
Post-processors can be run after the machine image is done.
For example, converting a VirtualBox image into a suitable image for Vagrant is done in a post-processor.
https://blog.codeship.com/packer-vagrant-tutorial/
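Putting the builder/provisioner/post-processor triad from the entry above together, a skeletal template might look like the following; the ISO URL, checksum, credentials, and script name are placeholders, and a real template would also need a boot_command and preseed/kickstart handling to automate the OS install:

  {
    "builders": [{
      "type": "virtualbox-iso",
      "guest_os_type": "Ubuntu_64",
      "iso_url": "http://releases.ubuntu.com/xenial/ubuntu-16.04-server-amd64.iso",
      "iso_checksum": "XXXX",
      "iso_checksum_type": "sha256",
      "ssh_username": "vagrant",
      "ssh_password": "vagrant",
      "shutdown_command": "sudo shutdown -P now"
    }],
    "provisioners": [{ "type": "shell", "script": "setup.sh" }],
    "post-processors": [{
      "type": "vagrant",
      "output": "ubuntu-{{.Provider}}.box"
    }]
  }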
- A Vagrant “.box” is simply a tarred gzip package containing, at a minimum, the VM’s metafiles, disk image(s), and a metadata.json, which identifies the provider the box has been created for.
It realizes the vision through virtualization technologies like VirtualBox.
The biggest pain point Vagrant solves is how to boot up a consistent development environment in every host.
The whole development environment is defined in a Vagrantfile, which is in fact Ruby code. That means the environment can be versioned with a version control system like Git and thus shared easily.
The Vagrant command line tool creates a Vagrantfile whose contents indicate that Vagrant should boot up a virtual machine based on the OS image called centos/7.
Vagrant tries to locate the OS image in the local image repo. If not found, it attempts to download the image from HashiCorp’s box search service. The path of local image repo is $HOME/.vagrant.d/boxes
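A minimal sketch of such a Vagrantfile (the actual generated file consists mostly of comments):

  Vagrant.configure("2") do |config|
    # Boot a VM from the centos/7 box; Vagrant downloads the box
    # from HashiCorp's catalog if it is not in the local repo.
    config.vm.box = "centos/7"
  end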
Create a virtual machine based on the OS image
A typical Vagrant development environment includes two parts:
A virtual machine
Toolchains like GCC, lib dependencies and Java running in the virtual machine
Provider means the type of underlying virtualization technology.
Vagrant is not a hypervisor itself. Instead, it is just an abstraction for managing virtual machines on different hypervisors.
Box packages OS images required by the underlying hypervisor to run the virtual machine.
Typically, each hypervisor has its own OS image format.
Use Packer, another tool from HashiCorp, to build boxes from ISO images.
Chef provides a whole bunch of Packer templates on GitHub to make life much easier.
Or create a box from an existing virtual machine running in the corresponding hypervisor.
Provisioner
It is impractical to package all the development tools in a box, so Vagrant provides a facility to customize the virtual machine.
There are many DevOps tools available to automate configuration management, such as Ansible, Puppet, SaltStack, and Chef.
Vagrant provides an abstraction over these tools so users can choose whichever technique they want. The tools are called provisioners.
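As a small illustration of that abstraction, switching provisioners is essentially a one-line change in a Vagrantfile; the inline command and the playbook name below are hypothetical:

  Vagrant.configure("2") do |config|
    config.vm.box = "centos/7"
    # Shell provisioner: run a command inside the guest.
    config.vm.provision "shell", inline: "yum install -y gcc"
    # Or hand the same job to Ansible instead:
    # config.vm.provision "ansible", playbook: "playbook.yml"
  end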
QEMU provider: VirtualBox is great and most of the time it is adequate. But it does not support nested virtualization, which is the ability to run virtual machines within another virtual machine. Most hypervisors, such as VMware Fusion, Hyper-V, and QEMU, support this feature, but VirtualBox does not. Nested virtualization is very useful when creating a demo cluster. (There is also a company named Ravello, since acquired by Oracle, that focuses on nested virtualization.) Hyper-V is not cross-platform, and VMware Fusion/Workstation is a paid product, as is the Vagrant provider for VMware Fusion/Workstation. So QEMU is another provider worth playing with.
Docker provider: As one of the hottest techniques, Docker is worth noting. Compared with hypervisors like VirtualBox, Docker is far more lightweight. It is not a hypervisor, but it does provide some abstraction of a virtual machine. Its overhead is very low, which makes it easy to boot dozens of Docker containers on a laptop. Of course, it is not as versatile as a hypervisor, but if it meets your needs, it is a good choice.
https://blog.jeffli.me/blog/2016/12/06/a-beginners-guide-for-vagrant/
- Figure-1 depicts a typical home networking topology
The public network gives the VM the same network visibility as the Vagrant host.
The word public does not mean that the IP addresses have to be publicly routable IP addresses.
The doc also states that bridged networking would be a more accurate name:
in fact, public networking was called bridged networking in the early days.
In private networking, a virtualized subnet that is invisible from outside the Vagrant host is created by the underlying provider.
Forwarded ports expose the accessibility of virtual machines to the world outside the Vagrant host. In Figure-4, assume that an Nginx HTTP server runs in the virtual machine, which uses private networking. Thus it is impossible to access the service from outside of the host. But through a forwarded port, the service can be exposed as a service of the Vagrant host, which makes it public.
Public networking attaches virtual machines to the same subnet with the Vagrant host
Private networking hides virtual machines from the outside of the Vagrant host
Forwarded ports create tunnels from virtual machines to outside world
https://blog.jeffli.me/blog/2017/04/22/vagrant-networking-explained/
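A combined Vagrantfile sketch of the three networking options summarized above; the IP address and ports are illustrative:

  Vagrant.configure("2") do |config|
    config.vm.box = "centos/7"
    # Bridged: the VM joins the same subnet as the Vagrant host.
    config.vm.network "public_network"
    # Host-only subnet, invisible from outside the host.
    config.vm.network "private_network", ip: "192.168.50.10"
    # Tunnel: expose guest port 80 as host port 8080.
    config.vm.network "forwarded_port", guest: 80, host: 8080
  end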
- Available Boxes
$ vagrant box add {title} {url}
$ vagrant init {title}
$ vagrant up
http://www.vagrantbox.es/
- There is a special category of boxes known as "base boxes." These boxes contain the bare minimum required for Vagrant to function and are generally not made by repackaging an existing Vagrant environment (hence the "base" in "base box").
https://www.vagrantup.com/docs/boxes/base.html
- This is useful since boxes typically are not built perfectly for your use case. Of course, if you want to just use vagrant ssh and install the software by hand, that works. But by using the provisioning systems built-in to Vagrant, it automates the process so that it is repeatable. Most importantly, it requires no human interaction, so you can vagrant destroy and vagrant up and have a fully ready-to-go work environment with a single command. Powerful.
- Suspending the virtual machine by calling vagrant suspend will save the current running state of the machine and stop it.
When you are ready to begin working again, just run vagrant up again.
Halting the virtual machine by calling vagrant halt will gracefully shut down the guest operating system and power down the guest machine. You can use vagrant up when you are ready to boot it again.
Destroying the virtual machine by calling vagrant destroy will remove all traces of the guest machine from your system. It'll stop the guest machine, power it down, and remove all of the guest hard disks. Again, when you are ready to work again, just issue a vagrant up.
https://www.vagrantup.com/intro/getting-started/teardown.html
- While Vagrant ships out of the box with support for VirtualBox, Hyper-V, and Docker, Vagrant has the ability to manage other types of machines as well. This is done by using other providers with Vagrant.
Once you have a provider installed, you do not need to make any modifications to your Vagrantfile; just vagrant up with the proper provider and Vagrant will do the rest.
https://www.vagrantup.com/intro/getting-started/providers.html
- Builders
https://www.packer.io/docs/builders/index.html
- VirtualBox Builder
The VirtualBox Packer builder is able to create VirtualBox virtual machines and export them in the OVA or OVF format.
The builder builds a virtual machine by creating a new virtual machine from scratch, booting it, installing an OS, provisioning software within the OS, then shutting it down. The result of the VirtualBox builder is a directory containing all the files necessary to run the virtual machine portably.
https://www.packer.io/docs/builders/virtualbox.html
- In NAT mode, the guest network interface is assigned to the IPv4 range 10.0.x.0/24 by default where x corresponds to the instance of the NAT interface +2. So x is 2 when there is only one NAT instance active. In that case the guest is assigned to the address 10.0.2.15, the gateway is set to 10.0.2.2 and the name server can be found at 10.0.2.3.
If, for any reason, the NAT network needs to be changed, this can be achieved with the following command:
VBoxManage modifyvm "VM name" --natnet1 "192.168/16"
This command would reserve the network addresses from 192.168.0.0 to 192.168.254.254 for the first NAT network instance of "VM name". The guest IP would be assigned to 192.168.0.15 and the default gateway could be found at 192.168.0.2.
https://www.virtualbox.org/manual/ch09.html#changenat
- Note that having the ansible.verbose option enabled will instruct Vagrant to show the full ansible-playbook command used behind the scenes.
To re-run a playbook on an existing VM:
$ vagrant provision
http://docs.ansible.com/ansible/latest/guide_vagrant.html
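A minimal Vagrantfile fragment for that setup, with the playbook name assumed:

  Vagrant.configure("2") do |config|
    config.vm.box = "centos/7"
    config.vm.provision "ansible" do |ansible|
      ansible.playbook = "playbook.yml"
      # Print the full ansible-playbook command Vagrant runs.
      ansible.verbose = "v"
    end
  end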
- Multi-Machine
Vagrant is able to define and control multiple guest machines per Vagrantfile.
This is known as a "multi-machine" environment
These machines are generally able to work together or are somehow associated with each other. Here are some use-cases people are using multi-machine environments for today:
Accurately modeling a multi-server production topology, such as separating a web and database server.
Modeling a distributed system and how they interact with each other.
Testing an interface, such as an API to a service component.
Disaster-case testing: machines dying, network partitions, slow networks, inconsistent worldviews, etc.
Commands that only make sense to target a single machine, such as vagrant ssh, now require the name of the machine to control. Using the example above, you would say vagrant ssh web or vagrant ssh db.
In order to facilitate communication between machines in a multi-machine setup, the various networking options should be used. In particular, the private network can be used to make a private network between multiple machines and the host.
https://www.vagrantup.com/docs/multi-machine/
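A two-machine sketch along the lines of the web/db example above, using a private network so the machines can reach each other; the IPs are illustrative:

  Vagrant.configure("2") do |config|
    config.vm.define "web" do |web|
      web.vm.box = "centos/7"
      web.vm.network "private_network", ip: "192.168.50.10"
    end
    config.vm.define "db" do |db|
      db.vm.box = "centos/7"
      db.vm.network "private_network", ip: "192.168.50.11"
    end
  end

With this file, vagrant ssh web and vagrant ssh db each target one machine.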
- The Docker provider does not require a config.vm.box setting. Since the "base image" for a Docker container is pulled from the Docker Index or built from a Dockerfile, the box does not add much value, and is optional for this provider.
If the system cannot run Linux containers natively, Vagrant automatically spins up a "host VM" to run Docker. This allows your Docker-based Vagrant environments to remain portable, without inconsistencies depending on the platform they are running on.
Vagrant will spin up a single instance of a host VM and run multiple containers on this one VM.
By default, the host VM Vagrant spins up is backed by boot2docker, because it launches quickly and uses little resources.
But the host VM can be customized to point to any Vagrantfile.
This allows the host VM to more closely match production by running a VM running Ubuntu, RHEL, etc. It can run any operating system supported by Vagrant.
https://www.vagrantup.com/docs/docker/basics.html
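A sketch of a Docker-backed Vagrantfile; the image name and port mapping are assumptions:

  Vagrant.configure("2") do |config|
    config.vm.provider "docker" do |d|
      # The container image replaces config.vm.box here.
      d.image = "nginx"
      d.ports = ["8080:80"]
      # Optionally back the containers with a custom host VM:
      # d.vagrant_vagrantfile = "host_vm/Vagrantfile"
    end
  end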
- Docker requires a Linux host as Docker itself leverages the LXC (Linux Containers) mechanism in Linux. This means that in order to work with Docker on non-Linux systems – Windows, Mac OS X, Solaris – we first need to set up a Virtual Machine running Linux.
Vagrant has embraced Docker as a provider, just as it supports providers for VirtualBox and VMWare. This means that a Vagrant configuration file can describe a Docker container just as it can describe the configuration of a VirtualBox VM.
One very nice additional touch is that Vagrant is aware of the fact that Docker containers cannot, at present, run natively on Windows or Mac OS X.
When Vagrant is asked to provision a Docker container on one of these operating systems, it can either automatically engage boot2docker as the vehicle in which to create and run the Docker container, or provision a Linux-based VM image that it then enables for Docker and creates the Docker container in.
https://technology.amis.nl/2015/08/22/first-steps-with-provisioning-of-docker-containers-using-vagrant-as-provider/
- This page only documents the specific parts of the ansible_local provisioner. General Ansible concepts like Playbook or Inventory are briefly explained in the introduction to Ansible and Vagrant.
The Ansible Local provisioner requires that all the Ansible Playbook files are available on the guest machine, at the location referred by the provisioning_path option. Usually, these files are initially present on the host machine (as part of your Vagrant project), and it is quite easy to share them with a Vagrant Synced Folder.
https://www.vagrantup.com/docs/provisioning/ansible_local.html
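A small ansible_local sketch under those assumptions, with the playbook shipped in the synced project folder (the default synced folder is /vagrant):

  Vagrant.configure("2") do |config|
    config.vm.box = "centos/7"
    config.vm.provision "ansible_local" do |ansible|
      # ansible-playbook runs inside the guest, so Ansible
      # need not be installed on the host machine.
      ansible.playbook = "playbook.yml"
      ansible.provisioning_path = "/vagrant"
    end
  end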
- Boxcutter
Community-driven templates and tools for creating cloud, virtual machines, containers and metal operating system environments
https://github.com/boxcutter
- Bento is a project that encapsulates Packer templates for building Vagrant base boxes. A subset of templates are built and published to the bento org on Vagrant Cloud. The boxes also serve as default boxes for kitchen-vagrant.
https://github.com/chef/bento
- Using kickstart, a system administrator can create a single file containing the answers to all the questions that would normally be asked during a typical RHEL Linux installation.
Upload this file to a web server as ks.cfg. You can use an NFS server too.
KVM: Install CentOS / RHEL Using Kickstart File (Automated Installation)
https://www.cyberciti.biz/faq/kvm-install-centos-redhat-using-kickstart-ks-cfg/
- How to automate the installation of CentOS7 via a Kickstart file hosted on a web server accessible over the Internet.
[OPTIONAL] Copy the resulting installation file located under /root/anaconda-ks.cfg
Open your kickstart file and begin writing your desired configuration.
Save it when it suits your needs and upload it to any HTTP server within your reach (you can alternatively use OneDrive, Dropbox, or even GitHub's Gist).
Alternatively, install CentOS7 in an unattended manner and then modify the resulting Kickstart file named ‘anaconda-ks.cfg’, located under the /root directory.
- 26.2. How Do You Perform a Kickstart Installation?
To use Kickstart, you must:
Create a Kickstart file.
Make the Kickstart file available on removable media, a hard drive or a network location.
Create boot media, which will be used to begin the installation.
Make the installation source available.
Start the Kickstart installation.
The recommended approach to creating Kickstart files is to perform a manual installation on one system first.
After the installation completes, all choices made during the installation are saved into a file named anaconda-ks.cfg, located in the /root/ directory on the installed system.
You can then copy this file, make any changes you need, and use the resulting configuration file in further installations.
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/installation_guide/sect-kickstart-howto
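For orientation only, a stripped-down ks.cfg might contain directives like these; the password hash and partitioning choices are placeholders, and a real anaconda-ks.cfg from a manual install is the safer starting point:

  # Illustrative minimal Kickstart file
  install
  lang en_US.UTF-8
  keyboard us
  timezone UTC
  rootpw --iscrypted $1$XXXXXXXX$XXXXXXXXXXXXXXXXXXXXXX
  network --bootproto=dhcp
  bootloader --location=mbr
  clearpart --all --initlabel
  autopart
  reboot
  %packages
  @core
  %end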
- You can later change the password in the ks.cfg file manually. If you chose to encrypt your password, the supported hash in the Kickstart configuration is MD5. Use the OpenSSL command openssl passwd -1 yourpassword in a terminal to generate the new password.
2. Create Preseed file
Preseed commands work when they are written directly inside the Kickstart file.
http://gyk.lt/ubuntu-14-04-desktop-unattended-installation/
https://marclop.svbtle.com/creating-an-automated-centos-7-install-via-kickstart-file
- In fact, some organizations dedicate multiple servers to a single service. (For example, I understand some enterprises dedicate a dozen servers just to the Domain Name Service [DNS].) With the advantages associated with virtualization, I suspect even some smaller organizations now configure 100 or more virtual servers (on fewer physical systems).
administrators need to be able to automate the Ubuntu installation process.
Automation of any Linux installation requires an “answers file” that configures the installation with information that would otherwise require real-time manual input.
The principles of Kickstart for Ubuntu and Red Hat are the same.
Before using Kickstart on an Ubuntu release, install the applicable Kickstart configuration tool.
http://infoyard.net/linux/how-to-automate-ubuntu-installation-with-kickstart.html
- Unattended Ubuntu installations made easy
1. Create a configuration file, ks.cfg, using the GUI Kickstart tool.
2. Extract the files from the Ubuntu install ISO.
3. Add the ks.cfg file to the install disk and alter the boot menu to add automatic install as an install option.
4. Reconstitute the ISO file.
http://www.ubuntugeek.com/unattended-ubuntu-installations-made-easy.html
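A hedged shell sketch of those four steps; the paths, volume label, and ISO names are illustrative, and step 3's boot-menu change is a manual edit of isolinux/txt.cfg:

  $ mkdir iso && sudo mount -o loop ubuntu.iso /mnt
  $ cp -a /mnt/. iso/ && cp ks.cfg iso/
  # edit iso/isolinux/txt.cfg: append ks=cdrom:/ks.cfg to the kernel line
  $ genisoimage -r -V "Ubuntu Auto" -J -l \
      -b isolinux/isolinux.bin -c isolinux/boot.cat \
      -no-emul-boot -boot-load-size 4 -boot-info-table \
      -o ubuntu-auto.iso iso/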
- Introduction to automated installation mechanisms
SUSE Linux Enterprise: autoyast
Ubuntu: preseed
http://www.vm.ibm.com/education/lvc/LVC0803.pdf
- There are two commonly used methods for automating Ubuntu installations: Preseed and Kickstart. The first is the official method for Ubuntu to suppress all the questions in the installation process, but it has a really steep learning curve if you are building an automatic Ubuntu installer for the first time. The second method is really easy to start with because Ubuntu supports most of Red Hat's Kickstart options, but since it isn't an official method, we are still going to use some Preseed commands.
- how to automate the deployment, configuration, and upgrades of your Linux infrastructure.
http://www.briancarpio.com/2012/04/04/system-automation-part-1/
- Difference between FAT and SAT
Inputs are simulated to test the operation of the software.
Factory Acceptance Test
The main objective of the FAT is to test the instrumented safety system (logic solver and associated software). The tests are normally carried out during the final part of the design and engineering phase before the final installation in the plant. The FAT is a customized procedure to verify the instrumented safety system and the instrumented safety functions according to the specification of safety requirements.
It is important to note here that there are different levels of FAT. It can be done at a very basic level, such as configuring the main parts of the system with temporary wiring and making sure that everything moves as it is supposed to, or a more complete FAT can be carried out where the manufacturer physically constructs the entire system at its own facility to test it completely. In the latter case, the system is dismantled, moved to the client’s site, and reassembled.
Benefits of the factory acceptance test
Clients can “touch and feel” the equipment in operational mode before it is shipped.
The manufacturer can provide some initial customer training, which gives operations personnel more confidence when using the machinery for the first time in real environments.
The key people of the project from both sides are together, which makes it the ideal time to review the list of materials, analyze the necessary and recommended parts (for the start-up and the first year of operation) and review the procedures for maintenance and equipment limitations.
The comprehensive FAT documentation can be used as a template for the installation qualification part of a validated installation.
Based on the results of the FAT, both parties can create a list of additional elements that must be addressed before shipment.
Site Acceptance Testing
The Site Acceptance Test (SAT) is performed after all systems have been tested and verified. Before this final stage, the complete system must be tested and commissioned.
One final step in many SCADA projects is the SAT. This is usually a continuous operation or run of the complete system for a period of 1–2 weeks without any major problems. If a problem occurs, the parties must meet again to discuss how to handle the situation and decide whether the current operation is correct or a change is required.
https://automationforum.co/difference-between-fat-and-sat/