Monday, May 28, 2018

Cloud Orchestration

How to design automation processes:
declarative (also often known as the model)
imperative (also often known as the workflow or procedural)

and how they can be used together.

Declarative/Model-Based Automation
The fundamental concept of the declarative model is that of the desired state.
The principle is that we declare, using a model, what a system should look like.
The model is “data driven,” which allows data, in the form of attributes or variables, to be injected into the model at run-time to bring it to a desired state.
A declarative process should not require the user to be aware of the current state of a target system; it will usually bring it to the required state using a concept known as idempotence.
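A data-driven model can be sketched as plain data plus run-time variables. This is an illustrative sketch, not any particular tool's format; all names are invented:

```python
# A declarative model: *what* the system should look like,
# not *how* to get there. Values in braces are filled at run-time.
model_template = {
    "service": "web",
    "version": "{version}",
    "replicas": "{replicas}",
}

def render(template, **variables):
    """Inject run-time data into the model to produce a desired state."""
    return {k: v.format(**variables) if isinstance(v, str) else v
            for k, v in template.items()}

desired = render(model_template, version="10", replicas="3")
print(desired)  # {'service': 'web', 'version': '10', 'replicas': '3'}
```

The same template can be rendered with different variables for dev, test, and production, which is what makes the model reusable across environments.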

Idempotent
If you deploy version 10 of a component to a development environment that is currently at version 8, changes 9 and 10 will be applied.
If you deploy the same release to a test environment where version 5 is installed, changes 6 through 10 will be applied.
If you deploy it to a production system where it has never been deployed, changes 1 through 10 will be applied.
Each deployment brings the target to the same state regardless of where it was initially.
The user, therefore, does not need to be aware of the current state.
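The version-by-version behaviour described above can be sketched as follows; `changes` is a hypothetical map of version number to change, and the whole function is illustrative:

```python
def deploy(target_version, current_version, changes):
    """Apply only the changes needed to reach target_version.
    Idempotent: the end state is the same regardless of the start state."""
    start = current_version or 0          # None means "never deployed"
    applied = []
    for v in range(start + 1, target_version + 1):
        applied.append(changes[v])        # apply the change for version v
    return applied

changes = {v: f"change-{v}" for v in range(1, 11)}
# Dev at v8: applies changes 9-10. Test at v5: applies 6-10.
# Prod never deployed: applies 1-10. All three end at version 10.
print(deploy(10, 8, changes))    # ['change-9', 'change-10']
print(len(deploy(10, 5, changes)))   # 5
print(len(deploy(10, None, changes)))  # 10
```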

Maintaining State
A model changes over time and may have a current, past, or future state.
When a model is applied to a target system, how does it know what changes to make?

State Management
There are three methods used to keep an environment or system and its desired-state models in line:

Maintain an inventory of what has been deployed.
This is where we maintain the state of what has been deployed and what is in the desired state of a release, and apply only the difference.
Validate/compare the desired state with what has been deployed.
Just make it so. The most obvious example here is anything that is stateless, such as a container: you would just instantiate a new one that has the new configuration you require.
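The inventory/compare approach can be sketched as a diff between the desired state and what has been deployed; the component names and versions here are invented for illustration:

```python
def plan(desired, deployed):
    """Compare the desired state with the deployed inventory and
    return only the differences that need to be applied."""
    actions = []
    for name, version in desired.items():
        if name not in deployed:
            actions.append(("install", name, version))
        elif deployed[name] != version:
            actions.append(("update", name, version))
    for name in deployed:
        if name not in desired:
            actions.append(("remove", name, deployed[name]))
    return actions

deployed = {"app": "1.0", "db": "2.0", "cache": "1.1"}
desired  = {"app": "1.1", "db": "2.0"}
print(plan(desired, deployed))
# [('update', 'app', '1.1'), ('remove', 'cache', '1.1')]
```

Note that `db` produces no action at all: applying the same plan twice is a no-op, which is exactly the idempotence property discussed above.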


Imperative/Procedural/Workflow-Based Automation
A series of actions is executed in a specific order to achieve an outcome.
For an application deployment, this is where the process of how the application needs to be deployed is defined, and a series of steps in the workflow is executed to deploy the entire application.

A standard example might include:
Some pre-install/validation steps
Some install/update steps
Finally, some validation to verify that what we have automated has worked as expected
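The steps above can be sketched as an ordered workflow that halts on the first failure; the step names and bodies are placeholders:

```python
def run_workflow(steps):
    """Imperative workflow: execute steps in a fixed order; stop on failure."""
    for name, step in steps:
        print(f"running: {name}")
        if not step():
            raise RuntimeError(f"step failed: {name}")

run_workflow([
    ("pre-install validation", lambda: True),  # e.g. check disk space
    ("install/update",         lambda: True),  # e.g. copy artifacts, run installer
    ("post-install check",     lambda: True),  # e.g. smoke-test the service
])
```

The order is baked into the list itself, which is the defining trait of the imperative style: the author of the workflow, not the tool, decides the sequence.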

The often-cited criticism of this approach is that we end up with lots of separate workflows that layer changes onto our infrastructure and applications—and the relationships between these procedures are not maintained. The user, therefore, needs to be aware of the current state of a target before they know which workflow to execute. That means the principle of the desired state/idempotence is hard to maintain—and each of the workflows is tightly coupled to the applications.

Puppet is an example of what is seen as a declarative automation tool, while Chef is said to be imperative.
Do they both support concepts of the desired state, and are they idempotent? The answer is, of course, yes.
Is it possible to use a workflow tool to design a tightly coupled release that is not idempotent? The answer is, again, yes.


What are the Benefits of Workflows?
The benefit of using a workflow is that we are able to define relationships and dependencies for our units of automation—and orchestrate them together.
Procedural workflows also allow us to, for example, deploy components A and B on Server1, then deploy component C on Server2, and then continue to deploy D on Server1. This gives us much greater control when orchestrating multi-component releases.
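The Server1/Server2 example can be sketched as ordered phases; the `deploy` function is a hypothetical stand-in for a real per-component deployment call:

```python
# Components A & B on Server1, then C on Server2, then D back on Server1.
log = []

def deploy(component, server):
    print(f"deploying {component} on {server}")
    log.append((component, server))

phases = [
    [("A", "Server1"), ("B", "Server1")],  # phase 1: A and B first
    [("C", "Server2")],                    # phase 2: runs only after phase 1
    [("D", "Server1")],                    # phase 3: back on Server1
]
for phase in phases:
    for component, server in phase:
        deploy(component, server)
```

Grouping the steps into phases is what encodes the cross-server dependency: nothing in phase 2 starts until everything in phase 1 has finished.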

Workflow for Applications
e.g. a stateless, loosely coupled microservice on a cloud-native platform

Model for Components
The concept of something being idempotent means I can deploy any version to any target system: whether it has never been deployed to before,
whether we want to introduce a new change to an already existing environment, or even if we want to roll a unit of automation back.

Imperative Orchestration & Declarative Automation
A combination of declarative (or model-driven) units of automation, coordinated by imperative (or workflow-based) orchestration, can be used to achieve this.
An imperative/workflow-based orchestrator will also allow you to execute not only declarative automation but also autonomous imperative units of automation, should you need to.

My recommendation is that an application workflow defines the order and dependencies of components being deployed.
The declarative model for a component determines what action needs to be taken to bring a target system into compliance with the model.
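This combination can be sketched as an imperative workflow that fixes the order, while each declarative unit converges its target only if it is out of compliance. All names here are illustrative:

```python
# Recorded actual state of each server (normally queried from the target).
actual = {"Server1": {}, "Server2": {}}

def converge(server, component, version):
    """Declarative unit: act only if the target is not already compliant."""
    if actual[server].get(component) != version:
        actual[server][component] = version   # bring into compliance
        return "changed"
    return "unchanged"

# Imperative orchestration: the workflow defines order and dependencies.
workflow = [("Server1", "A", "2.0"), ("Server2", "B", "1.3")]
for server, component, version in workflow:
    print(server, component, converge(server, component, version))
# Running the same workflow again makes no further changes (idempotent).
```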

https://devops.com/perfect-combination-imperative-orchestration-declarative-automation/#disqus_thread


The debate hearkens back to concepts of declarative, model-based programming (the approach most like Puppet's).
The declarative approach requires that users specify the end state of the infrastructure they want, and then Puppet's software makes it happen.
some proponents of the imperative/procedural approach to automation say the declarative model can break down when there are subtle variations in the environment, since declarative configuration files must be made for every edge case in the environment.
Puppet Labs gets into the software-defined infrastructure game with new modules that allow it to orchestrate infrastructure resources such as network and storage devices.


An imperative, or procedural, approach (the one generally taken by Puppet rival Chef) configures systems through a series of actions.

https://searchitoperations.techtarget.com/news/2240187079/Declarative-vs-imperative-The-DevOps-automation-debate
• There are many ways to do this, from provisioning shell scripts to using boxes with Chef already installed. A clean, reliable, and repeatable way is to use a Vagrant plugin to do just that—vagrant-omnibus. Omnibus is a packaged Chef installer.


Terraform can also be used to manipulate Docker. The classical usage is against an already running Docker server on the network, but it will work exactly the same locally with your own Docker installation. Using Terraform for controlling Docker, we'll be able to trigger Docker image updates dynamically, execute containers with every imaginable option, manipulate Docker networks, and use Docker volumes.


Vagrant is a tool focused on managing development environments and Terraform is a tool for building infrastructure.
Terraform can describe complex sets of infrastructure that exist locally or remotely.

Vagrant provides a number of higher-level features that Terraform doesn't. Synced folders, automatic networking, HTTP tunneling, and more are features provided by Vagrant to ease development environment usage. Because Terraform is focused on infrastructure management and not development environments, these features are out of scope for that project.
https://www.vagrantup.com/intro/vs/terraform.html

• We explained why we picked Terraform as our IaC tool of choice and not Chef, Puppet, Ansible, SaltStack, or CloudFormation.
https://blog.gruntwork.io/an-introduction-to-terraform-f17df9c6d180
• Terraform and OpenStack
Terraform has a number of modules that will allow you to manage your OpenStack infrastructure. It does support some (but not all) of the components of OpenStack, namely:
    Block Storage
    Compute
    Networking
    Load Balancer
    Firewall
    Object Storage
https://www.stratoscale.com/blog/openstack/tutorial-how-to-use-terraform-to-deploy-openstack-workloads/


• Terraform is a tool from HashiCorp that can be used to deploy and manage cloud infrastructure easily by defining configuration files. It is similar to OpenStack Heat. However, unlike Heat, which is specific to OpenStack, Terraform is provider-agnostic and can work with multiple cloud platforms such as OpenStack, AWS, and VMware.
Terraform supports OpenStack Compute resources such as Instance, Floating IP, Key Pair, Security Group, and Server Group. It supports OpenStack Network resources such as Network, Subnet, Router, Router Interface, and Floating IP. For Block Storage it supports Volume, and for Object Storage it supports Containers.
https://platform9.com/blog/how-to-use-terraform-with-openstack/
• Terraform vs. CloudFormation, Heat, etc.
Tools like CloudFormation, Heat, etc. allow the details of an infrastructure to be codified into a configuration file.
Terraform similarly uses configuration files to detail the infrastructure setup, but it goes further by being both cloud-agnostic and enabling multiple providers and services to be combined and composed. Terraform is inspired by the problems these tools solve.
The configuration files allow the infrastructure to be elastically created, modified, and destroyed.
For example, Terraform can be used to orchestrate an AWS and OpenStack cluster simultaneously, while enabling 3rd-party providers like Cloudflare and DNSimple to be integrated to provide CDN and DNS services.
https://www.terraform.io/intro/vs/cloudformation.html

• Terraform vs. Boto, Fog, etc.
Libraries like Boto, Fog, etc. are used to provide native access to cloud providers and services by using their APIs.
https://www.terraform.io/intro/vs/boto.html
• Terraform vs. Chef, Puppet, etc.
Configuration management tools install and manage software on a machine that already exists. Terraform is not a configuration management tool, and it allows existing tooling to focus on their strengths: bootstrapping and initializing resources.
https://www.terraform.io/intro/vs/chef-puppet.html
• Terraform is a tool for building, changing, and versioning infrastructure safely and efficiently. Terraform can manage existing and popular service providers as well as custom in-house solutions.
Configuration files describe to Terraform the components needed to run a single application or your entire datacenter. Terraform generates an execution plan describing what it will do to reach the desired state and then executes it to build the described infrastructure.
The infrastructure Terraform can manage includes low-level components such as compute instances, storage, and networking, as well as high-level components such as DNS entries, SaaS features, etc.

Terraform is used to create, manage, and update infrastructure resources such as physical machines, VMs, network switches, containers, and more. Almost any infrastructure type can be represented as a resource in Terraform.

A provider is responsible for understanding API interactions and exposing resources. Providers generally are an IaaS (e.g. AWS, GCP, Microsoft Azure, OpenStack), PaaS (e.g. Heroku), or SaaS service (e.g. Terraform Enterprise, DNSimple, CloudFlare).
https://www.terraform.io/docs/providers/index.html


• Provisioners are used to execute scripts on a local or remote machine as part of resource creation or destruction. Provisioners can be used to bootstrap a resource, clean up before destroy, run configuration management, etc.
https://www.terraform.io/docs/provisioners/index.html
• Step 2 — Setting Up a Virtual Environment
Virtual environments enable you to have an isolated space on your computer for Python projects, ensuring that each of your projects can have its own set of dependencies that won’t disrupt any of your other projects.
https://www.digitalocean.com/community/tutorials/how-to-install-python-3-and-set-up-a-local-programming-environment-on-ubuntu-16-04
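As a quick illustration, an isolated environment can be created with nothing but the standard library's venv module; the directory name here is arbitrary:

```python
# Create an isolated virtual environment using only the standard library.
import os
import tempfile
import venv

target = os.path.join(tempfile.mkdtemp(), "myproject-env")
venv.create(target, with_pip=False)  # with_pip=True would also bootstrap pip
# The environment gets its own interpreter directory:
# 'bin' on POSIX systems, 'Scripts' on Windows.
print(os.path.isdir(os.path.join(target, "bin")) or
      os.path.isdir(os.path.join(target, "Scripts")))
```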

• Terraform is used to easily deploy our infrastructure and orchestrate our Docker environment.
The base idea is to manage and provision your infrastructure by code (configuration files, scripts, etc.).
Terraform is an example of IaC that converts your infrastructure to human-friendly config files that are JSON-compatible.
http://t0t0.github.io/internship%20week%209/2016/05/02/terraform-docker.html

• Top 3 Terraform Testing Strategies for Ultra-Reliable Infrastructure-as-Code

Aside from CloudFormation for AWS or OpenStack Heat, it's the single most useful open-source tool out there for deploying and provisioning infrastructure on any platform.
This post will also briefly cover deployment strategies for your infrastructure as they relate to testing.

Software developers use unit testing to check that individual functions work as they should.
They then use integration testing to test that their feature works well with the application as a whole.

Being able to use terraform plan to see what Terraform will do before it does it is one of Terraform's most stand-out features.
It was designed so practitioners can see what would happen before a run occurs, check that everything looks good, and then run terraform apply to add the finishing touches.
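The plan-then-apply pattern can be sketched generically: compute the diff without touching anything, review it, and only then mutate. This is a toy model of the idea, not Terraform's engine:

```python
def plan(desired, current):
    """Dry run: report what *would* change, touching nothing."""
    return {k: (current.get(k), v) for k, v in desired.items()
            if current.get(k) != v}

def apply(plan_result, state):
    """Execute a previously reviewed plan against the live state."""
    for key, (_, new) in plan_result.items():
        state[key] = new
    return state

current = {"masters": 3}
desired = {"masters": 5, "workers": 10}
p = plan(desired, current)
print(p)  # {'masters': (3, 5), 'workers': (None, 10)}
apply(p, current)
print(current)  # {'masters': 5, 'workers': 10}
```

The review step between `plan` and `apply` is the whole point: the human (or a test) inspects the diff before anything changes.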

Disadvantages
It's hard to spot mistakes this way. Say you wanted to deploy five replicas of your Kubernetes master instead of three.
You check the plan really quickly and apply it, thinking that all is good, and end up with three masters deployed instead of five.
The remediation is harmless in this case: change your variables.tf to deploy five Kubernetes masters and redeploy.
However, the remediation would be much more painful if this were done in production and your feature teams had already deployed services onto it.

Trudging through chains of repositories or directories to find out what a Terraform configuration is doing is annoying at best and damaging at worst.

To avoid the consequences of the approach above, your team might decide to use something like Serverspec, Goss and/or InSpec to apply your plans first in a sandbox, automatically confirm that everything looks good, then tear down the sandbox and collect results.
If this were a pipeline in Jenkins or Bamboo CI:
[Figure: pipeline diagram; orange border indicates steps executed within the sandbox.]

Advantages
Removes the need for long-lived development environments and encourages immutable infrastructure.

Well-written integration tests provide enough confidence to do away with this practice completely.
Every sandbox environment created by an integration test will be an exact replica of production, because every sandbox environment will ultimately become production.
This provides a key building block towards infrastructure immutability, whereby any changes to production become part of a hotfix or future feature release, and no changes to production are allowed or even needed.

Documents your infrastructure
You no longer have to wade through chains of modules to make sense of what your infrastructure is doing. If you have 100% test coverage of your Terraform code (a caveat that is explained in the following section), your tests tell the entire story and serve as a contract to which your infrastructure must adhere.

Allows for version tagging and "releases" of your infrastructure
Because integration tests are meant to test your entire system cohesively, you can use them to tag your Terraform code with git tag or similar. This can be useful for rolling back to previous states (especially when combined with a blue/green deployment strategy) or enabling developers within your organization to test differences in their features between iterations of your infrastructure.

They can serve as a first line of defense
Let's say that you created a pipeline in Jenkins or Bamboo that runs integration tests against your Terraform infrastructure twice daily and pages you if an integration test fails.

You receive an alert saying that an integration test failed. Upon checking the build log, you see an error from Chef saying that it failed to install IIS because the installer could not be found. After digging some more, you discover that the URL that was provided to the IIS installation cookbook has expired and needs an update.

After cloning the repository within which this cookbook resides, updating the URL, re-running your integration tests locally and waiting for them to pass, you submit a pull request to the team that owns this repository asking them to integrate it.

You just saved yourself a work weekend by fixing your code proactively instead of waiting for it to surface come release time.


Disadvantages

It can get quite slow.
Depending on the number of resources your Terraform configuration creates and the number of modules they reference, doing a Terraform run might be costly.

It can also get quite costly.
Performing a full integration test within a sandbox implies that you mirror your entire infrastructure (albeit at a smaller scale with smaller compute sizes and dependencies) for a short time.

Obtaining code coverage is hard.
Terraform doesn't yet have a framework for obtaining a percentage of configurations that have a matching integration test.
This means that teams that choose to embark on this journey need to be fastidious about maintaining a high bar for code coverage and will likely write tools themselves that can do this (such as scanning all module references and looking for a matching spec definition).


kitchen-terraform is the most popular integration testing framework for Terraform at the moment.
Goss is a simple validation/health-check framework that lets you define what a system should look like and either validates against that definition or serves it as a health endpoint.

Integration testing enables you to test interactions between components in an entire system.
Unit testing, on the other hand, enables you to test those individual components in isolation.


                                                                                                                                                              Advantages

                                                                                                                                                              It enables test-driven development 
                                                                                                                                                              Test-driven development is a software development pattern whereby every method in a feature is written after writing a test describing what that feature is expected to do.

                                                                                                                                                              Faster than integration tests 

                                                                                                                                                              Disadvantages
Unit tests complement integration tests. They do not replace them.
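The test-first workflow described under "Advantages" can be sketched in plain Python. This is a hypothetical example (the `add` function and its test are invented for illustration; real Terraform unit testing would use a dedicated framework, but the pattern is the same):

```python
# Hypothetical TDD example: the test is written first (it fails while
# add() does not exist yet), then the minimal implementation is added
# to make it pass.

def test_add():
    # Written before the implementation: describes expected behavior.
    assert add(2, 3) == 5
    assert add(-1, 1) == 0

# Minimal implementation, written after the test.
def add(a, b):
    return a + b

test_add()  # passes once the implementation satisfies the test
```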


                                                                                                                                                              https://www.contino.io/insights/top-3-terraform-testing-strategies-for-ultra-reliable-infrastructure-as-code
                                                                                                                                                              • Cloudify is an open source cloud orchestration platform, designed to automate the deployment, configuration and remediation of application and network services across hybrid cloud and stack environments.
                                                                                                                                                              Cloudify uses a declarative approach based on TOSCA, in which users focus on defining the desired state of the application through a simple DSL, and Cloudify takes care of reaching this state, all while continuously monitoring the application to ensure that it maintains the desired SLAs in the case of failure or capacity shortage.
                                                                                                                                                              https://cloudify.co/product/


                                                                                                                                                              The new Cloudify release now provides coverage for 90% of the workloads being used in large enterprises today - from hybrid cloud models through our plugin support of the top 5 clouds, alongside containerized and non-containerized workloads
                                                                                                                                                              http://getcloudify.org/

                                                                                                                                                              • Cloudify vs. Terraform; How they compare
                                                                                                                                                              There are many ways to break down the types of automation, whether it’s imperative versus declarative, or orchestration versus desired configuration state. 

                                                                                                                                                              Terraform Strengths
                                                                                                                                                              Diversity
                                                                                                                                                              Easy to Get Started
                                                                                                                                                              Fast Standup
                                                                                                                                                              Wide Adoption (and Open Source)


                                                                                                                                                              Cloudify Strengths 
                                                                                                                                                              Full-Scale Service Orchestration
                                                                                                                                                              Controlled Operations as Code or GUI

                                                                                                                                                              Terraform Use Cases
Terraform makes it easy to create entire environments that can be managed from a single state file.
                                                                                                                                                              Creating Terraform stacks for lab, POC, and testing environments is fantastic considering the ease and speed of deployment
                                                                                                                                                              Terraform really shines when it is managing stacks utilizing statelessness, such as auto-scaling groups, lambda functions, and network resources.
                                                                                                                                                              It’s better to avoid having Terraform manage individual stateful instances or volumes because it can easily destroy resources.

                                                                                                                                                              Cloudify Use Cases
                                                                                                                                                              Instead of having deployments such as “Web frontend” or “Mongo Cluster,” blueprints can better resemble “ERP system,” or “BI Solution.”
                                                                                                                                                              Cloudify is TOSCA compliant and is designed to provide end-to-end orchestration.
                                                                                                                                                              This cradle-to-grave construct means that users can deploy an environment and manage that environment through Cloudify until decommission. This provides enterprise customers change management, auditing, and provisioning capabilities to control deployments of interdependent resources through their entire lifecycle
                                                                                                                                                              Blueprints can be a single component or mixed ecosystems containing thousands of servers.

                                                                                                                                                              https://cloudify.co/2018/10/22/terraform-vs-cloudify/

                                                                                                                                                              • InfraKit is a toolkit for infrastructure orchestration. With an emphasis on immutable infrastructure, it breaks down infrastructure automation and management processes into small, pluggable components. These components work together to actively ensure the infrastructure state matches the user's specifications.

                                                                                                                                                              https://github.com/docker/infrakit

                                                                                                                                                              • Juju deploys everywhere: to public or private clouds.

                                                                                                                                                              https://jujucharms.com/
                                                                                                                                                              Getting started with Foreman
Today there are many tools for quick OS deployment and configuration, status monitoring, and maintenance of a desired configuration. On Windows the clear leader is SCCM, while full-featured equivalents for *nix have only recently started to gain momentum. As a result, an administrator has to cope with a variety of tools, each performing its own role; this is convenient for development but greatly complicates support, and the results are not always obvious. The Foreman project (more precisely, The Foreman) is, in fact, an add-on to several open source solutions that provides system management throughout the system lifecycle, from deployment and configuration to monitoring (provisioning, configuration, monitoring). With it you can easily automate repetitive tasks and manage changes on thousands of servers, located on bare hardware or in the cloud, while monitoring their status. The concept of server groups ("config group") allows giving commands to multiple systems regardless of their location.
                                                                                                                                                              For example, Foreman is used in RHOS to configure the nodes. It is written using Ruby and JavaScript. Foreman operates in two modes:
                                                                                                                                                              Foreman consists of several components that can be deployed either on a single server, or on multiple servers:
Smart Proxy — an autonomous web component which is placed on the host and allows Foreman to connect to TFTP, DHCP (ISC DHCP, MS DHCP), DNS (Bind, MS DNS), Chef Proxy, Realm (FreeIPA), Puppet and Puppet CA. One Smart Proxy can manage multiple services, but an autonomous installation is also possible;
                                                                                                                                                              WebGUI, CLI and API management interfaces;
                                                                                                                                                              Configuration Management — the complete solution for configuration management based on Puppet and Chef, including Puppet ENC (external node classifier) with integrated support for parameterized classes and parameter hierarchy;
DBMS (MySQL, PostgreSQL or SQLite) — stores settings and metadata for managed computers.
                                                                                                                                                              https://hackmag.com/devops/getting-started-with-foreman/
                                                                                                                                                              • Foreman is a complete lifecycle management tool for physical and virtual servers. We give system administrators the power to easily automate repetitive tasks, quickly deploy applications, and proactively manage servers, on-premise or in the cloud.

                                                                                                                                                              https://www.theforeman.org/

                                                                                                                                                              How to get started with the Foreman sysadmin tool
                                                                                                                                                              Full Stack Automation with Katello & The Foreman
                                                                                                                                                              Life cycle management with Foreman and Puppet
                                                                                                                                                              Red Hat Satellite 6 comes with improved server and cloud management

                                                                                                                                                              • Cobbler is an install server; batteries are included
                                                                                                                                                              Cobbler is a Linux installation server that allows for rapid setup of network installation environments. It glues together and automates many associated Linux tasks so you do not have to hop between lots of various commands and applications when rolling out new systems, and, in some cases, changing existing ones.
                                                                                                                                                              https://fedorahosted.org/cobbler/


                                                                                                                                                              • CF BOSH is a cloud-agnostic open source tool for release engineering, deployment, and lifecycle management of complex distributed systems.

                                                                                                                                                              https://www.cloudfoundry.org/bosh/


                                                                                                                                                              • BOSH is a project that unifies release engineering, deployment, and lifecycle management of small and large-scale cloud software. BOSH can provision and deploy software over hundreds of VMs. It also performs monitoring, failure recovery, and software updates with zero-to-minimal downtime.

                                                                                                                                                              In addition, BOSH supports multiple Infrastructure as a Service (IaaS) providers like VMware vSphere, Google Cloud Platform, Amazon Web Services EC2, Microsoft Azure, and OpenStack. There is a Cloud Provider Interface (CPI) that enables users to extend BOSH to support additional IaaS providers such as Apache CloudStack and VirtualBox.
                                                                                                                                                              https://bosh.io/docs/


                                                                                                                                                              • Python library for interacting with many of the popular cloud service providers using a unified API.

Resources you can manage with Libcloud are divided into the following categories:
                                                                                                                                                              Cloud Servers and Block Storage - services such as Amazon EC2 and Rackspace CloudServers
                                                                                                                                                              Cloud Object Storage and CDN - services such as Amazon S3 and Rackspace CloudFiles
                                                                                                                                                              Load Balancers as a Service - services such as Amazon Elastic Load Balancer and GoGrid LoadBalancers
                                                                                                                                                              DNS as a Service - services such as Amazon Route 53 and Zerigo
                                                                                                                                                              https://libcloud.apache.org/
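Libcloud's core idea, one driver interface per provider behind a single entry point, can be sketched with stdlib-only stand-ins. The dummy drivers below are hypothetical; real code would call `libcloud.compute.providers.get_driver` with provider credentials:

```python
# Sketch of Libcloud's unified-driver pattern using only the stdlib.
# (The drivers below are hypothetical stand-ins; real Libcloud drivers
# talk to the provider's API and require credentials.)

class DummyEC2Driver:
    def list_nodes(self):
        return ["ec2-node-1"]

class DummyRackspaceDriver:
    def list_nodes(self):
        return ["rs-node-1"]

DRIVERS = {"ec2": DummyEC2Driver, "rackspace": DummyRackspaceDriver}

def get_driver(provider):
    # One entry point regardless of provider -- the core Libcloud idea.
    return DRIVERS[provider]()

# The caller's code is identical for every provider:
all_nodes = [get_driver(name).list_nodes() for name in ("ec2", "rackspace")]
```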

                                                                                                                                                              Monday, May 21, 2018

                                                                                                                                                              OSI model

• Not every network uses all of the model’s layers. ISO’s intent in creating the OSI model wasn’t to describe every network but to give protocol designers a map to follow to aid in design. This model is useful for conceptualizing network components to demonstrate how they fit together to help the computers within the network communicate.
The OSI reference model was formulated as a template for the structure of communications systems. It was not intended that there should be standard protocols associated with each layer; instead, a number of different protocols have been developed, each offering a different functionality.


                                                                                                                                                              Physical layer. Nmap unavoidably uses this layer, though it is not usually concerned with it. It doesn't matter if you are using Cat 5 cable, 2.4 GHz radio, or coaxial cable—you can't use a network without having a physical layer. Nmap has no idea what it is, either; the firmware in your network card handles that.

                                                                                                                                                              Data link layer. Here again, Nmap has to use this layer or nothing gets sent to the destination. But there are some cases where Nmap is aware of what layer-2 protocols are in use. These all require root privileges to work:
                                                                                                                                                                  On Windows, Nmap can't send raw IP packets (more on this in the next layer), so it falls back to sending raw Ethernet (layer 2) frames instead. This means that it can only work on Ethernet-like data links—WiFi is fine, but PPTP doesn't work.
                                                                                                                                                                  There are some NSE scripts that probe layer-2 protocols: lltd-discovery, broadcast-ospf2-discovery, sniffer-detect, etc.
                                                                                                                                                                  If the target is on the same data link, Nmap will use ARP to determine if the IP address is responsive. It will then report the MAC address of the target. For IPv6 targets, Neighbor Discovery packets are used instead.

                                                                                                                                                              Network layer. Nmap supports both IPv4 and IPv6 network layer protocols. For port scans (except -sT TCP Connect scan), Nmap builds the network packet itself and sends it out directly, bypassing the OS's network stack. This is also where --traceroute happens, by sending packets with varying small Time To Live (TTL) values to determine the address where each one expires. Finally, part of the input into OS detection comes from the network layer: initial TTL values, IP ID analysis, ICMP handling, etc.


                                                                                                                                                              Transport layer. This is where the "port scanner" core of Nmap works. A port is a transport layer address; some of them may be used by services on the target ("open" ports), and others may be unused ("closed" ports). Nmap can scan 3 different transport layers protocols: TCP, UDP, and SCTP. The majority of inputs to OS detection come from here: TCP options, sequence number analysis, window size, etc.
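A minimal sketch of transport-layer scanning, in the spirit of Nmap's -sT connect scan, using plain Python sockets (Nmap's default SYN scan builds raw packets and needs elevated privileges, so this is an illustration of the idea rather than how Nmap is implemented):

```python
# Minimal TCP connect() "port scan" of localhost, in the spirit of
# Nmap's -sT scan: an open port completes the TCP handshake, a closed
# one refuses the connection.
import socket

def is_open(host, port, timeout=1.0):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0  # 0 means connected

# Usage: bind a listening socket, then probe it.
server = socket.socket()
server.bind(("127.0.0.1", 0))              # let the OS pick a free port
server.listen(1)
open_port = server.getsockname()[1]
result = is_open("127.0.0.1", open_port)   # True: the port is open
server.close()
```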

                                                                                                                                                              Application layer. This is where version detection (-sV) takes over, sending various strings of data (probes) to open services to get them to respond in unique ways. SSL/TLS is handled specially, since other services may be layered over it (in which case it provides something like an OSI Session Layer). This is also where the vast majority of NSE scripts do their work, probing services like HTTP, FTP, SSH, RDP, and SMB.

Obviously layer 1 packets are sent, but Nmap isn't really aware of them.
When on the same local network, Nmap pays attention to MAC addresses and ARP. This helps with vendor detection, as well as giving you network distance information.
Layer 3 (the network layer) is used for sending packets and for detecting whether the host is up.
The transport layer (layer 4) is used for things like SYN scans and to detect which ports are open. Sequence number detection, which happens at layer 4, is important to OS detection.
                                                                                                                                                              https://stackoverflow.com/questions/47210759/which-layer-in-the-osi-model-does-a-network-scan-work-on


Traceroute works at the network layer of the OSI model. First, how traceroute works:
traceroute (tracert on Windows) is a utility that maps the path between two hosts. The results are displayed as a list of hops, and the information can be used to identify a weak link along the route. If the test fails at a certain point, the IP address of the last router that responded properly is known, so the problem can be identified more easily.
It uses ICMP packets and relies on a field called TTL (Time to Live) in the header of this layer 3 protocol. The value sets the maximum number of hops a packet can travel; when a packet is received by a router, the TTL value is lowered by 1, and when the TTL reaches 0, the packet is dropped.
The Windows command is tracert and the Linux one is traceroute.
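The TTL mechanic traceroute relies on can be sketched in Python: probes are sent with TTL = 1, 2, 3, ..., so each successive router expires a packet and (normally) answers with an ICMP "time exceeded". Receiving that ICMP reply requires a raw socket and root privileges, so this sketch only shows setting the TTL on UDP probe sockets:

```python
# Sketch of traceroute's TTL trick: each probe socket gets a larger
# TTL, so its packets would be dropped one hop further along the path.
import socket

def make_probe(ttl):
    # UDP probe socket whose packets are dropped after `ttl` hops.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)
    return s

ttls = []
for ttl in range(1, 4):
    p = make_probe(ttl)
    # p.sendto(b"", (target, 33434))  # would expire at hop `ttl`
    ttls.append(p.getsockopt(socket.IPPROTO_IP, socket.IP_TTL))
    p.close()
# ttls == [1, 2, 3]
```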

                                                                                                                                                              https://www.quora.com/What-trace-route-works-on-which-layer



                                                                                                                                                              OSI Model Explained | Real World Example

                                                                                                                                                              • Connection-Oriented and Connectionless Protocols in TCP/IP


                                                                                                                                                              Looking again at TCP/IP, it has two main protocols that operate at the transport layer of the OSI Reference Model. One is the Transmission Control Protocol (TCP), which is connection-oriented; the other, the User Datagram Protocol (UDP), is connectionless. TCP is used for applications that require the establishment of connections (as well as TCP’s other service features), such as FTP

                                                                                                                                                              Even though a TCP connection can be used to send data back and forth between devices, all that data is indeed still being sent as packets; there is no real circuit between the devices. This means that TCP must deal with all the potential pitfalls of packet-switched communication, such as the potential for data loss or receipt of data pieces in the incorrect order. Certainly, the existence of connection-oriented protocols like TCP doesn't obviate the need for circuit switching technologies

                                                                                                                                                              The principle of layering also means that there are other ways that connection-oriented and connectionless protocols can be combined at different levels of an internetwork.
                                                                                                                                                              Just as a connection-oriented protocol can be implemented over an inherently connectionless protocol, the reverse is also true
                                                                                                                                                              a connectionless protocol can be implemented over a connection-oriented protocol at a lower level. In a preceding example, I talked about Telnet (which requires a connection) running over IP (which is connectionless). In turn, IP can run over a connection-oriented protocol like ATM.
                                                                                                                                                              http://www.tcpipguide.com/free/t_ConnectionOrientedandConnectionlessProtocols-3.htm

• This assumes a basic understanding of the layered nature of modern networking architecture: the Open Systems Interconnection (OSI) Reference Model.

                                                                                                                                                              Even though packets may be used at lower layers for the mechanics of sending data, a higher-layer protocol can create logical connections through the use of messages sent in those packets.

                                                                                                                                                              Circuit-switched networking technologies are inherently connection-oriented, but not all connection-oriented technologies use circuit switching. Logical connection-oriented protocols can in fact be implemented on top of packet switching networks to provide higher-layer services to applications that require connections.
                                                                                                                                                              http://www.tcpipguide.com/free/t_ConnectionOrientedandConnectionlessProtocols-2.htm

                                                                                                                                                              Data Encapsulation OSI TCPIP

                                                                                                                                                               
                                                                                                                                                              OSI Encapsulation
                                                                                                                                                               
                                                                                                                                                              Understanding the OSI Reference Model: Cisco Router Training 101
• What is the OSI model?



OSI stands for Open Systems Interconnection.
The OSI model is a reference model containing seven layers: physical, data link, network, transport, session, presentation, and application.
It is a prescription for characterizing and standardizing the functions of a communications system in terms of abstraction layers. Similar communication functions are grouped into logical layers. A layer serves the layer above it and is served by the layer below it.




What is the TCP/IP model?


The TCP/IP model is an implementation of the OSI reference model. It has four layers: the link (network access) layer, internet layer, transport layer, and application layer.




What are the differences between the OSI and TCP/IP models?


Important differences are:

OSI is a reference model; TCP/IP is an implementation-oriented model based on it.

OSI has 7 layers, whereas TCP/IP has only 4. The upper three layers of the OSI model are combined into the single application layer of the TCP/IP model.

OSI has: physical, data link, network, transport, session, presentation, and application layers.

TCP/IP has: link (network access), internet, transport, and application layers.
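The layer comparison above can be expressed as a small mapping table (a sketch; the seven OSI layers collapsed onto the four classic TCP/IP layers, names as commonly taught):

```python
# Map each OSI layer onto its TCP/IP counterpart.
OSI_TO_TCPIP = {
    "application":  "application",
    "presentation": "application",
    "session":      "application",
    "transport":    "transport",
    "network":      "internet",
    "data link":    "link",
    "physical":     "link",
}

# The upper three OSI layers all land in one TCP/IP layer.
upper_three = {OSI_TO_TCPIP[l] for l in ("application", "presentation", "session")}
print(upper_three)                              # → {'application'}
print(sorted(set(OSI_TO_TCPIP.values())))       # the four TCP/IP layers
```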




                                                                                                                                                                  Explain in detail the process of sending a piece of information from a host on subnet A to a host on subnet B.


                                                                                                                                                                    What I'm looking for:

                                                                                                                                                                    Some knowledge of the OSI model

                                                                                                                                                                    The concept of layers, layer units, and encapsulation.

                                                                                                                                                                    The concept of MTU/fragmentation (not required, but nice if they know it)

The name resolution process (DNS)

                                                                                                                                                                    The determination of local vs. non-local addresses (subnet masks/what are subnets/when to use a default gateway)

                                                                                                                                                                    The address resolution process at layer 2 (ARP)

                                                                                                                                                                    At least a vague understanding of layer 1 and associated issues
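The local vs. non-local decision in the checklist above can be sketched with Python's standard `ipaddress` module (all addresses here are made-up examples): the sender compares the destination against its own interface address and mask, and sends off-link traffic to the default gateway instead of ARPing for the destination directly.

```python
import ipaddress

# Hypothetical host configuration: address 192.168.1.10 with a /24 mask.
host_iface = ipaddress.ip_interface("192.168.1.10/24")

def next_hop(dst_str, default_gateway="192.168.1.1"):
    dst = ipaddress.ip_address(dst_str)
    if dst in host_iface.network:      # same subnet: ARP for dst directly
        return dst_str
    return default_gateway             # different subnet: ARP for the gateway

print(next_hop("192.168.1.42"))   # on-link  → 192.168.1.42
print(next_hop("10.0.0.5"))       # off-link → 192.168.1.1
```

Either way the layer-2 frame needs a MAC address, which is what ARP then resolves; only the target of that resolution changes.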


                                                                                                                                                                    Protocols according to layers


                                                                                                                                                                      Data Link Layer
ARP/RARP Address Resolution Protocol / Reverse Address Resolution Protocol

                                                                                                                                                                      Network Layer
                                                                                                                                                                      DHCP Dynamic Host Configuration Protocol
                                                                                                                                                                      ICMP/ICMPv6 Internet Control Message Protocol
                                                                                                                                                                      IP Internet Protocol version 4
                                                                                                                                                                      IPv6 Internet Protocol version 6

                                                                                                                                                                      Transport Layer
                                                                                                                                                                      TCP Transmission Control Protocol
                                                                                                                                                                      UDP User Datagram Protocol


                                                                                                                                                                      Session Layer
DNS Domain Name System
                                                                                                                                                                      NetBIOS/IP NetBIOS/IP for TCP/IP Environment
                                                                                                                                                                      LDAP Lightweight Directory Access Protocol



                                                                                                                                                                      Application Layer
                                                                                                                                                                      FTP File Transfer Protocol
                                                                                                                                                                      HTTP Hypertext Transfer Protocol
                                                                                                                                                                      IMAP4 Internet Message Access Protocol rev 4
                                                                                                                                                                      NTP Network Time Protocol
                                                                                                                                                                      POP3 Post Office Protocol version 3
                                                                                                                                                                      SMTP Simple Mail Transfer Protocol
                                                                                                                                                                      SNMP Simple Network Management Protocol
                                                                                                                                                                      SOCKS Socket Secure (Server)
                                                                                                                                                                      TELNET TCP/IP Terminal Emulation Protocol




                                                                                                                                                                      References:
                                                                                                                                                                      http://rancidtaste.hubpages.com/hub/OSI-Reference-Model-and-TCP-IP-Model-Interview-Questions-and-Answers
                                                                                                                                                                      http://www.protocols.com/pbook/tcpip1.htm

1. Please - Physical layer - bits - hubs and repeaters live here
2. Do - Data link layer - frames - switches and bridges live here; MAC/physical addressing
3. Not - Network layer - packets - routers live here; IP/logical addressing
4. Throw - Transport layer - segments - TCP, UDP
5. Sausage - Session layer - data
6. Pizza - Presentation layer - data
7. Away - Application layer - data
                                                                                                                                                                      • OSI Model Explained CCNA - Part 1
1. Please - Physical layer - bits - hubs, repeaters
2. Do - Data link layer - frames - ATM, Frame Relay, switches
3. Not - Network layer - packets or datagrams - IP, IPv4, IPv6, IPsec, IPX; routers
4. Throw - Transport layer - segments - TCP, UDP
5. Sausage - Session layer - data - sessions between local and remote hosts
6. Pizza - Presentation layer - data - ASCII, JPEG, MPEG, etc.; deals with data formatting
7. Away - Application layer - data - FTP, HTTP, Telnet, DNS, DHCP, etc.; deals with protocols
                                                                                                                                                                      OSI Model quick and dirty
                                                                                                                                                                      • Problems with TCP/IP
                                                                                                                                                                      2.1 Built for the Wide Area

                                                                                                                                                                      TCP/IP was originally designed, and is usually implemented, for wide-area networks. While TCP/IP is usable on a local-area network, it is not optimized for this domain. For example, TCP uses an in-packet checksum for end-to-end reliability, despite the presence of per-packet CRC's in most modern network hardware. But computing this checksum is expensive, creating a bottleneck in packet processing. IP uses header fields such as `Time-To-Live' which are only relevant in a wide-area environment. IP also supports internetwork routing and in-flight packet fragmentation and reassembly, features which are not useful in a local-area environment. The TCP/IP model assumes communication between autonomous machines that cooperate only minimally. However, machines on a local-area network frequently share a common administrative service, a common file system, and a common user base. It should be possible to extend this commonality and cooperation into the network communication software.
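The "in-packet checksum" the passage calls expensive is the 16-bit ones'-complement Internet checksum of RFC 1071. A straightforward, unoptimized sketch makes the per-packet cost visible: every byte of the payload is touched just to produce 16 bits of redundancy.

```python
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:
        data += b"\x00"                       # pad odd-length input with a zero byte
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]     # sum 16-bit big-endian words
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)  # fold carry bits back in
    return ~total & 0xFFFF                    # ones' complement of the sum

# Worked example (byte values from RFC 1071's example data):
print(hex(internet_checksum(b"\x00\x01\xf2\x03\xf4\xf5\xf6\xf7")))  # → 0x220d
```

A receiver checksums the data with the transmitted checksum included; a result of zero means the data passed.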

                                                                                                                                                                      2.2 Multiple Layers
                                                                                                                                                                      Standard implementations of the Sockets interface and the TCP/IP protocol suite separate the protocol and interface stack into multiple layers. The Sockets interface is usually the topmost layer, sitting above the protocol. The protocol layer may contain sub-layers: for example, the TCP protocol code sits above the IP protocol code. Below the protocol layer is the interface layer, which communicates with the network hardware. The interface layer usually has two portions, the network programming interface, which prepares outgoing data packets, and the network device driver, which transfers data to and from the network interface card (NIC).
This multi-layer organization enables protocol stacks to be built from many combinations of protocols, programming interfaces, and network devices, but this flexibility comes at the price of performance. Layer transitions can be costly in time and programming effort. Each layer may use a different abstraction for data storage and transfer, requiring data transformation at every layer boundary. Layering also restricts information transfer. Hidden implementation details of each layer can cause large, unforeseen impacts on performance. Also, the number of programming interfaces and protocols is small: there are two programming interfaces (Berkeley Sockets and the System V Transport Layer Interface) and only a few data transfer protocols (TCP/IP and UDP/IP) in widespread use. This paucity of distinct layer combinations means that the generality of the multi-layer organization is wasted. Reducing the number of layers traversed in the communications stack should reduce or eliminate these layering costs for the common case of data transfer.

                                                                                                                                                                      2.3 Complicated Memory Management

                                                                                                                                                                      Current TCP/IP implementations use a complicated memory management mechanism. This system exists for a number of reasons. First, a multi-layered protocol stack means packet headers are added (or removed) as the packet moves downward (or upward) through the stack. This should be done easily and efficiently, without excessive copying. Second, buffer memory inside the operating system kernel is a scarce resource; it must be managed in a space-efficient fashion.
                                                                                                                                                                      https://www.usenix.org/legacy/publications/library/proceedings/ana97/full_papers/rodrigues/rodrigues_html/node2.html

• The term PDU (Protocol Data Unit) refers to the unit of data handled at each layer of the OSI model. PDU is thus an abstract term: it has a different concrete name at each layer, but can be used as a common term across them. To give a clear picture:

    The PDU of the transport layer is called a segment.
    The PDU of the network layer is called a packet.
    The PDU of the data-link layer is called a frame.
                                                                                                                                                                      https://www.geeksforgeeks.org/difference-between-segments-packets-and-frames/
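The PDU naming above can be illustrated as successive encapsulation with mock fixed-size headers (all header and trailer contents here are placeholders, not real protocol formats): each layer wraps the PDU handed down by the layer above it.

```python
app_data = b"GET / HTTP/1.1\r\n"          # application data (16 bytes)
segment  = b"TCPH" + app_data             # transport PDU: segment (mock 4-byte header)
packet   = b"IPH." + segment              # network PDU: packet  (mock 4-byte header)
frame    = b"ETH." + packet + b"FCS."     # data-link PDU: frame (mock header + trailer)

# Each layer adds its own overhead around the payload it received.
print(len(app_data), len(segment), len(packet), len(frame))  # → 16 20 24 32
```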