Wednesday, February 26, 2020

HyperV interview questions




Server Virtualization:
Server virtualization enables multiple operating systems to run on a single physical host server.
In server virtualization, physical server resources are abstracted logically to create and run virtual machines (e.g., ESXi, KVM, Hyper-V).

Storage Virtualization:
Storage virtualization groups multiple physical storage disks into a single logical storage pool, which is presented as one storage resource to all the servers. VMware vSAN is a well-known example of storage virtualization.

Network Virtualization:
Network virtualization decouples virtual networks from the underlying physical network resources, allowing a complete software-defined network (SDN) to be built. It is the process of combining hardware and software network resources into a single software-based administrative entity.

Desktop Virtualization:
Desktop virtualization enables many desktops to be deployed on a small amount of server hardware and accessed by users from any location. VMware Horizon View is a well-known example of desktop virtualization.

Application Virtualization:
Application virtualization enables an application to be used anywhere without installing the software on the local device.

What is Type 1 Hypervisor?
A Type 1, or bare-metal, hypervisor is installed directly on the physical server. It offers high performance and low resource overhead. Examples: VMware ESXi, Xen, Hyper-V.

What is Type 2 Hypervisor?
A Type 2, or hosted, hypervisor runs on top of an operating system and is installed as an application. It offers moderate performance, but it is very easy to set up and manage the environment. Examples: VMware Workstation, Oracle VirtualBox.

https://www.unixarena.com/2019/08/virtualization-hypervisor-basic-interview-questions.html/

VMware Interview Questions




Explain what is hypervisor
A hypervisor is a program that enables multiple operating systems to share a single hardware host.
The hypervisor controls the host processor and resources, allocating what is required to each operating system in turn and making sure that the guest operating systems cannot disrupt each other.

Explain VMware DRS?
VMware DRS stands for Distributed Resource Scheduler; it dynamically balances resources across the hosts in a cluster or resource pool.

Define the term 'VMkernel'.
VMkernel is VMware's proprietary kernel. It requires an operating system to boot and manage the kernel, and a service console is provided whenever the VMkernel is booted.

What is the use of Promiscuous Mode?
Promiscuous mode is useful when you want to run a network sniffer inside a virtual machine to capture the packets on that network. Moreover, if promiscuous mode is set to Accept, all of the traffic is visible to all of the virtual machines on that virtual switch.

What is Cold and Hot Migration?
Migrating a powered-off or suspended virtual machine is known as cold migration. Migrating a running (powered-on) virtual machine is known as hot migration.

What is Virtual Desktop Infrastructure?
Virtual Desktop Infrastructure (VDI) allows you to host desktop operating systems on centralized servers in a data center. It is also known as server-based computing, as it is a variation of the client-server computing model.

Explain the importance of snapshot in VMWare
A VMware snapshot is a copy of a virtual machine's disk file at a specific point in time, used to restore the VM to that point when a failure or system error occurs.

What is VVol?
Virtual Volumes (VVols) are a VM disk management concept introduced in vSphere 6.0. VVols enable array-based operations at the virtual-disk level and are created automatically when a virtual disk is created in the virtual environment.

Can we do vMotion between two data centers?
Yes, we can perform vMotion between two data centers; however, for this the VM should be powered off.

What is RDM?
RDM is short for Raw Device Mapping. It is a file stored in a VMFS volume that acts as a proxy for a raw physical device. It allows you to store virtual machine data directly on a LUN.

What is NFS?
NFS (Network File System) is a file-sharing protocol that ESXi hosts use to communicate with NAS devices. A NAS is a specialized storage device that connects to a network and provides file access services to ESXi hosts.
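As a rough sketch (the NAS hostname, export path, and datastore name are placeholders), an NFS export can be mounted as a datastore from the ESXi command line:

# Mount an NFS export as a datastore
esxcli storage nfs add --host=nas01.example.com --share=/export/datastore1 --volume-name=nfs_ds1
# List the mounted NFS datastores
esxcli storage nfs list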

https://career.guru99.com/top-15-vmware-interview-questions/



  • Explain what happens to a Virtual Machine after the host it is running on fails.


First, you should explain that the VM is forcefully powered off.
Next, ask if the host was in a correctly configured HA cluster (If it isn’t then nothing else happens to the VM)
If the Host is HA enabled, then ask what the Virtual Machine restart policy is. If it’s disabled then the VM will not be restarted on other hosts.
Ignoring the HA master election process, the simple answer is that the Virtual Machine will be rebooted on another ESXi host in the cluster.
Mention that there are things that will stop a Virtual Machine from being restarted on other hosts such as Admission Control settings & resource availability on the host.
The key thing to remember is that HA does NOT trigger a vMotion.

When should Promiscuous Mode be enabled on a Virtual Switch?

Promiscuous Mode is a vSwitch and Portgroup setting that allows Virtual Machines to receive all traffic within the same vSwitch or Portgroup (depending on where you set the configuration).
Typical use cases for this are packet sniffing applications.
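As a rough sketch (assuming a standard vSwitch named vSwitch0), promiscuous mode can be enabled from the ESXi shell:

# Allow promiscuous mode on a standard vSwitch
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true
# Confirm the resulting security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0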

Name 3 benefits of installing VMware Tools on Virtual Machines

Enables features such as Guest Introspection for NSX / agentless antivirus
Installs the VMXNET3 driver for improved network performance.
Allows the ability to copy and paste between the VM and desktop (some other settings might need to be enabled first)

What techniques are available to ESXi to reclaim memory?

Transparent Page Sharing (TPS) – Note that as of vSphere 6.0, this is disabled by default.
Ballooning – Requests VMware tools to “inflate a memory balloon” inside the VM until excess memory is released back to ESXi.
Memory Compression.
Swapping – This is the last option that ESXi will use to reclaim memory because it is the most disruptive to performance as memory gets swapped out from real memory onto disk

What is the impact of using Thick Eager Zeroed disk provisioning for VMDKs?
This disk type will zero out all data on the disk before allocating the VMDK to the Virtual Machine.
This has some performance benefit to the VM because it doesn’t have to zero a block before it can be written to.
The negative side is that it takes longer to provision the VMDK to the VM (since it has to zero all blocks first) and it has a measurable, sustained IO hit on the storage system

What are the main benefits of a Distributed Switch?
    Central Management of all ESXi host’s networking, meaning that there is only one switch to manage rather than one per host.
    The ability to enable Network IO Control (NIOC)
    NetFlow support.

Name 3 Virtual Machine Files

    VMX – The Virtual Machine configuration file
    NVRAM – The VM’s BIOS file
    VMEM – The VM’s pagefile
    VMSD – VM Snapshot state file


https://virtualg.uk/10-vmware-interview-questions-and-answers/

Docker interview questions

What are the main drawbacks of Docker?
Some notable drawbacks of Docker are:
    Doesn't provide a storage option
    Offer a poor monitoring option.
    No automatic rescheduling of inactive Nodes
    Complicated automatic horizontal scaling set up

What is Docker Engine?
The Docker daemon, or Docker Engine, represents the server. The Docker daemon and the client can run on the same host or on remote hosts, and they communicate through the command-line client binary and a full RESTful API.

Docker Engine is supported by the following components:

  • Docker Engine REST API
  • Docker Command-Line Interface (CLI)
  • Docker Daemon
Explain the Docker components
Docker Client: This component executes build and run operations to communicate with the Docker Host.
Docker Host: This component holds the Docker Daemon, Docker images, and Docker containers. The daemon sets up a connection to the Docker Registry.
Docker Registry: This component stores Docker images. It can be a public registry, such as Docker Hub or Docker Cloud, or a private registry.
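To illustrate how the client, daemon, and REST API fit together, the same information can be requested through the CLI or directly from the daemon's REST API over its Unix socket:

# Via the CLI: the client talks to the daemon on your behalf
docker version
# Directly via the Engine REST API on the daemon's Unix socket
curl --unix-socket /var/run/docker.sock http://localhost/version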

What is the memory-swap flag?
--memory-swap is a modifier flag that only has meaning if --memory is also set. Swap allows the container to write excess memory requirements to disk once the container has exhausted all of the RAM available to it.
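For example (the values and image name are arbitrary):

# Limit the container to 512 MB of RAM plus 512 MB of swap (1 GB total)
docker run -d --memory=512m --memory-swap=1g myapp:latest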

Explain Docker Swarm?
Docker Swarm is native clustering for Docker: it turns a pool of Docker hosts into a single, virtual Docker host and offers the standard Docker API.
Docker Swarm is an open-source container orchestration capability integrated with the Docker Engine and CLI. Swarm services use the overlay network driver; an overlay network connects multiple Docker daemons together.
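A minimal sketch of creating a swarm and a service on an overlay network (the network and service names are placeholders):

# On the first node: initialise swarm mode
docker swarm init
# Create an overlay network that spans the swarm nodes
docker network create -d overlay app-net
# Run a replicated service attached to that network
docker service create --name web --network app-net --replicas 3 nginx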

What is Docker hub?
Docker Hub is a cloud-based registry that lets you link to code repositories, and build, test, and store your images in the Docker cloud.

Explain Docker object labels
Docker object labels are a method for applying metadata to Docker objects, including images, containers, volumes, networks, swarm nodes, and services.

You can use labels to organize your images, record licensing information, annotate relationships between containers, volumes, and networks, or in any way that makes sense for your business or application.

How can you run multiple containers using a single service?
By using Docker Compose you can run multiple containers from a single service definition. Docker Compose files are written in YAML; a sketch is shown below.
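As a sketch (the service names and images are hypothetical), a single docker-compose.yml can define and start several containers together:

cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web:
    image: nginx
    ports:
      - "8080:80"
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
EOF
# Start both containers with one command
docker-compose up -d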

Does Docker offer support for IPV6?
Yes, Docker supports IPv6. IPv6 networking is supported only on Docker daemons running on Linux hosts.

Can you lose data when the container exits?
No. Any data that your application writes to disk is stored in the container, and the container's file system persists even after the container halts.




  • What is the use of the docker save and docker load commands?

A Docker image can be exported as an archive via the docker save command.
The exported Docker image can then be imported to another Docker host via the docker load command:
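For example (the image and archive names are placeholders):

# Export an image to a tar archive
docker save -o myimage.tar myimage:latest
# ...copy the archive to the other host, then import it there
docker load -i myimage.tar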

What is the default Docker network driver, and how can you change it when running a Docker image?
Docker provides different network drivers such as bridge, host, overlay, and macvlan; bridge is the default. You can select a different network (and therefore driver) at run time with the --network flag of docker run.
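For example (the network name and image are placeholders), the driver is chosen when a network is created, and --network selects it at run time:

# Default: the container attaches to the built-in bridge network
docker run -d nginx
# Use the host network driver instead
docker run -d --network host nginx
# Create a user-defined bridge network and attach a container to it
docker network create -d bridge my-bridge
docker run -d --network my-bridge nginx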


What is a Docker image? What is a Docker image registry?
A Docker image consists of many layers; each layer corresponds to an instruction in the image's Dockerfile. When you run a Docker image it becomes a container, which provides isolation for the application.
A Docker image registry is a storage area for Docker images. You can pull images from a registry instead of building them yourself.

What is a Dockerfile?
A Dockerfile is a text file containing the instructions Docker uses to automatically build an image.
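A minimal sketch of a Dockerfile and the corresponding build command (the file contents and tag are illustrative):

cat > Dockerfile <<'EOF'
FROM alpine:3.18
COPY app.sh /app.sh
CMD ["/bin/sh", "/app.sh"]
EOF
docker build -t myapp:1.0 .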

Is there any problem with just using the latest tag in a container orchestration environment? What is considered best practice for image tagging?
The problem is that if you push a new image with only the latest tag, the previous image loses that tag and your deployments silently start using the new image; you can no longer tell which version is actually deployed or roll back easily. Best practice is to tag images with explicit, immutable versions (for example a semantic version or commit hash) in addition to, or instead of, latest.
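For example (a hypothetical image name and version), tagging each build explicitly keeps old versions addressable and makes rollbacks straightforward:

# Tag with an explicit version as well as latest
docker build -t myapp:1.4.2 -t myapp:latest .
docker push myapp:1.4.2
docker push myapp:latest
# Deployments should then reference myapp:1.4.2, not myapp:latest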

What is Docker Compose?
Docker Compose is a tool that lets you define multiple containers and their configuration in a single file. The Compose file describes the services, networks, and volumes that make up a Docker application, so you can create separate containers, run them, and have them communicate with each other. Compose files are normally written in YAML, although a JSON file can be used instead.

  • What is a Docker Container?

Docker containers include the application and all of its dependencies. Containers share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are basically runtime instances of Docker images.
  • Think of containers as runtime instances of Docker images.
  • Containers use the underlying system's CPU and memory to perform their tasks.
  • A containerized application can run on any platform regardless of the underlying operating system.
Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server.


  • What are Docker Images?
Docker images are used to create containers. When a user runs a Docker image, an instance of a container is created. These docker images can be deployed to any Docker environment.


  • Will you lose your data when a Docker container exits?
Any data that your application writes to the container gets preserved on the disk until you explicitly delete the container.
The file system for the container persists even after the container halts.

What is Docker Machine?
Docker Machine is a tool that lets you install Docker Engine on virtual hosts. Docker Machine also lets you provision Docker Swarm clusters.

What’s the difference between virtualization and containerization?
Virtualization helps us run and host multiple operating systems on a single physical server. In virtualization, hypervisors give a virtual machine to the guest operating system. The VMs form an abstraction of the hardware layer so each VM on the host can act as a physical machine.
Containers form an abstraction of the application layer, so each container represents a different application.
Containerization provides us with an isolated environment for running our applications.

  • What is the functionality of a hypervisor?
A hypervisor, or virtual machine monitor, is software that helps us create and run virtual machines
Native: Native hypervisors, or bare-metal hypervisors, run directly on the underlying host system. It gives us direct access to the hardware of the host system and doesn’t require a base server operating system.
Hosted: Hosted hypervisors use the underlying host operating system.

A vCPU is a VM thread. These vCPUs appear to a guest just like physical CPUs, so a guest's scheduling algorithm cannot know that, when it migrates execution between vCPUs, it is switching threads rather than physical CPUs.
This switching between threads can degrade performance of all the guests and the overall system. This is especially common when VMs are configured with more vCPUs than there are physical CPUs on the hardware.
Specifically, if in the hypervisor host there are more threads (including vCPU threads) ready to run than there are physical CPUs available to run them, the hypervisor host scheduler must apply its priority and scheduling policies (round-robin, FIFO, etc.) to decide which threads to run. These scheduling policies may employ preemption and time slicing to manage threads competing for physical CPUs.
Every preemption requires a guest exit, a context switch and restore, and a guest entrance. Thus, inversely to what usually occurs with physical CPUs, reducing the number of vCPUs in a VM can improve overall performance: fewer threads compete for time on the physical CPUs, so the hypervisor is not obliged to preempt threads (with the attendant guest exits) as often. In brief, fewer vCPUs in a VM may sometimes yield the best performance.

Virtual CPUs can be allocated to a virtual machine. The number of virtual processors available is determined by the number of cores available on the hardware.

It is important not to allow a running container to consume too much of the host machine’s memory. On Linux hosts, if the kernel detects that there is not enough memory to perform important system functions, it throws an OOME, or Out Of Memory Exception, and starts killing processes to free up memory. Any process is subject to killing, including Docker and other important applications. This can effectively bring the entire system down if the wrong process is killed.

By default, Docker does not apply any CPU limits; a container can use all of the host's available CPU power.
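For example (the values and image name are placeholders), CPU and memory limits can be applied per container at run time:

# Cap the container at 1.5 CPUs and 512 MB of RAM
docker run -d --cpus=1.5 --memory=512m myapp:latest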


Virtual machines are considered a suitable choice in a production environment, rather than Docker containers since they run on their own OS without being a threat to the host computer. But if the applications are to be tested then Docker is the choice to go for, as Docker provides different OS platforms for the thorough testing of the software or an application.

  • Sharing sockets with docker-compose
    Create a common volume.
    Have the container that owns the socket place it in the common volume.
    Mount the common volume in the container that needs to reference the socket.
The following is an example of connecting a certain API server to MySQL.
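(The original example is not reproduced here; the sketch below shows the general shape, with the image names, volume name, and socket path as assumptions.)

# docker-compose.yml: share the MySQL unix socket through a common named volume
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - mysql-socket:/var/run/mysqld   # MySQL writes mysqld.sock here
  api:
    image: my-api-server               # hypothetical API server image
    volumes:
      - mysql-socket:/var/run/mysqld   # the API reads the same socket path
volumes:
  mysql-socket:
EOF
docker-compose up -d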

I want to run a bunch of applications inside containers (for security and management reasons), and these applications need to speak to a MySQL server via a Unix domain socket, which just appears to be a file on the filesystem.
I also want to run the MySQL server inside a container, so the mechanics of getting a socket shared between them are a little non-trivial.

A Unix domain socket or IPC socket (inter-process communication socket) is a data communications endpoint for exchanging data between processes executing on the same host operating system

/var/run/docker.sock is a Unix domain socket. Sockets are used in your favorite Linux distribution to allow different processes to communicate with each other. Like everything in Unix, sockets are files. In Docker, /var/run/docker.sock is the way to communicate with the main Docker process, and because it is a file, we can share it with other containers.

When you start a container and share the Docker socket with it, you give the container permission to manipulate the Docker host. Your container can now start or stop other containers, pull or create images on the Docker host, and even write to the host file system.


X11 applications may fail due to failures in sharing sockets with containers created by the master container.  There seems to be no problem sharing sockets between the vnc container and the master, but when the master creates a container and names its volume, the socket is not functional.

I didn't know that sockets could be mounted.
I'm starting the Jenkins container with the following command:
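(The exact command from the original post is not preserved; a typical invocation that bind-mounts the Docker socket looks roughly like this, with the published ports and image tag as assumptions.)

docker run -d -p 8080:8080 -p 50000:50000 \
  -v /var/run/docker.sock:/var/run/docker.sock \
  jenkins/jenkins:lts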
Jenkins is running and "sees" a change in the repository.
It then tries to build and run a Docker container by using the bind-mounted Docker socket.



  • Differentiate between COPY and ADD commands that are used in a Dockerfile?
COPY provides just the basic support of copying local files into the container, whereas ADD provides additional features such as remote URL download and tar extraction support.
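A small sketch of the difference (the file names and base image are illustrative):

cat > Dockerfile <<'EOF'
FROM alpine:3.18
# COPY: plain copy of local files or directories into the image
COPY config.yml /etc/myapp/config.yml
# ADD: can also fetch remote URLs, and auto-extracts local tar archives
ADD vendor.tar.gz /opt/vendor/
EOF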

Can a container restart by itself?
Yes, but only when a restart policy (one of the Docker-defined policies) is specified with the docker run command.
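For example (the image name is a placeholder):

# Restart automatically unless the container is explicitly stopped
docker run -d --restart unless-stopped myapp:latest
# Or retry at most three times after a non-zero exit
docker run -d --restart on-failure:3 myapp:latest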

Can you tell the differences between a docker Image and Layer?
Image: An image is built up from a series of read-only layers. A container is created from an image, and builds are fast because the layer produced by each step can be cached.
Layer: Each layer corresponds to an instruction in the image's Dockerfile. In simple terms, a layer is itself an image: the intermediate image produced by running that one instruction.

What is the purpose of the volume parameter in a docker run command?
docker run -v /data/app:/usr/src/app myapp
This mounts the host directory /data/app onto the /usr/src/app directory inside the container.
The volume parameter is used for syncing a directory of a container with a directory on the host:
it lets the container use data files from the host without needing to be restarted, and
it protects the data if the container is deleted: even when the container is removed, its data still exists in the volume-mapped host location, making volumes the easiest way to persist container data.

Where are docker volumes stored in docker?
Volumes are created and managed by Docker and stored in a part of the host filesystem that Docker manages (/var/lib/docker/volumes/ on Linux); non-Docker processes should not modify this part of the filesystem.
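For example (the volume and image names are arbitrary), the host path can be seen with docker volume inspect:

docker volume create mydata
# "Mountpoint" will show something like /var/lib/docker/volumes/mydata/_data
docker volume inspect mydata
# Attach the named volume to a container
docker run -d -v mydata:/var/lib/app myapp:latest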
 
Can you differentiate between Daemon Logging and Container Logging?
Daemon level: the Docker daemon itself logs at levels such as Debug, Info, Error, and Fatal, controlled by the daemon's log-level setting.
Container level: the logs of an individual container are read with
docker logs <container_id>
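As a sketch (assuming a Linux host where the daemon config lives in /etc/docker/daemon.json and Docker is managed by systemd), the daemon log level can be raised and container logs followed like this:

# Daemon level: set the log level in daemon.json and restart the daemon (as root)
cat > /etc/docker/daemon.json <<'EOF'
{
  "log-level": "debug"
}
EOF
systemctl restart docker
# Container level: follow the last 100 lines of a container's logs
docker logs --tail 100 -f <container_id>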

What is the best way of deleting a container?
- docker stop <container_id>
- docker rm <container_id>

  • Can you tell the difference between CMD and ENTRYPOINT?

The CMD instruction provides defaults for an executing container; the defaults can be an executable, or just arguments that are passed to the ENTRYPOINT, and they are easily overridden on the docker run command line.

ENTRYPOINT specifies the command that will always be run when the container starts.
Used together, ENTRYPOINT configures the executable and CMD configures its default parameters.
If the Dockerfile does not define these instructions, they are inherited from the base image named in the FROM instruction.
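A small sketch of how the two combine (the image tag and ping arguments are illustrative):

cat > Dockerfile <<'EOF'
FROM alpine:3.18
# ENTRYPOINT always runs when the container starts
ENTRYPOINT ["ping"]
# CMD supplies default arguments that are easy to override
CMD ["-c", "3", "localhost"]
EOF
docker build -t ping-demo .
# Runs: ping -c 3 localhost
docker run --rm ping-demo
# CMD overridden at run time, so this runs: ping -c 1 8.8.8.8
docker run --rm ping-demo -c 1 8.8.8.8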

  • Docker Layer Caching (DLC) can reduce Docker image build times on CircleCI.
Docker Layer Caching (DLC) is a great feature to use if building Docker images is a regular part of your CI/CD process. DLC saves the image layers created within your jobs; it does not affect the actual container used to run your job.

DLC caches the individual layers of any Docker images built during your CircleCI jobs, and then reuses unchanged image layers on subsequent CircleCI runs, rather than rebuilding the entire image every time. In short, the less your Dockerfiles change from commit to commit, the faster your image-building steps will run.


As Docker processes your Dockerfile to determine whether a particular image layer is already cached, it looks at two things: the instruction being executed and the parent image.
Docker scans all of the children of the parent image and looks for one whose command matches the current instruction. If a match is found, Docker skips to the next instruction and repeats the process.
If a matching image is not found in the cache, a new image is created.
Since the cache relies on both the instruction being executed and the image generated from the previous instruction, it should come as no surprise that changing any instruction in the Dockerfile invalidates the cache for all of the instructions that follow it. Invalidating an image also invalidates all the children of that image.
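A common sketch that exploits this behaviour (assuming a Node.js application purely for illustration): copy only the dependency manifests first, so the expensive install layer stays cached while the application code changes.

cat > Dockerfile <<'EOF'
FROM node:18
WORKDIR /app
# Changes rarely, so this layer stays cached across most commits
COPY package.json package-lock.json ./
RUN npm install
# Changes often, so only the layers from here down are rebuilt
COPY . .
CMD ["node", "server.js"]
EOF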




How to reduce the size of Docker Images
    Use a .dockerignore file to remove unnecessary content from the build context
    Try to avoid installing unnecessary packages and dependencies
    Keep the layers in the image to a minimum
    Use alpine images wherever possible
    Use multi-stage builds, which are discussed below.


A multi-stage build divides the Dockerfile into multiple stages so that required artifacts can be passed from one stage to another, with the final artifact delivered in the last stage.

Previously, when the multi-stage build feature did not exist, it was very difficult to minimize image size. Every instruction in a Dockerfile adds a layer to the image, so we used to clean up each unneeded artifact before moving on to the next instruction, and we often wrote bash/shell scripts and applied hacks to remove the unnecessary artifacts.
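A minimal sketch of a multi-stage Dockerfile (a Go program is assumed purely for illustration): the build toolchain lives only in the first stage, and only the compiled artifact is copied into the small final image.

cat > Dockerfile <<'EOF'
# Stage 1: build the artifact with the full toolchain
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /bin/app .
# Stage 2: copy only the artifact into a minimal runtime image
FROM alpine:3.18
COPY --from=builder /bin/app /bin/app
CMD ["/bin/app"]
EOF
docker build -t myapp:slim .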

https://blog.logrocket.com/reduce-docker-image-sizes-using-multi-stage-builds/
https://circleci.com/docs/2.0/docker-layer-caching/
https://www.ctl.io/developers/blog/post/caching-docker-images
https://www.edureka.co/blog/interview-questions/docker-interview-questions/#DockerAdvancedQuestions
https://www.toptal.com/docker/interview-questions
https://www.guru99.com/docker-interview-questions.html
https://www.educative.io/blog/top-40-docker-interview-questions
https://www.interviewbit.com/docker-interview-questions/
https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/application/nginx-app.yaml
https://www.ctl.io/developers/blog/post/tutorial-understanding-the-security-risks-of-running-docker-containers
https://forums.docker.com/t/using-docker-in-a-dockerized-jenkins-container/322/9
https://nps.edu/web/c3o/support1
https://blog.fearcat.in/a?ID=01000-18e50b57-7ac9-4466-83ce-e3904cca07bc
https://en.wikipedia.org/wiki/Unix_domain_socket
http://bobtfish.github.io/blog/2013/10/06/read-only-bind-mounts-and-docker/
https://titanwolf.org/Network/Articles/Article?AID=33d13422-4d43-4955-9610-c0461ea53678
https://cloudacademy.com/blog/docker-vs-virtual-machines-differences-you-should-know/
https://docs.docker.com/config/containers/resource_constraints/
https://www.fastvue.co/tmgreporter/blog/understanding-hyper-v-cpu-usage-physical-and-virtual/
https://www.qnx.com/developers/docs/7.0.0/#com.qnx.doc.hypervisor.user/topic/perform/vcpu.html
https://stackoverflow.com/questions/41582969/how-does-docker-images-and-layers-work
https://www.edureka.co/blog/interview-questions/docker-interview-questions/



Sunday, February 23, 2020

SSL termination SSL offloading SSL acceleration


  • SSL termination refers to the process of decrypting encrypted traffic before passing it along to a web server.

What is SSL termination?
Decrypting all that encrypted traffic takes a lot of computational power, and the more encrypted pages your server needs to decrypt, the larger the burden.
SSL termination (or SSL offloading) is the process of decrypting this encrypted traffic. Instead of relying upon the web server to do this computationally intensive work, you can use SSL termination to reduce the load on your servers, speed up the process, and allow the web server to focus on its core responsibility of delivering web content.
https://www.f5.com/services/resources/glossary/ssl-termination


  • SSL acceleration refers to off-loading processor-intensive SSL encryption and decryption from a server to a device configured to accelerate the SSL encryption/decryption routine.

https://www.f5.com/services/resources/glossary/ssl-acceleration


  • SSL offloading is the process of removing the SSL-based encryption from incoming traffic to relieve a web server of the processing burden of decrypting and/or encrypting traffic sent via SSL. The processing is offloaded to a separate device designed specifically for SSL acceleration or SSL termination.

SSL termination is particularly useful when used with clusters of SSL VPNs, because it greatly increases the number of connections a cluster can handle.
https://www.f5.com/services/resources/glossary/ssl-offloading

Wednesday, February 19, 2020

jump box


  • A jump box is a system set up with multi-factor authentication (MFA), usually placed in a network DMZ, with very restricted access to the corporate network and no returning Internet access for any protocol. In other words, the jump box has only one path in, via SSH, and no other protocols are allowed outbound to the Internet or into the corporate network.

Since the jump box resides in the DMZ or another network that can be accessed via the Internet, great care should be taken to ensure its security by applying patches and updates as soon as they are made available. Additionally, the jump box shouldn’t host any protocols except for SSHD. The jump box has a single purpose as an SSH gateway into the corporate network. The only exception is for MFA purposes. Some MFA solutions require Internet access or at least some method of communicating with an authentication service inside the network. Time-based solutions are more secure, but any MFA solution is more secure than simple passwords alone.
No accounts on the jump box system should be accessible without using MFA unless it is a console login. The most secure type of MFA is to require that each user have a physical token such as a hardware token, which is a device that generates random numbers or alphanumeric sequences.
Additional Security

To further secure your jump servers, you should follow these suggestions:

    Disable or remove unnecessary protocols, daemons, and services.
    Never store SSH private keys on the jump server.
    Configure internal hosts with /etc/hosts.allow and /etc/hosts.deny files to control access.
    Create at least one secondary/backup jump box in case of failure.
    Use a restrictive, host-based firewall for all Linux systems.
    Set up a service such as Fail2Ban to resist brute-force attacks.
    Install a minimal distribution option.
    Set up NAT forwarding to your jump box.
VMs as Jump Boxes
A quick Internet search for “jump box” yields quite a few results for deploying jump boxes for Amazon Web Service (AWS) environments.
An additional layer of security is to limit the amount of time the jump box is available for use.
Summary
A jump box's sole purpose is to provide an SSH gateway into your internal network for administrators, and it should be made as secure as possible.
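For example (the hostnames are placeholders), OpenSSH's ProxyJump option lets administrators reach internal hosts through the jump box in a single step:

# One-off: hop through the jump box to an internal host
ssh -J admin@jump.example.com admin@internal-host.corp
# Or make it permanent in ~/.ssh/config
cat >> ~/.ssh/config <<'EOF'
Host internal-*.corp
    ProxyJump admin@jump.example.com
EOF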
http://www.linux-magazine.com/Online/Features/Jump-Box-Security

Bypassing WAF


  • Bypassing WAF: SQL Injection - Normalization Method

Bypassing WAF: SQL Injection – HPP (HTTP Parameter Pollution)
Bypassing WAF: SQL Injection – HPF (HTTP Parameter Fragmentation)
Bypassing WAF: Blind SQL Injection Using logical requests AND/OR
https://owasp.org/www-community/attacks/SQL_Injection_Bypassing_WAF

Data Masking


  • What is Data Masking?

Data masking, sometimes called data obfuscation, is the process of hiding original data using modified content.
The main reason data masking is used is to hide sensitive (personal) data stored in proprietary databases.
However, when masking data one shouldn’t forget that this data has to remain usable for other corporate activities, for example, for testing and (further) application development. Data masking is a very useful tool when a company needs to give access to its database(s) to outsource and third-party IT companies
Another situation where data masking may come in very handy is to mitigate operators’ errors
Companies usually trust their employees to make good and secure decisions, however many breaches are a result of operators’ errors.
Data masking can be done either statically or dynamically. As the name suggests, with static masking database administrators create a copy of the original data, keep the original somewhere safe, and replace it with a fake set of data.
When masking data dynamically, data is obfuscated on the go as an unauthorized database user will be trying to retrieve the data not intended for that user. Real-time masking also means that data never leaves the production database and, as a result, is less susceptible to security threats.
https://www.datasunrise.com/blog/professional-info/what-is-data-masking/

Tuesday, February 18, 2020

table-top exercise (TTX / TTE)

Tabletop Exercises: Six Scenarios to Help Prepare Your Cybersecurity Team

Exercise 1 The Quick Fix
Processes tested: Patch Management
Threat actor: Insider
Asset impacted: Internal Network
Applicable CIS Controls™:
CIS Control 2: Inventory and Control of Software Assets,
CIS Control 5: Secure Configuration for Hardware and Software on Mobile Devices, Laptops, Workstations and Servers,
CIS Control 6: Maintenance, Monitoring, and Analysis of Audit Logs


Exercise 2
A Malware Infection
Processes tested: Detection ability/User awareness
Threat actor: Accidental insider
Asset impacted: Network integrity
Applicable CIS Controls:
CIS Control 8: Malware Defenses,
CIS Control 9: Limitation and Control of Network Ports, Protocols, and Services,
CIS Control 12: Boundary Defense

Exercise 3
The Unplanned Attack
Processes tested: Preparation
Threat actor: Hacktivist
Asset impacted: Unknown
Applicable CIS Controls:
CIS Control 8: Malware Defenses,
CIS Control 12: Boundary Defense,
CIS Control 17: Implement a Security Awareness and Training Program,
CIS Control 19: Incident Response and Management


Exercise 4
The Cloud Compromise
Processes tested: Incident response
Threat actor: External threat
Asset impacted: Cloud
Applicable CIS Controls:
CIS Control 10: Data Recovery Capabilities,
CIS Control 13: Data Protection,
CIS Control 19: Incident Response and Management

Exercise 5
Financial Break-in
Processes tested: Incident Response
Threat actor: External Threat
Asset impacted: HR/Financial data
Applicable CIS Controls:
CIS Control 4: Controlled Use of Administrative Privileges,
CIS Control 16: Account Monitoring and Control,
CIS Control 19: Incident Response and Management

Exercise 6
The Flood Zone
Processes tested: Emergency response
Threat actor: External threat
Asset impacted: Emergency Operations Center Processes
Applicable CIS Controls:
CIS Control 7: Email and Web Browser Protections,
CIS Control 19: Incident Response and Management

https://www.cisecurity.org/wp-content/uploads/2018/10/Six-tabletop-exercises-FINAL.pdf


  • Running an Effective Incident Response Tabletop Exercise

Are you ready for an incident? Are you confident that your team knows the procedures, and that the procedures are actually useful? An incident response tabletop exercise is an excellent way to answer these questions.
I've outlined some steps to help ensure success for your scenario-based threat simulations.
First, identify your audience. This will help inform which type of exercise you'll want to run. Will it be an executive exercise or technical in nature?
Now that your scope and audience have been set, it is time to define your scenario.
Use the maturity of your organization's incident response (IR) capabilities and the threats to your business to help guide the selection of a scenario for the exercise
You must set a realistic scenario that truly exercises your organization.
For instance, a defense contractor will not have much need to practice a case of adware infection on a handful of machines, and a restaurant will not greatly benefit from preparing for a nation-state threat
Now that you have fully prepared, the steps that remain are executing the exercise and reporting the results.
We like to look at clients' incident response plans, their adherence to those plans, coordination among IR teams, communications (internal and external), and technical analysis.


The purpose of the TTX was to practice incident response procedures related to Information Security in order to identify potential weaknesses in people, process, and technology

https://blog.rapid7.com/2017/07/05/running-an-effective-tabletop-exercise/