Wednesday, December 30, 2020

Server room

  •  In a server room, there are two possible hazards related to relative humidity:

1. Electrostatic discharge: The possibility of electrostatic discharge arises when the humidity is very low. This possibility also increases when the temperature is low. Electrostatic discharge is quite hard for people to notice and generally does not cause injuries. However, a discharge of even 10 volts can damage hardware.

2. Corrosion: This occurs when metallic hardware gets wet, when small droplets form as a result of condensation of water vapor in the air, or when the hardware is otherwise exposed to water.

For example: in an environment with high humidity, the components inside servers can be damaged and data loss can occur. The main point here is to keep the humidity balanced within an optimum range, in an environment where condensation and electrostatic discharge are prevented. For this, the most suitable relative humidity range is between 40% and 55%.

(This range is also recommended by the TIA/EIA-942 standard.)
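A minimal sketch (illustrative only, thresholds taken from the 40-55% range above) of a humidity check that could feed a monitoring alarm:

# Hedged sketch: check a relative-humidity reading against the 40-55% RH range above.
RH_MIN, RH_MAX = 40.0, 55.0  # recommended relative humidity range (TIA/EIA-942)

def check_humidity(rh_percent: float) -> str:
    if rh_percent < RH_MIN:
        return f"ALARM: RH {rh_percent:.1f}% too low - electrostatic discharge risk"
    if rh_percent > RH_MAX:
        return f"ALARM: RH {rh_percent:.1f}% too high - condensation/corrosion risk"
    return f"OK: RH {rh_percent:.1f}% within the recommended range"

print(check_humidity(32.0))
print(check_humidity(48.0))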

https://hassasklima.com/bilgi-bankasi/tag/sistem%20odas%C4%B1%20nem%20oran%C4%B1%20ne%20olmal%C4%B1.html


  •  Excessive increases or decreases in the humidity of the air can damage computer systems. Too little humidity increases static electricity build-up and discharge, which can cause problems in magnetic tapes and similar components. Too much humidity can cause problems that extend to short circuits in the circuits of your computer systems. To prevent these problems, in environments requiring high reliability the humidity level must be kept within defined limits, and alarm systems must be triggered when the humidity level goes outside the defined limits.

 https://sistemnetwork.karabuk.edu.tr/sistem_odasi/sistem_odasi.html

SQL Injection Vulnerability

  •  SQL Injection 


Generally, the purpose of SQL injection is to convince the application to run SQL code that was not intended.

SQL injection occurs when an application processes user-provided data to create a SQL statement without first validating the input.


During a web application SQL injection attack, the malicious code is inserted into a web form field or the website's code to make the system execute a command shell or other arbitrary commands.


Packet switching divides data into packets, each sent individually.

If multiple routes are available between two points on a network, packet switching can choose the best one and fall back to secondary routes in case of failure.

Packets may take any path across a network and are reassembled by the receiving node.

Missing packets can be retransmitted, and out-of-order packets can be resequenced.


Finding a SQL Injection Vulnerability 


Test the SQL Server using a single quote ('). Doing so indicates whether the user input variable is sanitized or interpreted literally by the server.


If the server responds with an error message that says use 'a'='a' (or something similar), then it’s most likely susceptible to a SQL injection attack.


Use the SELECT command to retrieve data from the database or the INSERT command to add information to the database.


Here are some examples of variable field text you can use on a web form to test for SQL vulnerabilities:

Blah' or 1=1--

Login: blah' or 1=1--

Password: blah' or 1=1--

http://search/index.asp?id=blah' or 1=1--
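A minimal sketch (the target URL and error markers are assumptions, purely illustrative) of probing a parameter with the payloads above and watching the response for database error strings:

# Hedged sketch: send the test payloads to a hypothetical parameter and look for
# SQL error strings in the response body.
import urllib.error
import urllib.parse
import urllib.request

TARGET = "http://testsite.example/index.asp?id="   # hypothetical target URL
PAYLOADS = ["blah'", "blah' or 1=1--"]
ERROR_MARKERS = ["syntax error", "unclosed quotation mark", "odbc", "sql server"]

for payload in PAYLOADS:
    url = TARGET + urllib.parse.quote(payload)
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            body = resp.read().decode(errors="replace")
    except urllib.error.HTTPError as err:            # 500 responses often carry the error text
        body = err.read().decode(errors="replace")
    if any(marker in body.lower() for marker in ERROR_MARKERS):
        print("possible SQL injection point:", url)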

These commands and similar variations may allow a user to bypass a login depending on the structure of the database.

When entered in a form field, the commands may return many rows in a table or even an entire database table because the SQL Server is interpreting the terms literally.

The double dashes near the end of the command tell SQL to ignore the rest of the command as a comment


Here are some examples of how to use SQL commands to take control:

To get a directory listing, type the following in a form field:

Blah';exec master..xp_cmdshell "dir c:\*.* /s >c:\directory.txt"--

To create a file, type the following in a form field:

Blah';exec master..xp_cmdshell "echo hacker-was-here > c:\hacker.txt"--

To ping an IP address, type the following in a form field:

Blah';exec master..xp_cmdshell "ping 192.168.1.1"--


The Purpose of SQL Injection


SQL injection attacks are used by hackers to achieve certain results. Some SQL exploits will produce valuable user data stored in the database, and some are just precursors to other attacks.


Identifying SQL Injection Vulnerability

The purpose is to probe a web application to discover which parameters and user input fields are vulnerable to SQL injection.


Performing Database Finger-Printing

The purpose is to discover the type and version of database that a web application is using and “fingerprint” the database.

Knowing the type and version of the database used by a web application allows an attacker to craft database specific attacks.


Adding or Modifying Data

The purpose is to add or change information in a database.


Performing Denial of Service

These attacks are performed to shut down access to a web application, thus denying service to other users.

Attacks involving locking or dropping database tables also fall under this category.


Evading Detection

This category refers to certain attack techniques that are employed to avoid auditing and detection.


Bypassing Authentication

The purpose is to allow the attacker to bypass database and application authentication mechanisms.

Bypassing such mechanisms could allow the attacker to assume the rights and privileges associated with another application user.


Executing Remote Commands

These types of attacks attempt to execute arbitrary commands on the database. These commands can be stored procedures or functions available to database users.


Performing Privilege Escalation

These attacks take advantage of implementation errors or logical flaws in the database in order to escalate the privileges of the attacker.


SQL Injection Using Dynamic Strings 


static SQL statements

Many functions of a SQL database receive static user input where the only variable is the user input fields.

Such statements do not change from execution to execution.

They are commonly called static SQL statements


dynamic SQL statements

Some programs must build and process a variety of SQL statements at runtime.

In many cases the full text of the statement is unknown until application execution.

Such statements can, and probably will, change from execution to execution.

So, they are called dynamic SQL statements.


Dynamic SQL is an enhanced form of SQL that, unlike standard SQL, facilitates the automatic generation and execution of program statements.

Dynamic SQL is a term used to mean SQL code that is generated by the web application before it is executed.

Dynamic SQL is a flexible and powerful tool for creating SQL strings.


It can be helpful when you find it necessary to write code that can adjust to varying databases, conditions, or servers.

Dynamic SQL also makes it easier to automate tasks that are repeated many times in a web application.

A hacker can attack a web-based authentication form using SQL injection through the use of dynamic strings.


For example, the underlying code for a web authentication form on a web server may look like the following:

SQLCommand = "SELECT Username FROM Users WHERE Username = '"
SQLCommand = SQLCommand & strUsername
SQLCommand = SQLCommand & "' AND Password = '"
SQLCommand = SQLCommand & strPassword
SQLCommand = SQLCommand & "'"
strAuthCheck = GetQueryResult(SQLCommand)



A hacker can exploit the SQL injection vulnerability by entering a login and password in the web form that uses the following variables:

Username: kimberly

Password: graves' OR ''='



The SQL application would build a command string from this input as follows:

SELECT Username FROM Users
WHERE Username = 'kimberly'
AND Password = 'graves' OR ''=''


This query will return all rows from the Users table, regardless of whether kimberly is a real username in the database or graves is a legitimate password.

This is due to the OR statement appended to the WHERE clause.

The comparison ''='' will always return a true result, making the overall WHERE clause evaluate to true for all rows in the table.

This will enable the hacker to log in with any username and password.
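A minimal sketch of the same bypass (sqlite3 is used purely for illustration; the original example targets SQL Server, but the string-splicing flaw is identical):

# Hedged sketch: splicing user input into the SQL string, as in the form above.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Username TEXT, Password TEXT)")
conn.execute("INSERT INTO Users VALUES ('kimberly', 'secret')")

username = "kimberly"
password = "graves' OR ''='"   # attacker-supplied password field

sql = ("SELECT Username FROM Users WHERE Username = '" + username +
       "' AND Password = '" + password + "'")
print(sql)                           # shows the injected OR ''='' clause
print(conn.execute(sql).fetchall())  # returns the row even though the password is wrong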



SQL Injection Countermeasures


The cause of SQL injection vulnerabilities is relatively simple and well understood:

Insufficient validation of user input.

To address this problem, defensive coding practices, such as encoding user input and validation, can be used when programming applications.

It is a laborious and time-consuming process to check all applications for SQL injection vulnerabilities.


When implementing SQL injection countermeasures, review source code for the following programming weaknesses:

Single quotes

Lack of input validation

The first countermeasures for preventing a SQL injection attack are

Minimizing the privileges of a user’s connection to the database and

Enforcing strong passwords for SA and Administrator accounts.


You should also disable verbose or explanatory error messages so no more information than necessary is sent to the hacker;

Such information could help them determine whether the SQL Server is vulnerable


Another countermeasure for preventing SQL injection is checking user data input and validating the data prior to sending the input to the application for processing.

Some countermeasures to SQL injection are

Rejecting known bad input

Sanitizing and validating the input field
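A minimal sketch of those two countermeasures (input validation plus parameterized queries); sqlite3 stands in for the real database driver and the allowed character set is an assumption:

# Hedged sketch: reject known-bad input, then bind user input as parameters.
import re
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Users (Username TEXT, Password TEXT)")
conn.execute("INSERT INTO Users VALUES ('kimberly', 'secret')")

def login(username: str, password: str) -> bool:
    # Validate: allow only a conservative character set for the username.
    if not re.fullmatch(r"[A-Za-z0-9_]{1,32}", username):
        return False
    # Parameterize: input is bound as data, never spliced into the SQL string.
    row = conn.execute(
        "SELECT Username FROM Users WHERE Username = ? AND Password = ?",
        (username, password),
    ).fetchone()
    return row is not None

print(login("kimberly", "secret"))           # True
print(login("kimberly", "graves' OR ''='"))  # False - the injection is inert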


https://sct.emu.edu.tr/en/Documents/System%20Security.ppt

Tuesday, December 29, 2020

self-garbling virus

  •   A self-garbling virus attempts to hide from anti-virus software by garbling its own code. When these viruses spread, they change the way they are encoded so anti-virus software cannot find them. A small portion of the virus code decodes the garbled code when activated

https://infosecurity.txstate.edu/awareness/Cyber-Threats/Viruses/virus_defn.html

Thursday, December 24, 2020

Clickjacking

  • Clickjacking

Clickjacking, also known as a “UI redress attack”, is when an attacker uses multiple transparent or opaque layers to trick a user into clicking on a button or link on another page when they were intending to click on the top level page. Thus, the attacker is “hijacking” clicks meant for their page and routing them to another page, most likely owned by another application, domain, or both.

Using a similar technique, keystrokes can also be hijacked. With a carefully crafted combination of stylesheets, iframes, and text boxes, a user can be led to believe they are typing in the password to their email or bank account, but are instead typing into an invisible frame controlled by the attacker

Examples
For example, imagine an attacker who builds a web site that has a button on it that says “click here for a free iPod”. However, on top of that web page, the attacker has loaded an iframe with your mail account, and lined up exactly the “delete all messages” button directly on top of the “free iPod” button. The victim tries to click on the “free iPod” button but instead actually clicked on the invisible “delete all messages” button. In essence, the attacker has “hijacked” the user’s click, hence the name “Clickjacking”.

Defending against Clickjacking
There are two main ways to prevent clickjacking:

Sending the proper Content Security Policy (CSP) frame-ancestors directive response headers that instruct the browser to not allow framing from other domains. (This replaces the older X-Frame-Options HTTP headers.)
Employing defensive code in the UI to ensure that the current frame is the most top level window
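A minimal sketch of the first defense (assumes a plain Python http.server rather than any particular framework); the headers themselves are the standard ones named above:

# Hedged sketch: serve a page with CSP frame-ancestors plus the legacy X-Frame-Options header.
from http.server import BaseHTTPRequestHandler, HTTPServer

class NoFramingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html><body>protected page</body></html>"
        self.send_response(200)
        self.send_header("Content-Security-Policy", "frame-ancestors 'none'")  # modern defense
        self.send_header("X-Frame-Options", "DENY")                            # legacy fallback
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), NoFramingHandler).serve_forever()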

https://owasp.org/www-community/attacks/Clickjacking#:~:text=Clickjacking%2C%20also%20known%20as%20a,on%20the%20top%20level%20page.

  1. Clickjacking categories

Classic: works mostly through a web browser
Likejacking: utilizes Facebook's social media capabilities
Nested: clickjacking tailored to affect Google+
Cursorjacking: manipulates the cursor's appearance and location
MouseJacking: inject keyboard or mouse input via remote RF link
Browserless: does not use a browser
Cookiejacking: acquires cookies from browsers
Filejacking: capable of setting up the affected device as a file server
Password manager attack: clickjacking that utilizes a vulnerability in the autofill capability of browsers

Prevention

Client-side
NoScript
NoClickjack
GuardedID
Gazelle
Intersection Observer v2

Server-side
Framekiller
X-Frame-Options
Content Security Policy

https://en.wikipedia.org/wiki/Clickjacking

Tuesday, October 13, 2020

HPC fabric topology

  •  Exascale HPC Fabric Topology


Topologies – Fat Tree Example


Torus Topology

Mesh or 3D Torus

▪Mesh – each node is connected to 4 other nodes: positive and negative X and Y axis

▪3D mesh – each node is connected to 6 other nodes: positive and negative X, Y and Z axis

▪2D/3D torus – the ends of the 2D/3D meshes are connected
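A minimal sketch (grid size is an assumption) showing how neighbor sets differ between a 2D mesh and a 2D torus, as described in the bullets above:

# Hedged sketch: neighbors in a 2D mesh vs. a 2D torus of size NX x NY.
NX, NY = 4, 4

def mesh_neighbors(x, y):
    # Up to 4 neighbors along +/- X and +/- Y; nodes on the edge have fewer.
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for (i, j) in cand if 0 <= i < NX and 0 <= j < NY]

def torus_neighbors(x, y):
    # Same links, but the ends wrap around, so every node has exactly 4 neighbors.
    return [((x - 1) % NX, y), ((x + 1) % NX, y), (x, (y - 1) % NY), (x, (y + 1) % NY)]

print(mesh_neighbors(0, 0))   # corner node in a mesh: only 2 neighbors
print(torus_neighbors(0, 0))  # torus: always 4 neighbors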


Dragonfly+ Topology

What is Dragonfly Topology?

Dragonfly is a hierarchical topology with the following properties:

▪Several “groups”, connected together using all-to-all links

▪The topology inside each group can be any topology

▪Focus on reducing the number of long links and network diameter to reduce the total cost of the network

▪Requires Adaptive Routing to enable efficient operation


There are Different Dragonfly Topologies Options


Dragonfly+ Topology

▪Several “groups”, connected using all-to-all links

▪The topology inside each group can be any topology

▪Reduce total cost of network (fewer long cables)

▪Utilizes Adaptive Routing for efficient operations

▪Simplifies future system expansion


Future Expansion of Dragonfly+ Based System

▪Dragonfly+ is the only topology that allows system expansion at zero cost

▪While maintaining bisection bandwidth

▪No port reservation

▪No re-cabling


Dragonfly+ Simplifies Scale Deployment and Cost


http://www.hpcadvisorycouncil.com/events/2019/APAC-AI-HPC/uploads/2018/07/Exascale-HPC-Fabric-Topology.pdf





Wednesday, July 8, 2020

Courses of Action Matrix


  • How to Defend With the Courses of Action Matrix and Indicator Lifecycle Management

The seven phases of the kill chain cover all of the stages of a single intrusion that — when completed successfully — lead to a compromise.
Within each of these stages is also an opportunity for defenders to prevent a successful intrusion. 
The weaponization phase, for example, can reveal document metadata or the characteristics of the tools that are used by the attackers. The delivery phase, in turn, can tell you which email infrastructure is used or which web infrastructure has been set up for delivering a browser plugin exploit. 

The information that results from analyzing these phases will include, among other things, IoCs. These indicators describe your adversaries by providing details about the infrastructure they use, fingerprints of their actions and the tactics, techniques and procedures (TTPs) used to attack their victims.


How to Apply the Courses of Action Matrix
The indicators extracted when you analyze the different phases of the Cyber Kill Chain should be put into action to increase your defenses. There are essentially two significant categories of action: passive and active.
This categorization of actions is described in another model from Lockheed Martin: the courses of action matrix. 

passive actions: 
Discover: security information and event management (SIEM) or stored network data. The goal is to determine whether you have seen a specific indicator in the past.
Detect: These actions are most often executed via an intrusion detection system (IDS) or a specific logging rule on your firewall or application. It can also be configured as an alert in a SIEM when a specific condition is triggered.

It’s important to note that these actions are mutually exclusive, and only one can be applied at a time. 

active actions
Deny: Common examples include a firewall block or a proxy filter.
Disrupt: Examples include quarantining or memory protection measures.
Degrade: Degrading will not immediately fail an event, but it will slow down the further actions of the attacker. Throttling bandwidth is one way to degrade an intrusion.
Deceive: One way to do this is to put a honeypot in place and redirect the traffic, based on an indicator, towards the honeypot.
Destroy: The destroy action is rarely for "usual" defenders, as this is an offensive action against the attacker. These actions, including physical destructive actions and arresting the attackers, are usually left to law enforcement agencies.
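A minimal sketch of the matrix itself as a data structure; the phases and example measures below are illustrative assumptions, not a complete matrix:

# Hedged sketch: a partial courses-of-action matrix mapping kill-chain phases to actions.
courses_of_action = {
    "delivery": {
        "discover": "search SIEM / stored network data for the sending infrastructure",
        "detect":   "IDS or mail-gateway rule on the malicious attachment",
        "deny":     "block the sending infrastructure at the mail gateway or proxy",
    },
    "exploitation": {
        "detect":   "endpoint alert on the exploited process",
        "disrupt":  "quarantine the host / memory protection",
    },
}

for phase, actions in courses_of_action.items():
    for action, example in actions.items():
        print(f"{phase:13s} {action:9s} {example}")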

https://securityintelligence.com/how-to-defend-with-the-courses-of-action-matrix-and-indicator-lifecycle-management/

Monday, June 29, 2020

API hooking


  • API hooking is a technique by which we can instrument and modify the behavior and flow of API calls. API hooking can be done using various methods on Windows; techniques include memory breakpoints, DEP, and JMP instruction insertion.
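A minimal analogy in Python (this is runtime function interception, not Windows API hooking itself) to show the instrument-and-modify idea:

# Hedged sketch: intercept calls to time.sleep by swapping the function reference.
import time

_original_sleep = time.sleep       # keep a reference to the real implementation

def hooked_sleep(seconds):
    print(f"[hook] sleep({seconds}) intercepted, shortening to 0s")
    return _original_sleep(0)      # modified behavior: skip the real delay

time.sleep = hooked_sleep          # install the hook

time.sleep(5)                      # goes through the hook and returns immediately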

https://resources.infosecinstitute.com/api-hooking/#gref

Saturday, June 27, 2020

Scaffolding


  • Scaffolding, as used in computing, refers to one of two techniques: The first is a code generation technique related to database access in some model–view–controller frameworks; the second is a project generation technique supported by various tools. 


Code generation
Scaffolding is a technique supported by some model–view–controller frameworks, in which the programmer can specify how the application database may be used. The compiler or framework uses this specification, together with pre-defined code templates, to generate the final code that the application can use to create, read, update and delete database entries, effectively treating the templates as a "scaffold" on which to build a more powerful application.

Project generation
Complicated software projects often share certain conventions on project structure and requirements. For example, they often have separate folders for source code, binaries and code tests, as well as files containing license agreements, release notes and contact information
https://en.wikipedia.org/wiki/Scaffold_(programming)

Thursday, June 25, 2020

Return-oriented programming


  • Return-oriented programming

Return-oriented programming (ROP) is a computer security exploit technique that allows an attacker to execute code in the presence of security defenses such as executable space protection and code signing.
In this technique, an attacker gains control of the call stack to hijack program control flow and then executes carefully chosen machine instruction sequences that are already present in the machine's memory, called "gadgets". Each gadget typically ends in a return instruction and is located in a subroutine within the existing program and/or shared library code. Chained together, these gadgets allow an attacker to perform arbitrary operations on a machine employing defenses that thwart simpler attacks.
https://en.wikipedia.org/wiki/Return-oriented_programming


  • Executable space protection

In computer security, executable-space protection marks memory regions as non-executable, such that an attempt to execute machine code in these regions will cause an exception. It makes use of hardware features such as the NX bit (no-execute bit), or in some cases software emulation of those features. However, technologies that somehow emulate or supply an NX bit will usually impose a measurable overhead, while using a hardware-supplied NX bit imposes no measurable overhead.
https://en.wikipedia.org/wiki/Executable_space_protection


  • Code signing

Code signing is the process of digitally signing executables and scripts to confirm the software author and guarantee that the code has not been altered or corrupted since it was signed. The process employs the use of a cryptographic hash to validate authenticity and integrity.
The efficacy of code signing as an authentication mechanism for software depends on the security of underpinning signing keys. As with other public key infrastructure (PKI) technologies, the integrity of the system relies on publishers securing their private keys against unauthorized access. Keys stored in software on general-purpose computers are susceptible to compromise. Therefore, it is more secure, and best practice, to store keys in secure, tamper-proof, cryptographic hardware devices known as hardware security modules or HSMs
https://en.wikipedia.org/wiki/Code_signing

Side-channel attack


  • Side-channel attack

In computer security, a side-channel attack is any attack based on information gained from the implementation of a computer system, rather than weaknesses in the implemented algorithm itself (e.g. cryptanalysis and software bugs). Timing information, power consumption, electromagnetic leaks or even sound can provide an extra source of information, which can be exploited.
General classes of side channel attack include:

    Cache attack — attacks based on attacker's ability to monitor cache accesses made by the victim in a shared physical system as in virtualized environment or a type of cloud service.
    Timing attack — attacks based on measuring how much time various computations (such as, say, comparing an attacker's given password with the victim's unknown one) take to perform.
    Power-monitoring attack — attacks that make use of varying power consumption by the hardware during computation.
    Electromagnetic attack — attacks based on leaked electromagnetic radiation, which can directly provide plaintexts and other information. Such measurements can be used to infer cryptographic keys using techniques equivalent to those in power analysis or can be used in non-cryptographic attacks, e.g. TEMPEST (aka van Eck phreaking or radiation monitoring) attacks.
    Acoustic cryptanalysis — attacks that exploit sound produced during a computation (rather like power analysis).
    Differential fault analysis — in which secrets are discovered by introducing faults in a computation.
    Data remanence — in which sensitive data are read after supposedly having been deleted. (i.e. Cold boot attack)
    Software-initiated fault attacks — Currently a rare class of side-channels, Row hammer is an example in which off-limits memory can be changed by accessing adjacent memory too often (causing state retention loss).
    Optical - in which visual recording can read secrets and sensitive data using a high-resolution camera or other devices that have such capabilities.
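For the timing-attack entry above, a minimal sketch of why a naive comparison leaks information and how a constant-time comparison removes the leak (the secret string is an assumption):

# Hedged sketch: naive early-exit comparison vs. constant-time comparison.
import hmac
import timeit

SECRET = "correct-horse-battery-staple"

def naive_compare(guess: str) -> bool:
    # Returns at the first mismatch, so run time depends on how many
    # leading characters of the guess are correct.
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False
    return True

wrong_early = "x" * len(SECRET)      # mismatch at position 0
wrong_late = SECRET[:-1] + "x"       # mismatch only at the last position

print(timeit.timeit(lambda: naive_compare(wrong_early), number=200_000))
print(timeit.timeit(lambda: naive_compare(wrong_late), number=200_000))   # measurably slower

# Constant-time comparison, independent of where the mismatch occurs.
print(hmac.compare_digest(wrong_late.encode(), SECRET.encode()))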
https://en.wikipedia.org/wiki/Side-channel_attack

Functional Non-functional testing (cloud)

  • The main difference between functional and nonfunctional testing

Functional requirements: describe the behavior/execution of the software system
Non-functional requirements: describe the performance or usability of the software system

Here are some of the common functional testing techniques:

Installation testing – for desktop or mobile application, testing proper installation
Boundary value analysis – testing of the boundaries of numerical inputs
Equivalence partitioning – grouping tests together to reduce overlap of similar functional tests
Error guessing – assessing where functional issues are most likely to be found and testing these more extensively than other areas
Unit testing – testing performed at the smallest level of the software—not how the system is functioning as a whole, but whether each unit is executing properly
API testing – checks that internal and external APIs are functioning properly, including data transfer and authorization
Regression testing – tests that are performed to verify that new software changes did not have adverse effects on existing functionality (most common automation technique)
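A minimal sketch of boundary value analysis from the list above, written as a unit test against a hypothetical validate_age() function (the 18-65 range is an assumption):

# Hedged sketch: test just below, on, and just above each boundary of the valid range.
import unittest

def validate_age(age: int) -> bool:
    return 18 <= age <= 65

class BoundaryValueTests(unittest.TestCase):
    def test_boundaries(self):
        cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
        for age, expected in cases.items():
            self.assertEqual(validate_age(age), expected, msg=f"age={age}")

if __name__ == "__main__":
    unittest.main()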


These are the chief nonfunctional testing techniques:

Load testing – tests performed on simulated environment to test the behavior of the system during expected conditions (various number of users)
Stress testing – testing performance when low on resources, such as server issues or lack of hard drive space on a device
Scalability testing – checking a system’s ability to scale with increased usage and to what extent performance is affected
Volume testing – testing performance with a high volume of data, not necessarily high number of users, but could be one user performing a high-volume task, such as a multiple-file upload
Security testing – tests performed to uncover how vulnerable the system is to attacks, and how well data is protected
Disaster recovery testing – checks on how quickly a system can recover following a crash or major issue
Compliance testing – tests of the software system against any set of standards (whether due to industry regulations or a company’s set of standards)
Usability testing – testing whether the GUI is consistent and if the application as a whole is intuitive and easy to use

https://testlio.com/blog/whats-difference-functional-nonfunctional-testing/#:~:text=Functional%20requirements%20are%20the%20WHAT,customer%20expectations%20are%20being%20met.

  • Differences between Functional and Non-functional Testing


Functional Testing:
Functional testing is a type of software testing in which the system is tested against the functional requirements and specifications. Functional testing ensures that the requirements or specifications are properly satisfied by the application. This type of testing is particularly concerned with the result of processing. It focuses on simulating actual system usage but does not make any assumptions about the system's internal structure.
Non-functional Testing:
Non-functional testing is a type of software testing that is performed to verify the non-functional requirements of the application. It verifies whether the behavior of the system is as per the requirement or not. It tests all the aspects which are not tested in functional testing.
https://www.geeksforgeeks.org/differences-between-functional-and-non-functional-testing/
  • The implementation under test is that which implements the base standard(s) being tested.
https://link.springer.com/referenceworkentry/10.1007%2F978-0-387-73003-5_700

Implementation under test, a term used in technological vulnerability analysis, particularly protocol evaluation
https://en.wikipedia.org/wiki/IUT

Implementation Under Test (IUT) That part of a real system which is to be tested, which should be an implementation of applications, services or protocols.
https://portal.etsi.org/CTI/CTISupport/Glossary.htm

  • The main difference is that regression testing is designed to test for bugs you don't expect to be there, whereas retesting is designed to test for bugs you do expect to be there.
The point of regression testing is to ensure that new updates or features added to software don’t break any previously released updates or features.
To perform regression testing you typically have a regression suite – a series of test cases set up to test these older features.
Regression test cases are often automated because these tests build up as the software changes or grows

retesting is designed to test specific defects that you’ve already detected (typically during your regression testing).
In other words, regression testing is about searching for defects, whereas retesting is about fixing specific defects that you’ve already found.
https://www.leapwork.com/blog/difference-between-retesting-and-regression-testing

Regression testing is performed for passed test cases while Retesting is done only for failed test cases
Regression testing checks for unexpected side-effects while Re-testing makes sure that the original fault has been corrected.
Regression Testing doesn’t include defect verification whereas Re-testing includes defect verification.
Regression testing is known as generic testing whereas Re-testing is planned testing.
Regression Testing is possible with the use of automation whereas Re-testing is not possible with automation.
https://www.guru99.com/re-testing-vs-regression-testing.html

  • Examples of Functional testing are
Black Box testing
https://www.guru99.com/functional-testing.html

Black-box testing. Contrary to white-box testing, black-box testing involves testing against a system where the internal code, paths and infrastructure are not visible. Thus, testers use this method to validate expected outputs against specific inputs. 
https://www.applause.com/blog/functional-testing-types-examples

  • It helps detect defects in the software.
Testing is done by executing the program. 
Dynamic testing involves both functional and non-functional testing.
https://www.professionalqa.com/dynamic-testing
  •  Ensures the functionality, reliability, and performance between the integrated modules.
 To be precise the success of Integration Testing lies in the perfection of the test plan.
 https://www.professionalqa.com/integration-testing

Wednesday, June 24, 2020

ITIL v4

From v3 to 4 – This is the new ITIL

The key elements of ITIL 4 are the four dimensions, the guiding principles, the move from processes to practices, and the Service Value System, providing a holistic approach to the co-creation of value through service relationships.

Service value system
The service value system (SVS) is a key component of ITIL 4, which facilitates value co-creation. It describes how all the components and activities of an organization work together to enable value creation. As the SVS has interfaces with other organizations, it forms an ecosystem and can also create value for those organizations, their customers and stakeholders.
At the heart of the SVS is the service value chain – a flexible operating model for the creation, delivery and continual improvement of services. The service value chain defines six key activities: plan; improve; engage; design and transition; obtain/build; and deliver and support. They can be combined in many different sequences, which means the service value chain allows an organization to define a number of variants of value streams, e.g. the v3 service lifecycle.

The four dimensions
A holistic approach to service management is key in ITIL 4.
The four dimensions are:

    Organizations and people: An organization needs a culture that supports its objectives, and the right level of capacity and competency among its workforce.
    Information and technology: In the SVS context, this includes the information and knowledge as well as the technologies required for the management of services.
    Partners and suppliers: This refers to an organization’s relationships with those other businesses that are involved in the design, deployment, delivery, support, and continual improvement of services.
    Value streams and processes: How the various parts of the organization work in an integrated and coordinated way is important to enable value creation through products and services
   
Guiding principles
ITIL 4 has seven guiding principles.
The ITIL 4 guiding principles are:

    Focus on value
    Start where you are
    Progress iteratively with feedback
    Collaborate and promote visibility
    Think and work holistically
    Keep it simple and practical
    Optimize and automate

ITIL 4’s focus on collaboration, automation, and keeping things simple, reflect principles found in Agile, DevOps and Lean methodologies.

From processes to practices
ITIL has so far used "processes" to manage IT services.
ITIL 4 refers to these as "practices", a fundamental part of the ITIL 4 framework. The SVS includes 34 management practices, which are sets of organizational resources for performing work or accomplishing an objective.
The ITIL practices share the same value and importance as the current ITIL processes but follow a more holistic approach.

The holistic approach
ITIL 4 puts service management in a strategic context. It looks at ITSM, Development, Operations, business relationships and governance holistically and brings the different functions together.
https://www.axelos.com/news/blogs/february-2019/from-v3-to-4-this-is-the-new-itil


  • Machine learning is the key to AI adoption in ITSM 

Machine Learning and ITSM
An example of a common use of machine learning in ITSM is the use of Chatbots or virtual assistants. While initially the goal is to program a bot with known questions and answers, there is also a significant focus on learning over time to respond to questions that have not been programmed previously based upon an attempt to understand the question’s intent.
ITSM and Automation
In ITSM, a vast amount of the work that is done to deliver and support services is repetitive and is a likely candidate for automation. Teaching ITSM tools to look for patterns and learn from past events, incidents, problems, known errors
https://www.axelos.com/news/blogs/april-2019/machine-learning-is-the-key-to-ai-adoption-in-itsm


  • ITIL 4 and cloud-based services


The first wave of change occurred with organizations outsourcing more and more IT infrastructure and applications to the cloud. Advantages often cited for this move include cost reduction, decreased IT asset ownership and reduced complexity of IT.
The second wave of change is happening with organizations contemplating the concept of digital transformation, and then embarking on a digital transformation journey.
how new structures in ITIL 4 relate to using cloud services. These include:

    Impact of Cloud on IT Service Management & ITIL Lifecycle Processes
    ITIL 4 updated for a digital world
    Service Value System Considerations for Cloud
    Service Value Chains. Adapted for Cloud
    Value Stream Mapping
    Service Financial Management
    Service Level Agreements
    DevOps, Change Control and Velocity of Change
It is worth remembering that the proliferation of cloud-based services does not change the fundamentals of what frameworks such as ITIL, or a movement such as IT service management, aim at. They want to achieve quality products and services which are fit for purpose, fit for use and aligned to the strategic goals and needs of the organization.
https://www.axelos.com/news/blogs/march-2019/itil-4-and-cloud-based-services


  • ITIL 4 components

ITIL 4 consists of two key components:
    The four dimensions model
    The service value system (SVS).

ITIL 4 management practices
ITIL 4 includes 34 management practices as "sets of organizational resources designed for performing work or accomplishing an objective". For each practice, ITIL 4 provides various types of guidance, such as key terms and concepts, success factors, key activities, information objects, etc.

The 34 ITIL 4 practices are grouped into three categories:

    General management practices
    Service management practices
    Technical management practices
    

ITIL 4 and ITIL V3: What's the difference?
The service lifecycle has been dropped in ITIL 4 and the processes replaced with practices. But many of the ITIL 4 practices clearly correspond to the previous ITIL V3 processes. 
ITIL 4 also provides advice for integrating ITIL with other frameworks and methodologies like DevOps, Lean and Agile. 
https://wiki.en.it-processmaps.com/index.php/ITIL_4


Monday, June 15, 2020

container runtimes


  • Container Runtime


A container runtime is a lower-level component typically used in a Container Engine, but it can also be used by hand for testing. The Open Containers Initiative (OCI) Runtime Standard reference implementation is runc. This is the most widely used container runtime, but there are other OCI-compliant runtimes, such as crun, railcar, and katacontainers. Docker, CRI-O, and many other Container Engines rely on runc.

Kernel Namespace
When discussing containers, Kernel namespaces are perhaps the most important data structure, because they enable containers as we know them today. Kernel namespaces enable each container to have its own mount points, network interfaces, user identifiers, process identifiers, etc.
When you type a command in a Bash terminal and hit enter, Bash makes a request to the kernel to create a normal Linux process using a version of the exec() system call. A container is special because when you send a request to a container engine like docker, the docker daemon makes a request to the kernel to create a containerized process using a different system call called clone(). This clone() system call is special because it can create a process with its own virtual mount points, process ids, user ids, network interfaces, hostname, etc
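A minimal sketch of the namespace idea (assumes Linux with glibc at libc.so.6 and root privileges, or an existing user namespace); it calls unshare(2) rather than clone(), but the UTS-namespace effect is the same:

# Hedged sketch: give this process its own UTS (hostname) namespace via libc.unshare().
import ctypes
import socket

CLONE_NEWUTS = 0x04000000   # from <sched.h>

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare(CLONE_NEWUTS) failed; try running as root")

# This hostname change is visible only inside the new namespace, not on the host.
socket.sethostname("container-demo")
print("hostname in the new UTS namespace:", socket.gethostname())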

https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/#h.6yt1ex5wfo55


  • In computer programming, a runtime system, also called runtime environment, primarily implements portions of an execution model.Most programming languages have some form of runtime system that provides an environment in which programs run. This environment may address a number of issues including the management of application memory, how the program accesses variables, mechanisms for passing parameters between procedures, interfacing with the operating system, and otherwise. The compiler makes assumptions depending on the specific runtime system to generate correct code. Typically the runtime system will have some responsibility for setting up and managing the stack and heap, and may include features such as garbage collection, threads or other dynamic features built into the language

https://en.wikipedia.org/wiki/Runtime_system


  •   User namespaces allow non-root users to pretend to be root. Root-in-UserNS can have a "fake" UID 0 and also create other namespaces (MountNS, NetNS, ...).

https://indico.cern.ch/event/788994/contributions/3307330/attachments/1846774/3030272/CERN_Rootless_Containers__Unresolved_Issues.pdf

Thursday, June 4, 2020

podman buildah libpod Skopeo


  •     Buildah to facilitate building of OCI images

    Skopeo for sharing/finding container images on Docker registries, the Atomic registry, private registries, local directories and local OCI-layout directories.
    Podman for running containers without need for daemon.
    https://computingforgeeks.com/how-to-install-podman-on-ubuntu/


  •  How Docker CLI Works

   
The Docker CLI is a client/server operation: the Docker CLI communicates with the Docker Engine when it wants to create or manipulate the operations of a container. This client/server architecture can lead to problems in production because, for one, you have to start the Docker daemon before the Docker CLI comes alive. The Docker CLI then sends an API call to the Docker Engine to launch the Open Container Initiative (OCI) container runtime, in most cases runc, to start the container (projectatomic.io). What this means is that the launched containers are child processes of the Docker Engine.

What is Podman?
What then is Podman? Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System

Docker vs Podman

The major difference between Docker and Podman is that there is no daemon in Podman. It uses container runtimes as well, for example runc, but the launched containers are direct descendants of the podman process. This kind of architecture has its advantages, such as the following:

    Applied Cgroups or security constraints still control the container: Whatever cgroup constraints you apply on the podman command, the containers launched will receive those same constraints directly.
    Advanced features of systemd can be utilized using this model: This can be done by placing podman into a systemd unit file and hence achieving more.

What about Libpod?
Libpod just provides a library for applications looking to use the Container Pod concept, popularized by Kubernetes.
It allows other tools to manage pods/containers (projectatomic.io). Podman is the default CLI tool for using this library.
There are two other important libraries that make Podman possible:
    containers/storage – This library allows one to use copy-on-write (COW) file systems, required to run containers.
    containers/image – This library allows one to download and install OCI-based container images from container registries like Docker.io, Quay, and Artifactory, as well as many others (projectatomic.io).

A good example is that you can be running a full Kubernetes environment with CRI-O, building container images using Buildah and managing your containers and pods with Podman at the same time (projectatomic.io).

https://computingforgeeks.com/using-podman-and-libpod-to-run-docker-containers/


  • Podman helps users move to Kubernetes

Podman provides some extra features that help developers and operators in Kubernetes environments. There are extra commands provided by Podman that are not available in Docker. If you
are familiar with Docker and are considering using Kubernetes/OpenShift as your container platform, then Podman can help you.
Podman can generate a Kubernetes YAML file based on a running container using podman generate kube. The command podman pod can be used to help debug running Kubernetes pods along with the standard container commands.

What is Buildah and why would I use it?
Podman does do builds and for those familiar with Docker, the build process is the same. You can either build using a Dockerfile using podman build or you can run a container and make lots of changes and then commit those changes to a new image tag

Buildah can be described as a superset of commands related to creating and managing container images and, therefore, it has much finer-grained control over images. Podman’s build command contains a subset of the Buildah functionality. It uses the same code as Buildah for building.
The most powerful way to use Buildah is to write Bash scripts for creating your images—in a similar way that you would write a Dockerfile.
When Kubernetes moved to CRI-O based on the OCI runtime specification, there was no need to run a Docker daemon and, therefore, no need to install Docker on any host in the Kubernetes cluster for running pods and containers.
Kubernetes could call CRI-O and it could call runC directly. This, in turn, starts the container processes.
However, if we want to use the same Kubernetes cluster to do builds, as in the case of OpenShift clusters, then we needed a new tool to perform builds that would not require the Docker daemon and subsequently require that Docker be installed.
Such a tool, based on the containers/storage and containers/image projects, would also eliminate the security risk of the open Docker daemon socket during builds, which concerned many users

There are a couple of extra things practitioners need to understand about Buildah:
It allows for finer control of creating image layers.
Buildah’s run command is not the same as Podman’s run command.  Because Buildah is for building images, the run command is essentially the same as the Dockerfile RUN command.
Buildah can build images from scratch, that is, images with nothing in them at all.
In fact, looking at the container storage created as a result of a buildah from scratch command yields an empty directory. This is useful for creating very lightweight images that contain only the packages needed in order to run your application.

A good example use case for a scratch build is to consider the development images versus staging or production images of a Java application. During development, a Java application container image may require the Java compiler and Maven and other tools. But in production, you may only require the Java runtime and your packages. And, by the way, you also do not require a package manager such as DNF/YUM or even Bash. Buildah is a powerful CLI for this use case

Now that we had solved the Kubernetes runtime issue with CRI-O and runC, and we had solved the build problem with Buildah, there was still one reason why Docker was needed on a Kubernetes host: debugging
How can we debug container issues on a host if we don’t have the tools to do it? We would need to install Docker, and then we are back where we started with the Docker daemon on the host. Podman solves this problem.
Podman becomes a tool that solves two problems. It allows operators to examine containers and images with commands they are already familiar with, and it also provides developers with the same tools.
https://developers.redhat.com/blog/2019/02/21/podman-and-buildah-for-docker-users/


When considering how to implement something like this, we considered the following developer and user workflow:

    Create containers/pods locally using Podman on the command line.
    Verify these containers/pods locally or in a localized container runtime (on a different physical machine).
    Snapshot the container and pod descriptions using Podman and help users re-create them in Kubernetes.
    Users add sophistication and orchestration (where Podman cannot) to the snapshot descriptions and leverage advanced functions of Kubernetes
https://developers.redhat.com/blog/2019/01/29/podman-kubernetes-yaml/


  • The main motivation was to move away from the need of having a daemon that requires root access.

Podman, Skopeo and Buildah are a set of tools that you can use to manage and run container images

https://itnext.io/podman-and-skopeo-on-macos-1b3b9cf21e60

Tuesday, June 2, 2020

service mesh


  • What is a Service Mesh? 

a service mesh is a dedicated infrastructure layer for handling service-to-service communication. Although this definition sounds very much like a CNI implementation on Kubernetes, there are some differences. A service mesh typically sits on top of the CNI and builds on its capabilities. It also adds several additional capabilities like service discovery and security.
The components of a service mesh include:

    Data plane - made up of lightweight proxies that are distributed as sidecars. Proxies include NGINX or Envoy; all of these technologies can be used to build your own service mesh in Kubernetes. In Kubernetes, the proxies are run as sidecars and are in every Pod next to your application.
    Control plane - provides the configuration for the proxies, issues the TLS certificates (certificate authority), and contains the policy managers. It can collect telemetry and other metrics, and some service mesh implementations also include the ability to perform tracing.

How is a service mesh useful?
The example shown below illustrates a Kubernetes cluster with an app composed of these services: a front-end, a backend and a database.

What does a service mesh provide?

Not all of the service meshes out there have all of these capabilities, but in general, these are the features you gain:

    Service Discovery (eventually consistent, distributed cache)
    Load Balancing (least request, consistent hashing, zone/latency aware)
    Communication Resiliency (retries, timeouts, circuit-breaking, rate limiting)
    Security (end-to-end encryption, authorization policies)
    Observability (Layer 7 metrics, tracing, alerting)
    Routing Control (traffic shifting and mirroring)
    API (programmable interface, Kubernetes Custom Resource Definitions (CRD))
 
Differences between service mesh implementations?
Istio
Has a Go control plane and uses Envoy as a proxy data plane. Istio is a complex system that does many things, like tracing, logging, TLS, authentication, etc. A drawback is the resource-hungry control plane:
the more services you have, the more resources you need to run them on Istio.
AWS App Mesh
still lacks many of the features that Istio has. For example it doesn’t include mTLS or traffic policies.
Linkerd v2
Also has a Go control plane and a Linkerd proxy data plane that is written in Rust.
Linkerd has some distributed tracing capabilities and just recently implemented traffic shifting.
The current 2.4 release implements the Service Mesh Interface (SMI) traffic split API, that makes it possible to automate Canary deployments and other progressive delivery strategies with Linkerd and Flagger.
Consul Connect
Uses a Consul control plane and requires the data plane to be managed inside the app. It does not implement Layer 7 traffic management nor does it support Kubernetes CRDs.

How does progressive delivery work with a service mesh?
Progressive delivery is Continuous Delivery with fine-grained control over the blast radius. This means that you can deliver new features of your app to a certain percentage of your user base.
In order to control the progressive deployments, you need the following:
    User segmentation (provided by the service mesh)
    Traffic shifting Management (provided by the service mesh)
    Observability and metrics (provided by the service mesh)
    Automation (service mesh add-on like Flagger)

Canary
A canary is used for when you want to test some new functionality typically on the backend of your application. Traditionally you may have had two almost identical servers: one that goes to all users and another with the new features that gets rolled out to a subset of users and then compared. When no errors are reported, the new version can gradually roll out to the rest of the infrastructure.

https://www.weave.works/blog/introduction-to-service-meshes-on-kubernetes-and-progressive-delivery



  • The Common Attributes of a Service Mesh


In the basic architectural diagram above,
the green boxes in the data plane represent applications,
the blue squares are service mesh proxies,
and the rectangles are application endpoints (a pod, a physical host, etc).
The control plane provides a centralized API for controlling proxy behavior in aggregate.
While interactions with the control plane can be automated (e.g. by a CI/CD pipeline), it’s typically where you–as a human–would interact with the service mesh.

Any service mesh in this guide has certain features

    Resiliency features (retries, timeouts, deadlines, etc)
    Cascading failure prevention (circuit breaking)
    Robust load balancing algorithms
    Control over request routing (useful for things like CI/CD release patterns)
    The ability to introduce and manage TLS termination between communication endpoints
    Rich sets of metrics to provide instrumentation at the service-to-service layer


What’s Different About a Service Mesh?
The service mesh exists to make your distributed applications behave reliably in production.
With microservices, service-to-service communication becomes the fundamental determining factor for how your applications behave at runtime.
Application functions that used to occur locally as part of the same runtime instead occur as remote procedure calls being transported over an unreliable network.

Product Comparisons
Linkerd
Built on Twitter’s Finagle library, Linkerd is written in Scala and runs on the JVM
Linkerd includes both a proxying data plane and the Namerd (“namer-dee”) control plane all in one package.
Notable features include:

    All of the “table stakes” features (listed above),
    Support for multiple platforms (Docker, Kubernetes, DC/OS, Amazon ECS, or any stand-alone machine),
    Built-in service discovery abstractions to unite multiple systems,
    Support for gRPC, HTTP/2, and HTTP/1.x requests + all TCP traffic.
Envoy
It is written as a high performance C++ application proxy designed for modern cloud-native services architectures.
Envoy is designed to be used either as a standalone proxying layer or as a “universal data plane” for service mesh architectures
Specifically on serving as a foundation for more advanced application proxies, Envoy fills the “data plane” portion of a service mesh architecture.
Envoy is a performant solution with a small resource footprint that makes it amenable to running it as either a shared-proxy or sidecar-proxy deployment mode
You can also find Envoy embedded in security frameworks, gateways, or other service mesh solutions like Istio
Notable features include:

    All of the “table stakes” features (when paired with a control plane, like Istio),
    Low p99 tail latencies at scale when running under load,
    Acts as a L3/L4 filter at its core with many L7 filters provided out of the box,
    Support for gRPC, and HTTP/2 (upstream/downstream),
    API-driven, dynamic configuration, hot reloads,
    Strong focus on metric collection, tracing, and overall observability.

Istio
Istio is designed to provide a universal control plane to manage a variety of underlying service proxies (it pairs with Envoy by default)
Istio initially targeted Kubernetes deployments, but was written from the ground up to be platform agnostic
The Istio control plane is meant to be extensible and is written in Go.
Its design goals mean that components are written for a number of different applications, which is part of what makes it possible to pair Istio with a different underlying data plane, like the commercially-licensed Nginx proxy. Istio must be paired with an underlying proxy.
Notable features include:

    All of the table stakes features (when paired with a data plane, like Envoy),
    Security features including identity, key management, and RBAC,
    Fault injection,
    Support for gRPC, HTTP/2, HTTP/1.x, WebSockets, and all TCP traffic,
    Sophisticated policy, quota, and rate limiting,
    Multi-platform, hybrid deployment.

Conduit
Conduit aims to drastically simplify the service mesh user experience for Kubernetes.
Conduit contains both a data plane (written in Rust) and a control plane (written in Go).

    All of the table stakes features (some are pending roadmap items as of Apr 2018),
    Extremely fast and predictable performance (sub-1ms p99 latency),
    A native Kubernetes user experience (only supports Kubernetes),
    Support for gRPC, HTTP/2, and HTTP/1.x requests + all TCP traffic.
https://thenewstack.io/which-service-mesh-should-i-use/



  • Kubernetes Service Mesh: A Comparison of Istio, Linkerd and Consul

Cloud-native applications are often architected as a constellation of distributed microservices, which are running in Containers.
This exponential growth in microservices creates challenges around figuring out how to enforce and standardize things like routing between multiple services/versions, authentication and authorization, encryption, and load balancing within a Kubernetes cluster.
Building on Service Mesh helps resolve some of these issues, and more. As containers abstract away the operating system from the application, Service Meshes abstract away how inter-process communications are handled.

What is Service Mesh
The thing that is most crucial to understand about microservices is that they are heavily reliant on the network.
Service Mesh manages the network traffic between services.
It does that in a much more graceful and scalable way compared to what would otherwise require a lot of manual, error-prone work and operational burden that is not sustainable in the long-run.
A service mesh layers on top of your Kubernetes infrastructure and makes communication between services over the network safe and reliable.

Service mesh allows you to separate the business logic of the application from observability, and network and security policies. It allows you to connect, secure, and monitor your microservices.

    Connect: Service Mesh enables services to discover and talk to each other. It enables intelligent routing to control the flow of traffic and API calls between services/endpoints. These also enable advanced deployment strategies such as blue/green, canaries or rolling upgrades, and more.
    Secure: Service Mesh allows you secure communication between services. It can enforce policies to allow or deny communication. E.g. you can configure a policy to deny access to production services from a client service running in development environment.
    Monitor: Service Mesh enables observability of your distributed microservices system. Service Mesh often integrates out-of-the-box with monitoring and tracing tools (such as Prometheus and Jaeger in the case of Kubernetes) to allow you to discover and visualize dependencies between services, traffic flow, API latencies, and tracing.
 
Service Mesh Options for Kubernetes:

Consul
Consul is part of HashiCorp’s suite of infrastructure management products
it started as a way to manage services running on Nomad and has grown to support multiple other data center and container management platforms including Kubernetes.
Consul Connect uses an agent installed on every node as a DaemonSet, which communicates with the Envoy sidecar proxies that handle routing and forwarding of traffic.
Istio
Istio has separated its data and control planes by using a sidecar loaded proxy which caches information so that it does not need to go back to the control plane for every call
The control planes are pods that also run in the Kubernetes cluster, allowing for better resilience in the event that there is a failure of a single pod in any part of the service mesh
Linkerd
its architecture mirrors Istio’s closely, with an initial focus on simplicity instead of flexibility.
This fact, along with it being a Kubernetes-only solution
While Linkerd v1.x is still supported, and it supports more container platforms than Kubernetes, new features (like blue/green deployments) are focused primarily on v2.

Istio has the most features and flexibility of any of these three service meshes by far, but remember that flexibility means complexity, so your team needs to be ready for that.
For a minimalistic approach supporting just Kubernetes, Linkerd may be the best choice.
If you want to support a heterogeneous environment that includes both Kubernetes and VMs and do not need the complexity of Istio, then Consul would probably be your best bet.

Migrating between service mesh solutions
Note that service mesh is not as an intrusive transformation as the one from monolithic applications to microservices, or from VMs to Kubernetes-based applications.
Since most meshes use the sidecar model, most services don’t know that they run as a mesh.
Service Mesh is useful for any type of microservices architecture since it helps you control traffic, security, permissions, and observability.

you can start standardizing on Service Mesh in your system design to lay the building blocks and the critical components for large-scale operations

Improving observability into distributed services: For example, if one service in the architecture becomes a bottleneck, the common way to handle it is through retries, but that can worsen the bottleneck due to timeouts. With service mesh, you can easily break the circuit to failed services to disable non-functioning replicas and keep the API responsive.

Blue/green deployments: Service mesh allows you to implement Blue/Green deployments to safely rollout new upgrades of the applications without risking service interruption.
First, you expose only a small subset of users to the new version, validate it, then proceed to release it to all instances in Production.

Chaos monkey / testing-in-production scenarios: with the ability to inject delays and faults to improve the robustness of deployments

‘Bridge’ / enabler for modernizing legacy applications: If you’re in the throes of modernizing your existing applications to Kubernetes-based microservices, you can use service mesh as a ‘bridge’ while you’re de-composing your apps. You can register your existing applications as ‘services’ in the Istio service catalog and then start migrating them gradually to Kubernetes without changing the mode of communication between services – like a DNS router. This use case is similar to using Service Directory.

API Gateway: If you’re bought into the vision of service mesh and want to start the rollout, you can already have your Operations team start learning the ropes of using service mesh by deploying it simply to measure your API usage.

Service mesh becomes the dashboard for microservices architecture. It’s the place for troubleshooting issues, enforcing traffic policies, rate limits, and testing new code. It’s your hub for monitoring, tracing and controlling the interactions between all services – how they are connected, perform and secured.
https://platform9.com/blog/kubernetes-service-mesh-a-comparison-of-istio-linkerd-and-consul/


  • HTTP/2 (originally named HTTP/2.0) is a major revision of the HTTP network protocol used by the World Wide Web.


Differences from HTTP 1.1
The proposed changes do not require any changes to how existing web applications work, but new applications can take advantage of new features for increased speed
What is new is how the data is framed and transported between the client and the server. Websites that are efficient minimize the number of requests required to render an entire page by minifying (reducing the amount of code and packing smaller pieces of code into bundles, without reducing its ability to function) resources such as images and scripts. However, minification is not necessarily convenient nor efficient and may still require separate HTTP connections to get the page and the minified resources. HTTP/2 allows the server to "push" content, that is, to respond with data for more queries than the client requested. This allows the server to supply data it knows a web browser will need to render a web page, without waiting for the browser to examine the first response, and without the overhead of an additional request cycle.

https://en.wikipedia.org/wiki/HTTP/2


WebSocket is a computer communications protocol, providing full-duplex communication channels over a single TCP connection. The WebSocket protocol was standardized by the IETF as RFC 6455 in 2011, and the WebSocket API in Web IDL is being standardized by the W3C. WebSocket is distinct from HTTP. Both protocols are located at layer 7 in the OSI model and depend on TCP at layer 4.

Although they are different, RFC 6455 states that WebSocket "is designed to work over HTTP ports 80 and 443 as well as to support HTTP proxies and intermediaries," thus making it compatible with the HTTP protocol. To achieve compatibility, the WebSocket handshake uses the HTTP Upgrade header to change from the HTTP protocol to the WebSocket protocol.
The WebSocket protocol enables interaction between a web browser (or other client application) and a web server with lower overhead than half-duplex alternatives such as HTTP polling, facilitating real-time data transfer from and to the server.
https://en.wikipedia.org/wiki/WebSocket




  • Why gRPC?


gRPC is a modern open source high performance RPC framework that can run in any environment. It can efficiently connect services in and across data centers with pluggable support for load balancing, tracing, health checking and authentication.
Bi-directional streaming and integrated auth
Bi-directional streaming and fully integrated pluggable authentication with HTTP/2-based transport
https://grpc.io/