Sunday, January 17, 2021

network topologies

  •  Network Topologies

This section is divided into four major categories. Within these sections, discussions of hybrid topologies may take place.


Bus Topology

Bus Topology is the simplest way to connect multiple clients: all nodes attach to a single shared cable.


Ring Topology

Ring Topology is where each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node in a ring.


Star Topology

Star Topology is one of the most common networking topologies: every node connects to a central node.


Mesh Topology

Mesh Topology is where each node in a network may act as an independent router.


http://webpage.pace.edu/ms16182p/networking/topologies.html


  • Physical Network Topologies


2.1 Bus Network Topology

In Bus Network Topology, a single cable is used to connect all devices on the network. This cable is often referred to as the network backbone. When communication occurs between nodes, the sending device broadcasts the message to all nodes on the network, but only the intended recipient processes it. Advantages of this type of Physical Topology include ease of installation and minimization of the required cabling.
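As a toy illustration of this broadcast behaviour, here is a minimal Python sketch (all class and node names are hypothetical): every frame placed on the shared backbone reaches every attached node, but only the addressed node acts on it.

```python
# Minimal sketch of bus-style broadcast: one shared medium, every node
# sees every frame, but only the addressed node processes it.
class BusNetwork:
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)
        node.bus = self

    def broadcast(self, frame):
        for node in self.nodes:  # the backbone reaches every node
            node.receive(frame)

class Node:
    def __init__(self, address):
        self.address = address
        self.bus = None

    def send(self, dest, payload):
        self.bus.broadcast({"src": self.address, "dst": dest, "data": payload})

    def receive(self, frame):
        if frame["dst"] == self.address:  # all other nodes ignore the frame
            print(f'{self.address} got "{frame["data"]}" from {frame["src"]}')

bus = BusNetwork()
a, b, c = Node("A"), Node("B"), Node("C")
for n in (a, b, c):
    bus.attach(n)
a.send("C", "hello")  # only C prints; B silently discards the frame
```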


2.2 Ring Network Topology

Ring Network Topology has each node in the network connected to two other nodes, with the first and last nodes also connected to complete the ring. Messages from one node to another travel from originator to destination via the set of intermediate nodes, which serve as active repeaters for messages intended for other nodes. Some forms of Ring Network Topology have messages traveling in a common direction around the ring (either clockwise or counterclockwise), while other forms of this configuration (called Bi-directional Rings) have messages flowing in either direction with the help of two cables between each pair of connected nodes.
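A similarly hedged sketch of the repeater behaviour, assuming a unidirectional ring (names hypothetical): a frame hops from node to node until it reaches its destination, with each intermediate node acting as an active repeater.

```python
# Minimal sketch of a unidirectional ring: each node repeats frames
# not addressed to it to its successor around the ring.
class RingNode:
    def __init__(self, address):
        self.address = address
        self.next = None  # successor on the ring

    def receive(self, frame, hops=0):
        if frame["dst"] == self.address:
            print(f'{self.address} received "{frame["data"]}" after {hops} hops')
        else:
            self.next.receive(frame, hops + 1)  # act as an active repeater

# Build a four-node ring: A -> B -> C -> D -> back to A.
nodes = [RingNode(x) for x in "ABCD"]
for i, node in enumerate(nodes):
    node.next = nodes[(i + 1) % len(nodes)]

# Inject a frame at A addressed to D; it travels A -> B -> C -> D.
nodes[0].receive({"dst": "D", "data": "hi"})
```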


2.3 Star Network Topology

Star Network Topology requires the use of a central top level node to which all other nodes are connected. This top level node may be a computer, a simple switch, or just a common connection point. Messages received by the top level node can either be broadcast to all subordinate nodes or, if the top level device is capable enough (e.g. a switch rather than a simple hub), sent only to the desired subordinate node. Inter-node messaging delays are reduced with this configuration. An important advantage of the Star Network Topology comes from the localization of cabling failures inherent in this configuration: failure in the connection between the top level node and any subordinate node, or failure in a subordinate node, will not disrupt the entire network.
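To make the broadcast-versus-targeted distinction concrete, here is a hedged Python sketch (hypothetical names): a simple hub repeats every frame to all spokes, while a switch that knows its attached addresses forwards only to the intended one.

```python
# Minimal sketch of a star topology: a central node relays frames between
# spokes, either by broadcasting (hub) or by targeted delivery (switch).
class CentralNode:
    def __init__(self, smart=False):
        self.spokes = {}    # address -> attached node
        self.smart = smart  # True behaves like a switch, False like a hub

    def connect(self, node):
        self.spokes[node.address] = node
        node.center = self

    def relay(self, frame):
        if self.smart and frame["dst"] in self.spokes:
            self.spokes[frame["dst"]].receive(frame)  # targeted delivery
        else:
            for addr, node in self.spokes.items():    # plain broadcast
                if addr != frame["src"]:
                    node.receive(frame)

class Spoke:
    def __init__(self, address):
        self.address = address
        self.center = None

    def send(self, dst, data):
        self.center.relay({"src": self.address, "dst": dst, "data": data})

    def receive(self, frame):
        if frame["dst"] == self.address:
            print(f'{self.address} got "{frame["data"]}"')

switch = CentralNode(smart=True)
for s in (Spoke("A"), Spoke("B"), Spoke("C")):
    switch.connect(s)
switch.spokes["A"].send("B", "ping")  # delivered to B only
```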


2.4 Tree Network Topology

Tree Network Topology is constructed either by making a set of Star Network Topologies subordinate to a central node, or by linking a set of Star Network Topologies together directly via a bus, thereby distributing the functionality of the central node among several Star Network Topology top level nodes.


2.5 Mesh Network Topology

Mesh Network Topologies capitalize on path redundancy. This topology is preferred when traffic volume between nodes is large. A proportion of nodes in this type of network have multiple paths to a given destination node. With the exception of the Bi-directional Ring (and only when a failure was detected), each of the topologies discussed so far had only one path from message source to message destination. Thus the probability of a single point of network failure is greatly reduced with Mesh Network Topology. A major advantage of the Mesh Network Topology is that source nodes determine the best route from sender to destination based upon such factors as connectivity, speed, and pending node tasks. A disadvantage of Mesh Network Topologies is the large cost incurred in setting up the network. A further disadvantage of this type of network is the requirement for each node to have a routing algorithm for path computation. A full mesh is one in which each node is directly connected to every other node in the network.
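Since each mesh node needs a routing algorithm to pick a path, here is a minimal breadth-first-search sketch over a hypothetical adjacency map; real routers also weigh link speed and load as described above, but hop count keeps the example short.

```python
from collections import deque

# Hypothetical partial mesh: node -> directly connected neighbours.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}

def shortest_route(graph, src, dst):
    """Breadth-first search for the fewest-hop path. Redundancy means
    A -> E still works if, say, link C-E fails (falls back to A-B-D-E)."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_route(mesh, "A", "E"))  # ['A', 'C', 'E']
```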



https://www.cse.wustl.edu/~jain/cse567-08/ftp/topology/index.html

Tuesday, January 5, 2021

UAV Drone Forensics

  •  UNMANNED AERIAL VEHICLE FORENSIC INVESTIGATION PROCESS: DJI PHANTOM 3 DRONE AS A CASE STUDY

However, existing UAV/drone forensics generally rely on conventional digital forensic investigation guidelines such as those of ACPO and NIST, which may not be entirely fit-for-purpose.


4.2 Proposed process

In our process, there are three main stages, namely: preparation, examination, and analysis/report. The first stage includes Steps 1 to 6. Steps 7 to 17 are part of the second stage, and the final stage includes Steps 18 to 20.

Step 1 - Identify and determine the chain of command

Step 2 - Have conventional forensic practices (e.g. DNA, fingerprints, and ballistics) already been implemented?

Step 3 - Identify the role of the device in conducting the offence (offence analysis)

Step 4 - Photographs

Step 5 - Identify the make and model

Step 6 - Open source investigation to identify device characteristics, potential data storage locations, and available forensic/non-forensic tools


Step 7 - Identify capabilities (video/audio recording, carrying capacity and technique)

Step 8 - Identify potential modifications

Step 9 - Identify data storage locations

Step 10 - Identify ports

Step 11 - Extract removable data storage mediums

Step 12 - Preserve evidence: clone/forensic copy of storage medium (a hash-verification sketch follows the step list)

Step 13 - Traditional interrogation of storage medium, using certified forensic tools

Step 14 - Extended interrogation of storage medium

This step is somewhat unique to UAV forensics. Typical digital forensic analysis is normally conducted using commercial forensic tools, which will usually have a proven record for accuracy. Any examination using non-validated tools is considered a risk. However, until commercial forensic tools for all UAVs are available, we may have little choice but to rely on open source tools to extract data of forensic interest.

Step 15 - Interrogation of the UAV/drone, potentially using a clone of any storage medium identified

Step 16 - Interrogation of peripheral devices: flight controller, mobile device, etc.

Step 17 - Extract removable data storage mediums (destructive)


Step 18 - Initial review of extracted data

Step 19 - Interpreting and translating data into a human-readable and evidential format

Step 20 - Report/Statement
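As a small illustration of the preservation step (Step 12), here is a hedged Python sketch of verifying that a forensic copy matches the original medium by comparing cryptographic hashes. The file paths are hypothetical; in practice a certified imaging tool would compute and log these hashes as part of the acquisition.

```python
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in streamed chunks so large disk images fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths to the acquired image and its working clone (Step 12).
original = sha256_of("/evidence/sdcard_original.img")
clone = sha256_of("/evidence/sdcard_clone.img")
print("clone verified" if original == clone else "HASH MISMATCH - do not proceed")
```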


In addition, UAVs could be integrated with radio communication services in the future. Hence, forensic acquisition and analysis of artefacts from radio-communication services [28] can also be explored.


https://arxiv.org/ftp/arxiv/papers/1804/1804.08649.pdf

Monday, January 4, 2021

gitops

  •   What is GitOps?

GitOps is a way of implementing Continuous Deployment for cloud native applications. It focuses on a developer-centric experience when operating infrastructure, by using tools developers are already familiar with, including Git and Continuous Deployment tools.


The core idea of GitOps is having a Git repository that always contains declarative descriptions of the infrastructure currently desired in the production environment and an automated process to make the production environment match the described state in the repository. If you want to deploy a new application or update an existing one, you only need to update the repository - the automated process handles everything else. It’s like having cruise control for managing your applications in production.


Why should I use GitOps?

Deploy Faster More Often

What is unique about GitOps is that you don’t have to switch tools for deploying your application. Everything happens in the version control system you use for developing the application anyway.

When we say “high velocity” we mean that every product team can safely ship updates many times a day — deploy instantly, observe the results in real time, and use this feedback to roll forward or back.


Easy and Fast Error Recovery

This makes error recovery as easy as issuing a git revert and watching your environment being restored. The Git record is then not just an audit log but also a transaction log: you can roll back and forth to any snapshot.
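For instance, a rollback can be as simple as this hedged sketch, run inside a checkout of the environment repository (the commit reference HEAD is illustrative):

```python
import subprocess

# Revert the offending commit in the environment repository and push;
# the GitOps automation then restores the previously deployed state.
subprocess.run(["git", "revert", "--no-edit", "HEAD"], check=True)
subprocess.run(["git", "push"], check=True)
```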


Easier Credential Management

Your environment only needs access to your repository and image registry. That’s it. You don’t have to give your developers direct access to the environment.

kubectl is the new ssh. Limit access and only use it for deployments when better tooling is not available.


Self-documenting Deployments

With GitOps, every change to any environment must happen through the repository. You can always check out the master branch and get a complete description of what is deployed where, plus the complete history of every change ever made to the system. This also gives you an audit trail of any changes in your system.


Shared Knowledge in Teams

Using Git to store complete descriptions of your deployed infrastructure allows everybody in your team to check out its evolution over time. With great commit messages everybody can reproduce the thought process of changing infrastructure and also easily find examples of how to set up new systems.


How does GitOps work?


Environment Configurations as Git repository

GitOps organizes the deployment process around code repositories as the central element. There are at least two repositories: the application repository and the environment configuration repository. The application repository contains the source code of the application and the deployment manifests to deploy the application. The environment configuration repository contains all deployment manifests of the currently desired infrastructure of a deployment environment.


Push-based vs. Pull-based Deployments

There are two ways to implement the deployment strategy for GitOps: Push-based and Pull-based deployments. 

The difference between the two deployment types is how they ensure that the deployment environment actually matches the desired infrastructure.

When possible, the Pull-based approach should be preferred, as it is considered more secure and thus the better practice for implementing GitOps.


Push-based Deployments

The Push-based deployment strategy is implemented by popular CI/CD tools such as Jenkins, CircleCI, or Travis CI. The source code of the application lives inside the application repository along with the Kubernetes YAMLs needed to deploy the app. Whenever the application code is updated, the build pipeline is triggered, which builds the container images and finally the environment configuration repository is updated with new deployment descriptors.
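A hedged sketch of that final pipeline stage, in Python for illustration: after the image build, the pipeline clones the environment repository, points a deployment manifest at the new image tag, and pushes the change. The repository URL, image name, and file path are hypothetical.

```python
import re
import subprocess

NEW_TAG = "1.2.3"                                  # produced by the build stage
IMAGE = "registry.example.com/myapp"               # hypothetical image name
ENV_REPO = "git@example.com:team/environment.git"  # hypothetical env repo

subprocess.run(["git", "clone", ENV_REPO, "env"], check=True)

# Update the deployment descriptor to reference the freshly built image.
manifest_path = "env/myapp/deployment.yaml"
with open(manifest_path) as f:
    manifest = f.read()
manifest = re.sub(rf"image: {re.escape(IMAGE)}:\S+",
                  f"image: {IMAGE}:{NEW_TAG}", manifest)
with open(manifest_path, "w") as f:
    f.write(manifest)

subprocess.run(["git", "-C", "env", "commit", "-am", f"Deploy {IMAGE}:{NEW_TAG}"],
               check=True)
subprocess.run(["git", "-C", "env", "push"], check=True)
```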

With this approach, it is indispensable to provide the pipeline with credentials for the deployment environment.

In some use cases, a Push-based deployment is inevitable, for example when running automated provisioning of cloud infrastructure. In such cases it is strongly recommended to utilize the fine-grained, configurable authorization system of the cloud provider for more restrictive deployment permissions.

Another important thing to keep in mind when using this approach is that the deployment pipeline is only triggered when the environment repository changes. It cannot automatically notice any deviations of the environment from its desired state. This means some form of monitoring needs to be in place, so that one can intervene if the environment doesn’t match what is described in the environment repository.


Pull-based Deployments

The Pull-based deployment strategy uses the same concepts as the push-based variant but differs in how the deployment pipeline works. Traditional CI/CD pipelines are triggered by an external event, for example when new code is pushed to an application repository. With the pull-based deployment approach, the operator is introduced. It takes over the role of the pipeline by continuously comparing the desired state in the environment repository with the actual state in the deployed infrastructure. Whenever differences are noticed, the operator updates the infrastructure to match the environment repository. Additionally the image registry can be monitored to find new versions of images to deploy.
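A hedged sketch of such a reconciliation loop, assuming Kubernetes and using only standard git and kubectl commands (directory layout and polling interval are illustrative; real operators watch resources continuously rather than polling):

```python
import subprocess
import time

MANIFESTS = "env/manifests"  # hypothetical directory in the environment repo

while True:
    # Fetch the latest desired state from the environment repository.
    subprocess.run(["git", "-C", "env", "pull", "--ff-only"], check=True)

    # kubectl diff exits with code 1 when the live state deviates from the repo.
    drift = subprocess.run(["kubectl", "diff", "-f", MANIFESTS])
    if drift.returncode == 1:
        # Converge the cluster back to the declared state; this also reverts
        # manual changes, keeping Git the single source of truth.
        subprocess.run(["kubectl", "apply", "-f", MANIFESTS], check=True)

    time.sleep(60)
```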

Just like the push-based deployment, this variant updates the environment whenever the environment repository changes. However, with the operator, changes can also be noticed in the other direction: whenever the deployed infrastructure changes in any way not described in the environment repository, these changes are reverted. This keeps all changes traceable in the Git log and effectively makes direct changes to the cluster impossible.

This change in direction solves the problem of push-based deployments, where the environment is only updated when the environment repository is updated.

Most operators support sending mail or Slack notifications if they cannot bring the environment to the desired state for any reason, for example if they cannot pull a container image.

Additionally, you probably should set up monitoring for the operator itself, as there is no longer any automated deployment process without it.

The operator should always live in the same environment or cluster as the application it deploys. Otherwise, the same problem arises as with the push-based approach, where credentials for doing deployments are known by the CI/CD pipeline. When the actual deploying instance lives inside the very same environment, no credentials need to be known by external services.

The authorization mechanism of the deployment platform in use can be utilized to restrict the permissions for performing deployments.

When using Kubernetes, RBAC configurations and service accounts can be utilized.


Working with Multiple Applications and Environments

When you are using a microservices architecture, you probably want to keep each service in its own repository.

GitOps can also handle such a use case. You can always just set up multiple build pipelines that update the environment repository. From there on the regular automated GitOps workflow kicks in and deploys all parts of your application.

Managing multiple environments with GitOps can be done by just using separate branches in the environment repository.

You can set up the operator or the deployment pipeline to react to changes on one branch by deploying to the production environment and another to deploy to staging.


FAQ

Is my project ready for GitOps?

All you need to get started is infrastructure that can be managed with declarative Infrastructure as Code tools.

I don’t use Kubernetes. Can I still use GitOps?

In principle, you can use any infrastructure that can be observed and described declaratively, and has Infrastructure as Code tools available.

However, currently most operators for pull-based GitOps are implemented with Kubernetes in mind.

Is GitOps just versioned Infrastructure as Code?

Declarative Infrastructure as Code plays a huge role in implementing GitOps, but it is not the whole story. GitOps takes the whole ecosystem and tooling around Git and applies it to infrastructure: Continuous Deployment systems guarantee that the currently desired state of the infrastructure is deployed in the production environment, and apart from that you gain all the benefits of code reviews, pull requests, and comments on changes for your infrastructure.

How to get secrets into the environment without storing them in git?

You can have secrets created within the environment which never leave the environment.

For example, you provision a database within the environment and give the secret to the applications interacting with the database only.

Another approach is to add a private key once to the environment (probably by someone from a dedicated ops team); from that point on, you can add secrets encrypted with the corresponding public key to the environment repository.
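A hedged sketch of that second approach using the Python cryptography package (key and file names are hypothetical): contributors only need the public key to add encrypted secrets to the repository, while the private key stays inside the environment.

```python
import base64
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

# Load the public half of the key pair; the private half lives only
# inside the environment (file name hypothetical).
with open("env-public-key.pem", "rb") as f:
    public_key = serialization.load_pem_public_key(f.read())

ciphertext = public_key.encrypt(
    b"db-password=s3cr3t",  # the secret to store
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# Commit the base64 ciphertext to the environment repository; only the
# in-environment holder of the private key can decrypt it.
with open("secrets/db-password.enc", "w") as f:
    f.write(base64.b64encode(ciphertext).decode())
```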

How does GitOps Handle DEV to PROD Propagation?

GitOps doesn’t provide a solution to propagating changes from one stage to the next one. We recommend using only a single environment and avoid stage propagation altogether. But if you need multiple stages (e.g., DEV, QA, PROD, etc.) with an environment for each, you need to handle the propagation outside of the GitOps scope, for example by some CI/CD pipeline.

We are already doing DevOps. What’s the difference to GitOps?

GitOps is a technique for implementing Continuous Delivery. DevOps and GitOps share principles like automation and self-serviced infrastructure, and these shared principles certainly make it easier to adopt a GitOps workflow when you are already actively employing DevOps techniques.

So, is GitOps basically NoOps?

You can use GitOps to implement NoOps, but it doesn’t automatically make all operations tasks obsolete. If you are using cloud resources anyway, GitOps can be used to automate those. Typically, however, some part of the infrastructure, like the network configuration or the Kubernetes cluster you use, isn’t managed decentrally by yourself but centrally by some operations team. So operations never really goes away.

Is there also SVNOps?

If you prefer SVN over Git, you may need to put more effort into finding tools that work for you, or even write your own. All available operators currently only work with Git repositories.

https://www.gitops.tech/

  •     ArgoCD: A GitOps operator for Kubernetes with a web interface
  •     Flux: The GitOps Kubernetes operator by the creators of GitOps — Weaveworks
  •     Gitkube: A tool for building and deploying Docker images on Kubernetes using git push
  •     JenkinsX: Continuous Delivery on Kubernetes with built-in GitOps
  •     Terragrunt: A wrapper for Terraform for keeping configurations DRY, and managing remote state
  •     WKSctl: A tool for Kubernetes cluster configuration management based on GitOps principles
  •     Helm Operator: An operator for using GitOps on K8s with Helm
  •     werf: A CLI tool to build images and deploy them to Kubernetes via the push-based approach

Friday, January 1, 2021

AI in Cybersecurity

  •  Machine Learning in Cybersecurity

We structured the report around the questions you should ask about ML tools. We chose this framing, rather than proposing a detailed guide of how to build an ML system in cybersecurity, because we want to enable you to learn what a good tool looks like.


1. What is your topic of interest?

2. What information will help you address the topic of interest?

3. How do you anticipate that an ML tool will address the topic of interest?

4. How will you protect the ML system against attacks in an adversarial, cybersecurity environment?

5. How will you find and mitigate unintended outputs and effects?

6. Can you evaluate the ML tool adequately, accounting for errors?

7. What alternative tools have you considered? What are the advantages and disadvantages of each one?

https://insights.sei.cmu.edu/cert/2019/12/machine-learning-in-cybersecurity.html