Sunday, February 24, 2019

Linux Virtual Server (LVS)


  • The LVS cluster system is also known as a load-balancing server cluster.

The goal of the project is to build a high-performance and highly available server for Linux using clustering technology, which provides good scalability, reliability and serviceability.
http://www.linuxvirtualserver.org/whatis.html



  • TCPHA

TCPHA is a subproject of LVS.
TCPHA can be used to build a high-performance and highly available server based on a cluster of Linux servers.

TCPHA implements an architecture for scalable content-aware request distribution in cluster-based servers. It implements kernel layer-7 switching based on TCP Handoff for the Linux operating system. Since the overhead of layer-7 switching in user space is very high, it is better to implement it inside the kernel in order to avoid the overhead of context switching and memory copying between user space and kernel space. Furthermore, responses are sent directly to clients without passing through the dispatcher, which greatly improves the performance of the cluster.

TCPHA, inspired by KTCPVS and IPVS, merges their strong points. In addition, installation and configuration are very simple.

http://dragon.linux-vs.org/~dragonfly/htm/tcpha.htm
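
To make the dispatch decision concrete, here is a minimal user-space sketch of content-aware (layer-7) routing: peek at the HTTP request line and pick a back-end pool by URL prefix. This only illustrates the selection logic; TCPHA performs it inside the kernel and hands the connection off so the real server replies to the client directly. The back-end addresses and routing rules below are invented.

```python
# Illustrative only: content-aware (layer-7) back-end selection by URL prefix.
# TCPHA does this in the kernel with TCP Handoff; addresses/rules are made up.

BACKENDS = {
    "/images": ["10.0.0.11", "10.0.0.12"],   # static-content servers
    "/cgi":    ["10.0.0.21"],                # dynamic-content server
}
DEFAULT_POOL = ["10.0.0.31", "10.0.0.32"]

def choose_backend(request_line: str) -> str:
    """Pick a real server from the requested URL prefix (round-robin omitted)."""
    try:
        path = request_line.split()[1]        # e.g. "GET /images/logo.png HTTP/1.1"
    except IndexError:
        return DEFAULT_POOL[0]
    for prefix, pool in BACKENDS.items():
        if path.startswith(prefix):
            return pool[0]
    return DEFAULT_POOL[0]

print(choose_backend("GET /images/logo.png HTTP/1.1"))   # -> 10.0.0.11
```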


  • The goal of Linux Virtual Server (LVS) is to provide a basic framework that directs network connections to multiple servers that share their workload.

Linux Virtual Server is a cluster of servers (one or more load balancers and several real servers for running services) which appears to be one large, fast server to an outside client.
This apparent single server is called a virtual server.

The real servers and the load balancers may be interconnected either by a high-speed LAN or by a geographically dispersed WAN. The load balancers can dispatch requests to the different servers. They make parallel services of the cluster appear as a virtual service on a single IP address (the virtual IP address or VIP). Request dispatching can use IP load balancing technologies or application-level load balancing technologies. Scalability of the system is achieved by transparently adding or removing nodes in the cluster. High availability is provided by detecting node or daemon failures and reconfiguring the system appropriately.
https://www.suse.com/documentation/sle_ha/book_sleha/data/cha_ha_lvs.html
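
As a rough illustration of the failure-detection side of this (not of LVS internals), the sketch below probes each real server's service port and recomputes the pool a dispatcher should use; the hosts, port and timeout are assumptions.

```python
# A minimal sketch of the failure-detection idea: probe each real server's
# service port and keep only responsive nodes in the active pool.
# Host addresses, the port and the 2-second timeout are illustrative.
import socket

REAL_SERVERS = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]
SERVICE_PORT = 80

def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def active_pool() -> list[str]:
    """Recompute the pool the load balancer should dispatch to."""
    return [h for h in REAL_SERVERS if is_alive(h, SERVICE_PORT)]

print(active_pool())
```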


  • TCP Splicing

TCPSP implements TCP splicing for the Linux kernel. TCP splicing is a technique to splice two connections inside the kernel, so that data relaying between the two connections can run at near-router speeds. This technique can be used to speed up layer-7 switching, web proxies and application firewalls running in user space.
TCPSP is released as a small software component of the Linux Virtual Server project.
http://www.linuxvirtualserver.org/software/tcpsp/index.html
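
For contrast, here is a sketch of the user-space alternative that TCP splicing avoids: relaying bytes between two established connections with recv/send, which copies every byte between kernel and user space. The two sockets are assumed to be connected elsewhere.

```python
# User-space relaying between two already-established TCP connections, i.e. the
# slow path that in-kernel splicing replaces: every byte crosses the
# kernel/user-space boundary twice. The buffer size is an arbitrary choice.
import select
import socket

def relay(client: socket.socket, server: socket.socket, bufsize: int = 65536) -> None:
    """Copy data in both directions until either side closes (simplified)."""
    peers = {client: server, server: client}
    while True:
        readable, _, _ = select.select(list(peers), [], [])
        for sock in readable:
            data = sock.recv(bufsize)          # kernel -> user-space copy
            if not data:
                return                         # one side closed; stop relaying
            peers[sock].sendall(data)          # user-space -> kernel copy
```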



  • Job Scheduling Algorithms in Linux Virtual Server


    Round-Robin Scheduling
    Weighted Round-Robin Scheduling
    Least-Connection Scheduling
    Weighted Least-Connection Scheduling
    Locality-Based Least-Connection Scheduling
    Locality-Based Least-Connection with Replication Scheduling
    Destination Hashing Scheduling
    Source Hashing Scheduling
    Shortest Expected Delay Scheduling
    Never Queue Scheduling

http://www.linuxvirtualserver.org/docs/scheduling.html
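
To make two of the entries above concrete, here is a toy sketch of weighted round-robin and least-connection selection. The server names, weights and connection counts are invented, and the real LVS schedulers run in the kernel.

```python
# Toy versions of two LVS scheduling ideas, purely to show the selection logic.
import itertools

SERVERS = {"rs1": 3, "rs2": 2, "rs3": 1}          # real server name -> weight

def weighted_round_robin():
    """Yield servers in proportion to their weights (simplified: same long-run
    ratio as wrr, without the interleaving the kernel scheduler does)."""
    expanded = [name for name, w in SERVERS.items() for _ in range(w)]
    return itertools.cycle(expanded)

def least_connection(active_conns: dict[str, int]) -> str:
    """Pick the real server with the fewest active connections."""
    return min(active_conns, key=active_conns.get)

wrr = weighted_round_robin()
print([next(wrr) for _ in range(6)])              # rs1 three times, rs2 twice, rs3 once
print(least_connection({"rs1": 12, "rs2": 7, "rs3": 9}))   # -> rs2
```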

Thursday, February 21, 2019

Requirements Capture


  • Requirements Capture is the process of analysing and identifying the requirements of a system and often involves a series of facilitated workshops attended by stakeholders of the system.

http://dthomas-software.co.uk/consulting/requirements-analysis-consulting/

  • User Requirements Capture is a research exercise that is undertaken early in a project life cycle to establish and qualify the scope of the project. The aim of the research is to understand the product from a user's perspective, and to establish users' common needs and expectations. The user requirements capture is useful for projects that lack focus or to validate the existing project scope. The research provides an independent user perspective when a project has been created purely to fulfil a business need. The requirements capture findings are then used to balance the business goals with the user needs to ensure the project is a success.

https://www.projectsmart.co.uk/what-is-user-requirements-capture.php

Tuesday, February 19, 2019

SOC (security operations center)

Building a Security Operations Center
Building a SOC requires collaboration and communication among multiple functions (people), disparate security products (technology), and varying processes and procedures (processes).
https://finland.emc.com/collateral/white-papers/rsa-advanced-soc-solution-sans-soc-roadmap-white-paper.pdf


  • The Five Characteristics of an Intelligence-Driven Security Operations Center



Key Challenges
Traditional security operations centers:
    Rely primarily on prevention technologies and on rule- and signature-based detection mechanisms that require prior knowledge of attacker methods, both of which are insufficient to protect against current threats
    Treat threat intelligence (TI) as a one-way product to be consumed, rather than as a process, leading to an intelligence-poor security strategy
    Treat incident response as an exception-based process rather than a continuous one

Recommendations
Adopt a mindset based on the assumption that the organization has already been compromised.

An intelligence-driven SOC approach has these five characteristics:
    Use multisourced threat intelligence strategically and tactically
    Use advanced analytics to operationalize security intelligence
    Automate whenever feasible
    Adopt an adaptive security architecture
    Proactively hunt and investigate


https://www.ciosummits.com/Online_Assets_Intel_Security_Gartner.pdf
  • An information security operations center (or "SOC") is a location where enterprise information systems (web sites, applications, databases, data centers and servers, networks, desktops and other endpoints) are monitored, assessed, and defended.
https://en.wikipedia.org/wiki/Information_security_operations_center
Security Operations Center (SOC) - DIY or Outsource?
DTS Solution - Building a SOC (Security Operations Center)
What is a Security Operations Center (SOC)?
SOC Team Presentation 1 - Security Operations Center
Security Operation Center - Design & Build
  • A network operations center (NOC) is one or more locations from which control is exercised over a computer, television broadcast, or telecommunications network.

Fortinet Security Fabric
Splunk: Using Big Data for Cybersecurity
Splunk Inc. provides the leading software platform for real-time Operational Intelligence.
How to configure Splunk Log Forwarder in McAfee ESM (forwarding events from various devices through a syslog relay)
Enterprise Security
(TVM) Threat and Vulnerability Management
(SOIR) Security Operations Incident Response


Real-Time Big Data Processing with Spark and MemSQL
Data Lake 3.0, Part II: A Multi-Colored YARN 
Best Practices for Building a Data Lake with Amazon S3
  • You see servers and devices, apps and logs, traffic and clouds. We see data—everywhere. Splunk® offers the leading platform for Operational Intelligence. 
Machine-generated data is one of the fastest growing and most complex areas of big data. It's also one of the most valuable, containing a definitive record of all user transactions, customer behavior, machine behavior, security threats, fraudulent activity and more. Splunk turns machine data into valuable insights no matter what business you're in. It's what we call Operational Intelligence.
http://www.splunk.com/
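As a small, hedged example of getting machine data into Splunk, the sketch below posts one JSON event to the HTTP Event Collector (HEC). The host, port and token are placeholders, and HEC has to be enabled with a valid token on the Splunk side for this to work.

```python
# Minimal sketch: push one machine-data event into Splunk via the HTTP Event
# Collector. URL and token below are placeholders, not a real deployment.
import json
import urllib.request

SPLUNK_HEC_URL = "https://splunk.example.com:8088/services/collector/event"
HEC_TOKEN = "00000000-0000-0000-0000-000000000000"   # placeholder token

def send_event(event: dict, sourcetype: str = "_json") -> None:
    payload = json.dumps({"event": event, "sourcetype": sourcetype}).encode()
    req = urllib.request.Request(
        SPLUNK_HEC_URL,
        data=payload,
        headers={"Authorization": f"Splunk {HEC_TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)                       # raises on HTTP errors

send_event({"user": "alice", "action": "login", "status": "success"})
```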
  • Welcome to Apache Flume

Flume is a distributed, reliable, and available service for efficiently collecting, aggregating, and moving large amounts of log data. It has a simple and flexible architecture based on streaming data flows. It is robust and fault tolerant with tunable reliability mechanisms and many failover and recovery mechanisms. It uses a simple extensible data model that allows for online analytic applications.
https://flume.apache.org/
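Assuming the Flume agent is configured with an HTTP source using the default JSONHandler (which expects a JSON array of events, each with "headers" and "body" fields), a client could hand it log lines roughly like this; the host and port are placeholders.

```python
# Sketch: send log events to an (assumed) Flume HTTP source with the default
# JSONHandler. Agent address and the "source" header are illustrative.
import json
import urllib.request

FLUME_HTTP_SOURCE = "http://flume-agent.example.com:44444"   # assumed HTTP source

def send_log_lines(lines: list[str]) -> None:
    events = [{"headers": {"source": "app01"}, "body": line} for line in lines]
    req = urllib.request.Request(
        FLUME_HTTP_SOURCE,
        data=json.dumps(events).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

send_log_lines(["user=alice action=login", "user=bob action=logout"])
```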
Flume Architecture
Real Time Data Processing using Spark Streaming
Why Lambda Architecture in Big Data Processing

  • What is OpenSOC?

OpenSOC is a Big Data security analytics framework designed to consume and monitor network traffic and machine exhaust data of a data center. OpenSOC is extensible and is designed to work at a massive scale.
The OpenSOC project is a collaborative open source development project dedicated to providing an extensible and scalable advanced security analytics tool. It has strong foundations in the Apache Hadoop Framework and values collaboration for high-quality community-based open source development.
http://opensoc.github.io/
Navigating the maze of Cyber Security Intelligence and Analytics

  • Fluentd is an open source data collector for a unified logging layer.

Fluentd allows you to unify data collection and consumption for better use and understanding of data.
https://www.fluentd.org/
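Assuming the Fluentd agent has the in_http input plugin enabled on its common default port 9880, a record can be emitted with a plain HTTP POST where the tag comes from the URL path; the tag and record below are made up.

```python
# Sketch: emit one record to a Fluentd agent via the in_http input plugin.
# Host, port 9880 and the tag "myapp.access" are assumptions.
import json
import urllib.request

FLUENTD_ENDPOINT = "http://localhost:9880/myapp.access"   # tag comes from the path

record = {"user": "alice", "action": "login"}
req = urllib.request.Request(
    FLUENTD_ENDPOINT,
    data=json.dumps(record).encode(),
    headers={"Content-Type": "application/json"},
)
urllib.request.urlopen(req)
```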
Building an Open Data Platform: Logging with Fluentd and Elasticsearch
Cloud Data Logging with Raspberry Pi