Thursday, June 30, 2016

quantum computer


  • Quantum computing is computing using quantum-mechanical phenomena, such as superposition and entanglement

A quantum computer is a device that performs quantum computing.
Such a computer is completely different from binary digital electronic computers based on transistors and capacitors.
Whereas common digital computing requires that the data be encoded into binary digits (bits), each of which is always in one of two definite states (0 or 1), quantum computation uses quantum bits, or qubits, which can be in superpositions of states.

https://en.wikipedia.org/wiki/Quantum_computing
  • What is a quantum computer?

A quantum computer is a device able to manipulate delicate quantum states in a controlled fashion, not dissimilar from the way an ordinary computer manipulates its bits.

What does a quantum computer look like?
A quantum computer looks like nothing you have on your desk, or in your office, or in your pocket. It is housed in a large unit known as a dilution refrigerator and is supported by multiple racks of electronic pulse-generating equipment. However, you can access our quantum computer with very familiar personal computing devices, such as laptops, tablets, and smartphones.

What is a qubit?
A qubit (pronounced “cue-bit” and short for quantum bit) is the physical carrier of quantum information. It is the quantum version of a bit and its quantum state can take values of 0, 1, or both at once, which is a phenomenon known as superposition.

What is a superposition?
A superposition is a weighted sum or difference of two or more states; for example, the state of the air when two or more musical tones are sounding at once. Ordinary, or “classical,” superpositions commonly occur in macroscopic phenomena involving waves.

How is superposition different from probability?
A set of n coins, each of which might be heads or tails, can be described as a probabilistic mixture of states, but it actually is in only one of them—we just don’t know which. For this reason quantum superposition is more powerful than classical probabilism. Quantum computers capable of holding their data in superposition can solve some problems exponentially faster than any known deterministic or probabilistic classical algorithm. A more technical difference is that while probabilities must be positive (or zero), the weights in a superposition can be positive, negative, or even complex numbers.
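
A toy illustration of that difference, as a minimal plain-Python sketch (no quantum SDK assumed; the state values are illustrative):

import math

# Classical coin: probabilities are non-negative and sum to 1.
coin = {"heads": 0.5, "tails": 0.5}
assert all(p >= 0 for p in coin.values())
assert math.isclose(sum(coin.values()), 1.0)

# Qubit in an equal superposition: the weights (amplitudes) may be
# negative or complex; measurement probabilities are their squared magnitudes.
amp_0 = 1 / math.sqrt(2)    # weight of state |0>
amp_1 = -1 / math.sqrt(2)   # weight of state |1> -- a negative weight is allowed
print({"0": abs(amp_0) ** 2, "1": abs(amp_1) ** 2})   # both ~0.5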


http://www.research.ibm.com/quantum/


  • Today, real quantum computers can be accessed through the cloud, and many thousands of people have used them to learn, conduct research, and tackle new problems.

Quantum computers could one day provide breakthroughs in many disciplines, including materials and drug discovery, the optimization of complex systems, and artificial intelligence.

Quantum and Chemistry
For challenges above a certain size and complexity, we don’t have enough computational power on Earth to tackle them. To stand a chance at solving some of these complex problems, we need a new kind of computing: one whose computational power also scales exponentially as the system size grows.

What makes it ‘quantum’?
All computing systems rely on a fundamental ability to store and manipulate information. Current computers manipulate individual bits, which store information as binary 0 and 1 states.
Millions of bits work together to process and display information.
Quantum computers leverage different physical phenomena (superposition, entanglement, and interference) to manipulate information. To do this, we rely on different physical devices: quantum bits, or qubits.

Superposition refers to a combination of states we would ordinarily describe independently. To make a classical analogy, if you play two musical notes at once, what you will hear is a superposition of the two notes.
Entanglement is a famously counter-intuitive quantum phenomenon describing behavior we never see in the classical world. Entangled particles behave together as a system in ways that cannot be explained using classical logic.
Quantum interference can be understood similarly to wave interference; when two waves are in phase, their amplitudes add, and when they are out of phase, their amplitudes cancel.
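
Sticking with the wave analogy, a minimal numeric sketch of interference (plain Python, treating complex numbers as phasors; the unit amplitudes are illustrative):

import cmath

# Two unit-amplitude waves represented as complex phasors.
in_phase = 1 + cmath.exp(1j * 0)             # same phase: amplitudes add
out_of_phase = 1 + cmath.exp(1j * cmath.pi)  # opposite phase: amplitudes cancel

print(abs(in_phase))       # 2.0   -> constructive interference
print(abs(out_of_phase))   # ~0.0  -> destructive interference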


In order to increase the computational power of quantum computing systems, improvements are needed along two dimensions. One is qubit count; the more qubits you have, the more states can in principle be manipulated and stored.
The second is to achieve lower error rates.
Combining these two concepts, we can create a single measure of a quantum computer’s power called quantum volume. Quantum volume measures the relationship between number and quality of qubits, circuit connectivity, and error rates of operations.
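
As a rough sketch of how those factors combine (assuming the commonly cited formulation log2(QV) = max over width n of min(n, d(n)), where d(n) is the deepest width-n circuit the machine runs reliably; the depth figures below are made up):

# Hypothetical achievable circuit depth per circuit width for some device.
achievable_depth = {2: 12, 4: 9, 8: 6, 16: 3}   # illustrative numbers only

# log2(QV) = max over width n of min(n, d(n))
log2_qv = max(min(n, d) for n, d in achievable_depth.items())
print("quantum volume:", 2 ** log2_qv)          # -> 64 for these numbers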

https://www.research.ibm.com/ibm-q/learn/what-is-quantum-computing/


  • Quantum Computation

Rather than store information using bits represented by 0s or 1s as conventional digital computers do, quantum computers use quantum bits, or qubits, to encode information as 0s, 1s, or both at the same time.
Capabilities
Optimization
Machine learning
Sampling / Monte Carlo
Pattern recognition and anomaly detection
Cyber security
Image analysis
Financial analysis
Software / hardware verification and validation
Bioinformatics / cancer research
https://www.dwavesys.com/quantum-computing

Tuesday, June 28, 2016

Windows Analysis


  • Windows Attack Surface

Attack Surface Analyzer takes a snapshot of your system state before and after the installation of product(s) and displays the changes to a number of key elements of the Windows attack surface.

This allows:
- Developers to view changes in the attack surface resulting from the introduction of their code on to the Windows platform
- IT Professionals to assess the aggregate Attack Surface change by the installation of an organization's line of business applications
- IT Security Auditors to evaluate the risk of a particular piece of software installed on the Windows platform during threat risk reviews
- IT Security Incident Responders to gain a better understanding of the state of a system's security during investigations (if a baseline scan was taken of the system during the deployment phase)
https://blogs.microsoft.com/cybertrust/2012/08/02/microsofts-free-security-tools-attack-surface-analyzer/

Collecting attack surface information with .NET Framework 4 installed
C1. Download and install Attack Surface Analyzer on a machine with a freshly installed version of a supported operating system, as listed in the System Requirements section. Attack Surface Analyzer works best with a clean (freshly built) system; running it on a system that is not freshly built requires more time for scanning and analysis.
C2. Install any software prerequisite packages before the installation of your application.
C3. Run Attack Surface Analyzer from the Start menu or command-line. If Attack Surface Analyzer is launched from a non-elevated process, UAC will prompt you that Attack Surface Analyzer needs to elevate to Administrative privileges.
C4. When the Attack Surface Analyzer window is displayed, ensure the "Run new scan" action is selected, confirm the directory and filename you would like the Attack Surface data saved to and click Run Scan.
C5. Attack Surface Analyzer then takes a snapshot of your system state and stores this information in a Microsoft Cabinet (CAB) file. This scan is known as your baseline scan.
C6. Install your product(s), enabling as many options as possible and being sure to include options that you perceive may increase the attack surface of the machine. Examples include installing a Windows service, enabling access through the Windows Firewall, or installing drivers.
C7. Run your application.
C8. Repeat steps C3 through C5. This scan will be known as your product scan.

Analyzing the Results
Note: You can either analyze the results on the computer you generated your scans from, or copy the CAB files to another computer for analysis. To perform analysis and report generation, a machine with .NET Framework 4 is required:
A1. Run Attack Surface Analyzer from the Start menu. If Attack Surface Analyzer is launched from a non-elevated process, UAC will prompt you that Attack Surface Analyzer needs to elevate to Administrative privileges. Note: To view the full list of command line options, including generating the report from the command line, execute "Attack Surface Analyzer.exe" /? from the console.
A2. Choose the "Generate Report" action and specify your baseline and product scan CAB files. Note: Make sure the correct CAB files are selected for both the baseline and product scans, then generate the report. Attack Surface Analyzer will inspect the contents of these files to identify changes in system state and, where applicable, important security issues that should be investigated. If a web browser is installed on the machine performing the analysis, it should automatically load Attack Surface Analyzer's report, which is an HTML file.
A3. Review the report to ensure the changes are the minimum required for your product to function and are consistent with your threat model.

After addressing issues generated from the tool you should repeat the scanning process on a clean installation of Windows (that is, without the artifacts of your previous installation) and re-analyze the results. As you may need to repeat the process a number of times, we recommend using a virtual machine with "undo disks", differencing disks or the ability to revert to a prior virtual machine snapshot/configuration to perform your attack surface assessments.
https://www.microsoft.com/en-us/download/details.aspx?id=24487

Monday, June 27, 2016

Mind Mapping

FreeMind - free mind mapping software 
FreeMind is premier free mind-mapping software written in Java. Recent development has hopefully turned it into a high-productivity tool.
http://freemind.sourceforge.net/wiki/index.php/Main_Page


  • Freemind Free Mind Mapping Software Tutorial Mind Map 

https://www.youtube.com/watch?v=vlUdQTeZiNo


  • FreeMind Tutorial 

https://www.youtube.com/watch?v=grut_2cardM

Disaster Recovery-as-a-Service (DRaaS)

  • An Information System Contingency Plan (ISCP) is a pre-established plan for restoration of the services of a given information system after a disruption.

  • Stage 2: Disaster occurs
At a given point in time, disaster strikes and systems need to be recovered. At this point the Recovery Point Objective (RPO) determines the maximum acceptable amount of data loss, measured in time. For example, the maximum tolerable data loss might be 15 minutes.

Stage 3: Recovery
At this stage the systems are recovered and back online, but not yet ready for production. The Recovery Time Objective (RTO) determines the maximum tolerable amount of time needed to bring all critical systems back online. This covers, for example, restoring data from backup or fixing a failure. In most cases this part is carried out by system, network, and storage administrators.

Stage 4: Resume Production
At this stage all systems are recovered, the integrity of the system and data is verified, and all critical systems can resume normal operations. The Work Recovery Time (WRT) determines the maximum tolerable amount of time needed to verify the system and/or data integrity. This could be, for example, checking the databases and logs, and making sure the applications or services are running and available. In most cases those tasks are performed by application and database administrators. When all systems affected by the disaster are verified and/or recovered, the environment is ready to resume production.

The sum of RTO and WRT is defined as the Maximum Tolerable Downtime (MTD): the total amount of time that a business process can be disrupted without causing unacceptable consequences. This value should be defined by the business management team or someone like the CTO, CIO, or IT manager.
This is of course a simple example of a Business Continuity/Disaster Recovery plan and should be included in your Business Impact Analysis (BIA).

https://defaultreasoning.com/2013/12/10/rpo-rto-wrt-mtdwth/
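
The timeline arithmetic above (RPO, RTO, WRT, MTD) is easy to make concrete; a minimal plain-Python sketch with illustrative hour values:

# Illustrative recovery-timeline arithmetic (values in hours).
rpo = 0.25   # Recovery Point Objective: up to 15 minutes of data loss
rto = 4.0    # Recovery Time Objective: bring critical systems back online
wrt = 2.0    # Work Recovery Time: verify system/data integrity

mtd = rto + wrt   # Maximum Tolerable Downtime = RTO + WRT
print(f"MTD = {mtd} hours; tolerable data loss = {rpo * 60:.0f} minutes")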


  • Recovery Point Objective (RPO)

Maximum Tolerable Downtime (MTD)
Recovery Time Objective (RTO)


Recovery Point Objective (RPO)
The RPO is defined by business and departmental managers, and any designated data owners. It is described as a period of time. For example, the recovery point may be "one hour", "end of previous business day", or "one week". It is derived based on:
How often data is updated;
How much expense and/or effort would be required of your users to reconstruct data created or updated since the last backup, if possible;
If reconstruction wouldn't be possible, how much recent data your company can tolerate losing permanently, considering the likelihood of a catastrophic data loss event.

Maximum Tolerable Downtime (MTD)
MTD is the maximum amount of time an application or data can be unavailable to users, as specified by business management. This is based on the impact on business functions, and analysis of anticipated lost revenue and other costs that are incurred for every hour, day, or week a given application or database might be unavailable.
https://www.jdfoxexec.com/resource-center/articles/mtd-rto-rpo/
https://en.wikipedia.org/wiki/Information_System_Contingency_Plan


  • 5 WAYS TO TEST IT DISASTER RECOVERY PLANS


the five types of disaster recovery tests:

Paper test: Individuals read and annotate recovery plans.

Walkthrough test: Groups walk through plans to identify issues and changes.

Simulation: Groups go through a simulated disaster to identify whether emergency response plans are adequate.

Parallel test: Recovery systems are built/set up and tested to see if they can perform actual business transactions to support key processes. Primary systems still carry the full production workload.

Cutover test: Recovery systems are built/set up to assume the full production workload. You disconnect primary systems

https://www.dummies.com/programming/networking/5-ways-to-test-it-disaster-recovery-plans/


  • TESTING DISASTER RECOVERY PLANS


Structured Walk-Through Testing
During a structured walk-through test, disaster recovery team members meet to verbally walk through the specific steps of each component of the disaster recovery process as documented in the disaster recovery plan. The purpose of the structured walk-through test is to confirm the effectiveness of the plan and to identify gaps, bottlenecks or other weaknesses in the plan.
Checklist Testing
A checklist test determines if sufficient supplies are stored at the backup site, telephone number listings are current, quantities of forms are adequate, and a copy of the recovery plan and necessary operational manuals are available.
Simulation Testing
During this test, the organization simulates a disaster so normal operations will not be interrupted. A disaster scenario should take into consideration the purpose of the test, objectives, type of test, timing, scheduling, duration, test participants, assignments, constraints, assumptions, and test steps.
Parallel Testing
A parallel test can be performed in conjunction with the checklist test or simulation test. Under this scenario, historical transactions, such as yesterday’s transactions, are processed against the preceding day’s backup files at the contingency processing site or hot-site. All reports produced at the alternate site for the current business date should agree with those reports produced at the existing processing site.
Full-interruption Testing
A full-interruption test activates the total disaster recovery plan. This test is costly and could disrupt normal operations.
https://www.drj.com/drj-world-archives/dr-plan-testing/testing-disaster-recovery-plans.html
  • Disaster Recovery as a Service (DRaaS) is the replication and hosting of physical or virtual servers by a third-party to provide failover in the event of a man-made or natural catastrophe.

 Typically, DRaaS requirements and expectations are documented in a service-level agreement (SLA), and the third-party vendor provides failover to a cloud computing environment, either through a contract or on a pay-per-use basis.
 http://whatis.techtarget.com/definition/disaster-recovery-as-a-service-DRaaS


  •  Veeam® enables Disaster Recovery-as-a-Service (DRaaS) as part of a comprehensive availability strategy, embracing investments made in your datacenter and extending them through the hybrid cloud.

 https://www.veeam.com/disaster-recovery-as-a-service-draas.html


  • High Availability

High availability is a feature which provides redundancy and fault tolerance

What is Redundancy
Redundancy is basically extra hardware or software that can be used as a backup if the main hardware or software fails. Redundancy can be achieved in an automated fashion via clustering, failover, RAID, load balancing, and high availability. A higher layer of redundancy is achieved when the backup device is completely separate from the primary device: for example, a backup Internet line provided by a different ISP (a completely separate physical link and connection from the primary Internet connection), or a redundant piece of hardware that resides in another building.
http://www.internet-computer-security.com/Firewall/Failover.html


  • HIGH AVAILABILITY

A High Availability system is one that is designed to be available 99.999% of the time, or as close to it as possible. Usually this means configuring a failover system that can handle the same workloads as the primary system.
FAULT TOLERANCE
A Fault Tolerant system is extremely similar to HA, but goes one step further by guaranteeing zero downtime. HA still comes with a small portion of downtime, hence the ideal of a perfect HA strategy reaching “five nines” rather than 100% uptime. The time it takes for the intermediary layer, like the load balancer or hypervisor, to detect a problem and restart the VM can add up to minutes or even hours over the course of yearly runtime.
https://www.greenhousedata.com/blog/high-availability-vs-fault-tolerance-vs-disaster-recovery
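
A quick back-of-the-envelope calculation shows what those availability figures allow in practice (plain Python):

# Allowed downtime per (non-leap) year at different availability levels.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} uptime -> {downtime:,.1f} minutes of downtime/year")
# "Five nines" (99.999%) works out to roughly 5.3 minutes per year.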
  •  Disaster Recovery as a Service (DRaaS) from Node4 is ideal for companies who need continuous protection of the data and applications that are essential for the operation of their critical business functions. DRaaS is delivered using award-winning software from Zerto to replicate your virtual machines and maintain standby copies on N4Compute, our highly resilient Cloud virtualisation platform.

 http://www.node4.co.uk/cloud/draas/

  • Fujitsu Backup as a Service (BaaS) provides a resilient, cloud-based backup and recovery service. Fujitsu Backup as a Service supports full system recovery, providing much more than folder and file backup and recovery. Delivered from FUJITSU Cloud, it offers the levels of speed, convenience and reliability demanded by organizations today. 

 http://www.fujitsu.com/global/services/infrastructure/iaas/baas/


  •  Backup as a Service (BaaS) provides backup and recovery operations from the cloud. The cloud-based BaaS provider maintains necessary backup equipment, applications, process and management in their data center. The customer will have some on-site installation – an appliance and backup agents are common – but there is no need to buy backup servers and software, run upgrades and patches, or purchase dedupe appliances.



  •  DRaaS/RaaS. Disaster Recovery as a Service, or more simply Recovery as a Service, offers more recovery options than the backup recovery of BaaS. BaaS will recover your backed up files, and RaaS recovers your files and applications within contracted RTO and/or RPO periods. It is more costly than BaaS but can be a good option if you do not want to perform your own storage infrastructure recovery in case of disaster.

 http://www.datamation.com/cloud-computing/backup-as-a-service-to-baas-or-not-to-baas-1.html


  •  Backup as a service (BaaS) is an approach to backing up data that involves purchasing backup and recovery services from an online data backup provider. Instead of performing backup with a centralized, on-premises IT department, BaaS connects systems to a private, public or hybrid cloud managed by the outside provider. Backup as a service is easier to manage than other offsite services. Instead of worrying about rotating and managing tapes or hard disks at an offsite location, data storage administrators can offload maintenance and management to the provider.

 http://searchdatabackup.techtarget.com/definition/backup-as-a-service-BaaS

  • What is disaster recovery?

Disaster recovery (DR) consists of IT technologies and best practices designed to prevent or minimize data loss and business disruption resulting from catastrophic events—everything from equipment failures and localized power outages to cyberattacks, civil emergencies, criminal or military attacks, and natural disasters.

Business continuity planning
Business continuity planning creates systems and processes to ensure that all areas of your enterprise will be able to maintain essential operations or be able to resume them as quickly as possible in the event of a crisis or emergency. Disaster recovery planning is the subset of business continuity planning that focuses on recovering IT infrastructure and systems.

Disaster recovery planning
Business impact analysis
The creation of a comprehensive disaster recovery plan begins with business impact analysis. When performing this analysis, you’ll create a series of detailed disaster scenarios that can then be used to predict the size and scope of the losses you’d incur if certain business processes were disrupted.
Risk analysis
Assessing the likelihood and potential consequences of the risks your business faces is also an essential component of disaster recovery planning
Prioritizing applications
Separate your systems and applications into three tiers, depending on how long you could stand to have them be down and how serious the consequences of data loss would be.
Documenting dependencies
The next step in disaster recovery planning is creating a complete inventory of your hardware and software assets. It’s essential to understand critical application interdependencies at this stage
Establishing recovery time objectives, recovery point objectives, and recovery consistency objectives
By considering your risk and business impact analyses, you should be able to establish objectives for how long you'd need it to take to bring systems back up, how much data you could stand to lose, and how much data corruption or deviation you could tolerate.
Your recovery time objective (RTO) is the maximum amount of time it should take to restore application or system functioning following a service disruption.
Your recovery point objective (RPO) is the maximum age of the data that must be recovered in order for your business to resume regular operations.
A recovery consistency objective (RCO) is established in the service-level agreement (SLA) for continuous data protection services. It is a metric that indicates how many inconsistent entries in business data from recovered processes or systems are tolerable in disaster recovery situations
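A minimal sketch of the RCO arithmetic, using the commonly cited formulation RCO = 1 - (inconsistent entries / total entries); the counts are illustrative:

# Recovery Consistency Objective for a recovered dataset (illustrative counts).
entries_total = 100_000
entries_inconsistent = 120   # entries found corrupt or deviating after recovery

rco = 1 - entries_inconsistent / entries_total
print(f"RCO = {rco:.4%}")    # 99.8800% of entries consistent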
Regulatory compliance issues
All disaster recovery software and solutions that your enterprise has established must satisfy any data protection and security requirements that you're mandated to adhere to.
Choosing technologies
Backups serve as the foundation upon which any solid disaster recovery plan is built.
Choosing recovery site locations
On the one hand, a copy of your data should be stored somewhere that’s geographically distant enough from your headquarters or office locations that it won’t be affected by the same seismic events, environmental threats, or other hazards as your main site. On the other hand, backups stored offsite always take longer to restore from than those located on-premises at the primary site, and network latency can be even greater across longer distances.
Continuous testing and review
Simply put, if your disaster recovery plan has not been tested, it cannot be relied upon
All employees with relevant responsibilities should participate in the disaster recovery test exercise, which may include maintaining operations from the failover site for a period of time.
Disaster Recovery-as-a-Service (DRaaS)
Disaster-Recovery-as-a-Service (DRaaS) is one of the most popular and fast-growing managed IT service offerings available today. Your vendor will document RTOs and RPOs in a service-level agreement (SLA) that outlines your downtime limits and application recovery expectations.
Cloud DR
Most on-premises DR solutions will incur costs for hardware, power, labor for maintenance and administration, software, and network connectivity. In addition to the upfront capital expenditures involved in the initial setup of your DR environment, you'll need to budget for regular software upgrades. Because your DR solution must remain compatible with your primary production environment, you'll want to ensure that your DR solution has the same software versions. Depending upon the specifics of your licensing agreement, this might effectively double your software costs.
Not only does moving to a DRaaS subscription reduce your hardware and software expenditures, it can lower your labor costs by moving the burden of maintaining the failover site to the vendor.
https://www.ibm.com/cloud/learn/disaster-recovery



  •     Section 1. Example: Major goals of a disaster recovery plan


    Here are the major goals of a disaster recovery plan.
    Section 2. Example: Personnel

    You can use the tables in this topic to record your data processing personnel. You can include a copy of the organization chart with your plan.
    Section 3. Example: Application profile

    You can use the Display Software Resources (DSPSFWRSC) command to complete the table in this topic.
    Section 4. Example: Inventory profile

    You can use the Work with Hardware Products (WRKHDWPRD) command to complete the table in this topic.
    Section 5. Information services backup procedures

    Use these procedures for information services backup.
    Section 6. Disaster recovery procedures

    For any disaster recovery plan, these three elements should be addressed.
    Section 7. Recovery plan for mobile site

    This topic provides information about how to plan your recovery task at a mobile site.
    Section 8. Recovery plan for hot site

    An alternate hot site plan should provide for an alternative (backup) site. The alternate site has a backup system for temporary use while the home site is being reestablished.
    Section 9. Restoring the entire system

    You can learn how to restore the entire system.
    Section 10. Rebuilding process

    The management team must assess the damage and begin the reconstruction of a new data center.
    Section 11. Testing the disaster recovery plan

    In successful contingency planning, it is important to test and evaluate the plan regularly.
    Section 12. Disaster site rebuilding

    Use this information to do disaster site rebuilding.
    Section 13. Record of plan changes

    Keep your plan current, and keep records of changes to your configuration, your applications, and your backup schedules and procedures.

https://www.ibm.com/support/knowledgecenter/en/ssw_ibm_i_73/rzarm/rzarmdisastr.htm

Tuesday, June 21, 2016

Microsoft Active Directory Topology Diagrammer

  • ad topology diagrammer
https://www.youtube.com/watch?v=1hF9JJ6xHWI

The Microsoft Active Directory Topology Diagrammer reads an Active Directory configuration using LDAP, and then automatically generates a Visio diagram of your Active Directory and/or your Exchange Server topology. The diagrams may include domains, sites, servers, organizational units, DFS-R, administrative groups, routing groups, and connectors, and can be changed manually in Visio if needed.
https://www.microsoft.com/en-us/download/details.aspx?id=13380#Instructions

size on disk

  • du gives two different results for the same file
When you run du without --apparent-size, you're getting the size based on the amount of disk block space used, not the actual space consumed by the file(s).
This is a common problem when you put the same data on 2 different HDDs. You'll want to run the du command with an additional switch, assuming it has it, which it should given these are Linux nodes.
http://unix.stackexchange.com/questions/106275/du-gives-two-different-results-for-the-same-file
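
The same distinction is visible directly from os.stat on Linux; a minimal sketch (st_blocks is counted in 512-byte units per POSIX):

import os, sys

# Apparent size vs. allocated ("on disk") size for a file on Linux.
st = os.stat(sys.argv[1])
apparent = st.st_size          # bytes the file claims to contain
on_disk = st.st_blocks * 512   # bytes actually allocated on disk
print(f"apparent: {apparent} bytes, on disk: {on_disk} bytes")
# Sparse files show on_disk < apparent; small files usually the reverse,
# because allocation is rounded up to whole filesystem blocks.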

  • What’s the difference between my file size and the size on disk?
When you right-click to view the properties of a folder, the property sheet includes two values: Size and Size on disk.
The values reported by Size and Size on disk aren’t meant to be a byte-for-byte accounting of the total impact of a directory on your disk free space. They’re just a rough estimate based on the assumption that most files are of the boring variety. By that, I mean no hard links and negligible use of alternate data streams. If you have a directory with numerous hard links—such as the Windows directory itself, for example—the values will be way off.

Monday, June 20, 2016

Agile Vs. Lean: Yeah Yeah, What’s the Difference?

  • Lean
Lean comes from Lean Manufacturing and is a set of principles for achieving quality, speed & customer alignment
Agile
Agile refers to a set of values and principles put forth in the Agile Manifesto. The Manifesto was a reaction against heavyweight methodologies that were popular, yet crippling software projects from actually doing what they needed to do
http://hackerchick.com/agile-vs-lean-yeah-yeah-whats-the-difference

Credential Theft

  • Credential Theft and How to Secure Credentials
Prevent network logon for local accounts
Prevent access to in-memory credentials
Prevent credentials from remaining in-memory when connecting remotely
Leverage protected users and control privileged users
https://technet.microsoft.com/en-us/security/dn920237.aspx

  • Unofficial Guide to Mimikatz & Command Reference
Mimikatz is one of the best tools to gather credential data from Windows systems
https://adsecurity.org/?page_id=1821
  • Credential stuffing
Credential stuffing is the automated injection of breached username/password pairs in order to fraudulently gain access to user accounts. This is a subset of the brute force attack category: large numbers of spilled credentials are automatically entered into websites until they are potentially matched to an existing account, which the attacker can then hijack for their own purposes.

Anatomy of Attack

    The attacker acquires spilled usernames and passwords from a website breach or password dump site.
    The attacker uses an account checker to test the stolen credentials against many websites (for instance, social media sites or online marketplaces).
    Successful logins (usually 0.1-0.2% of the total login attempts) allow the attacker to take over the account matching the stolen credentials.
    The attacker drains stolen accounts of stored value, credit card numbers, and other personally identifiable information
    The attacker may also use account information going forward for other nefarious purposes (for example, to send spam or create further transactions)

https://www.owasp.org/index.php/Credential_stuffing
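
To put that 0.1-0.2% success rate in perspective, a quick sketch (the list size is illustrative):

# Expected account takeovers from one credential-stuffing run (illustrative).
stolen_pairs = 1_000_000
for success_rate in (0.001, 0.002):   # the 0.1-0.2% quoted above
    print(f"{success_rate:.1%} -> ~{int(stolen_pairs * success_rate):,} accounts hijacked")
# 0.1% -> ~1,000 accounts; 0.2% -> ~2,000 accounts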

Difference Between CPU and MicroProcessor

  • Difference Between CPU and MicroProcessor
The technology of the microprocessor has become so advanced that it has the ability to contain not just one but up to four CPUs inside it
The GPU (Graphics Processing Unit) is also contained in a microprocessor
All CPUs are microprocessors, but not all microprocessors are CPUs.
http://www.differencebetween.net/technology/difference-between-cpu-and-microprocessor

  • The CPU is combined with memory and I/O on the same chip, creating a complete computer on a single chip. This is called a microcontroller (uC).
http://electronics.stackexchange.com/questions/44740/whats-the-difference-between-a-microprocessor-and-a-cpu

Manually creating a shortcut for the Web Start client

  • Manually creating a shortcut for the Web Start client
On Windows, the Web Start executable file for the default Java™ JVM is copied to a Windows system directory. When you let Web Start create a shortcut for launching the desktop client, it uses the file in the system directory as the target. You can also create a shortcut manually.
http://www.ibm.com/support/knowledgecenter/SSATHD_7.7.0/com.ibm.itm.doc_6.3/install/webstart_shortcut.htm

Virtualization Security

  • Microsoft fixes Hyper-V bug in Windows
Guests on a Hyper-V system could trigger the flaw in the CPU chip set to issue instructions that could place the host system into a nonresponsive state, resulting in a denial-of-service condition for guest operating systems. The attacker would have to first secure kernel-mode code execution privileges on the guest operating system in order to trigger this denial-of-service condition.
Unlike Xen and VMware, Hyper-V functions only on systems with hardware support for virtualization, such as servers with Intel VT-x and AMD-V hardware virtualization extensions. As a result, Hyper-V is typically not at risk for escape attacks, where the attackers target the guest system in order to compromise the host.
http://www.infoworld.com/article/3005238/security/microsoft-fixes-hyper-v-bug-in-windows.html


  • Common Virtualization Vulnerabilities and How to Mitigate Risks

VM escape: A guest OS escapes from its VM encapsulation to interact directly with the hypervisor. This gives the attacker access to all VMs and, if guest privileges are high enough, the host machine as well. Although few if any instances are known, experts consider VM escape to be the most serious threat to VM security.

How to Mitigate Risk
VM traffic monitoring: The ability to monitor VM backbone network traffic is critical. Conventional methods will not detect VM traffic because it is controlled by internal soft switches. However, hypervisors have effective monitoring tools that should be enabled and tested.
https://pentestlab.wordpress.com/2013/02/25/common-virtualization-vulnerabilities-and-how-to-mitigate-risks/

  • Top Virtualization Security Mistakes (and How to Avoid Them)
Mistake #1: Misconfiguring virtual hosting platforms, guests, and networks
Mistake #2: Failure to properly separate duties and deploy least privilege controls
Mistake #3: Failure to integrate into change/lifecycle management
Mistake #4: Failure to educate other groups, particularly risk management and compliance staff 
Mistake #5: Lack of availability or integration with existing tools and policies
Mistake #6: Lack of VM visibility across the enterprise
Mistake #7: Failure to work with an open ecosystem
Mistake #8: Failure to coordinate policy between virtual machines and network connections   
Mistake #9: Failure to consider hidden costs
Mistake #10: Failure to consider user-installed VMs
https://www.sans.org/reading-room/whitepapers/analyst/top-virtualization-security-mistakes-and-avoid-them-34800

  • Kernel exploits
Unlike in a VM, the kernel is shared among all containers and the host, magnifying the importance of any vulnerabilities present in the kernel. Should a container cause a kernel panic, it will take down the whole host. In VMs, the situation is much better: an attacker would have to route an attack through both the VM kernel and the hypervisor before being able to touch the host kernel.
Denial-of-service attacks
If one container can monopolize access to certain resources–including memory and more esoteric resources such as user IDs (UIDs)—it can starve out other containers on the host, resulting in a denial-of-service (DoS), whereby legitimate users are unable to access part or all of the system.
Container breakouts
By default, users are not namespaced, so any process that breaks out of the container will have the same privileges on the host as it did in the container; if you were root in the container, you will be root on the host. This also means that you need to worry about potential privilege escalation attacks, whereby a user gains elevated privileges such as those of the root user, often through a bug in application code that needs to run with extra privileges.
Poisoned images
If an attacker can trick you into running his image, both the host and your data are at risk. Similarly, you want to be sure that the images you are running are up-to-date and do not contain versions of software with known vulnerabilities.
https://www.oreilly.com/ideas/five-security-concerns-when-using-docker

User and Entity Behavior Analytics ("UEBA")

  • User and Entity Behavior Analytics ("UEBA")
User Behavior Analytics ("UBA") as defined by Gartner, is a cybersecurity process about detection of insider threats, targeted attacks, and financial fraud. UBA solutions look at patterns of human behavior, and then apply algorithms and statistical analysis to detect meaningful anomalies from those patterns - anomalies that indicate potential threats
User and Entity Behavior Analytics ("UEBA"). This expanded definition includes devices, applications, servers, data, or anything with an IP address.
https://en.wikipedia.org/wiki/User_behavior_analytics

  • user behavior analytics (UBA)
User behavior analytics (UBA) is the tracking, collecting and assessing of user data and activities using monitoring systems.
user behavior analytics tools have more advanced profiling and exception monitoring capabilities than SIEM systems and are used for two main functions. First, UBA tools determine a baseline of normal activities specific to the organization and its individual users. Second, they identify deviations from normal. UBA uses big data and machine learning algorithms to assess these deviations in near-real time.
http://searchsecurity.techtarget.com/definition/user-behavior-analytics-UBA
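
A toy version of the baseline-then-deviation idea described above (a deliberately simple z-score sketch; real UBA products use far richer models and features):

import statistics

# Baseline: a user's normal daily logon counts (illustrative values).
baseline = [21, 18, 25, 22, 19, 24, 20, 23, 22, 21]
today = 70

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)
z = (today - mean) / stdev
if abs(z) > 3:   # flag strong deviations from the user's own baseline
    print(f"anomaly: {today} logons is {z:.1f} standard deviations off baseline")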


  • Defending Against Pass-The-Ticket Attacks
How Pass-the-Ticket Attacks Are Launched
Pass-the-Ticket attacks are typically launched in one of two ways:
The hacker steals a Ticket Granting Ticket or Service Ticket from a Windows machine and uses the stolen ticket to impersonate a user, or
The hacker steals a Ticket Granting Ticket or Service Ticket by compromising a server that performs authorization on the users’ behalf.
http://www.identityweek.com/defending-against-pass-the-ticket-attacks/

  • Windows Credentials Editor (WCE) – List, Add & Change Logon Sessions
Perform Pass-the-Hash on Windows
‘Steal’ NTLM credentials from memory (with and without code injection)
‘Steal’ Kerberos Tickets from Windows machines
Use the ‘stolen’ kerberos Tickets on other Windows or Unix machines to gain access to systems and services
Dump cleartext passwords stored by Windows authentication packages
http://www.darknet.org.uk/2015/02/windows-credentials-editor-wce-list-add-change-logon-sessions


  • Windows Credentials Editor
Windows Credentials Editor (WCE) is a security tool to list logon sessions and add, change, list and delete associated credentials (ex.: LM/NT hashes, plaintext passwords and Kerberos tickets).
This tool can be used, for example, to perform pass-the-hash on Windows, obtain NT/LM hashes from memory (from interactive logons, services, remote desktop connections, etc.), obtain Kerberos tickets and reuse them in other Windows or Unix systems and dump cleartext passwords entered by users at logon.
WCE is a security tool widely used by security professionals to assess the security of Windows networks via Penetration Testing. It supports Windows XP, 2003, Vista, 7, 2008 and Windows 8.
http://www.ampliasecurity.com/research/windows-credentials-editor/

  • Using WCE (Windows Credential Editor)

C:\Users\Ale\Desktop>wce -l

WCE v1.4beta (X64) (Windows Credentials Editor) - (c) 2010-2013 Amplia Security
- by Hernan Ochoa (hernan@ampliasecurity.com)

Ale:WIN71_64:960407EE2F0ED879AAD3B435B51404EE:95947E88DC144165EEC12CC2039E56B6

C:\Users\Ale\Desktop>wce -w

WCE v1.4beta (X64) (Windows Credentials Editor) - (c) 2010-2013 Amplia Security
- by Hernan Ochoa (hernan@ampliasecurity.com)

Ale\WIN71_64:ceh123!
https://alexandreborges.org/2014/02/14/using-wce-windows-credential-editor


  • Pass the hash
In cryptanalysis and computer security, pass the hash is a hacking technique that allows an attacker to authenticate to a remote server/service by using the underlying NTLM and/or LanMan hash of a user's password, instead of requiring the associated plaintext password as is normally the case.
https://en.wikipedia.org/wiki/Pass_the_hash
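
The NT hash referenced here is simply MD4 over the UTF-16LE-encoded password, which is part of why it is so readily reusable; a minimal sketch (MD4 may require OpenSSL's legacy provider on newer systems):

import hashlib

# NT hash = MD4(UTF-16LE(password)); the password below is illustrative.
password = "Password1"
nt_hash = hashlib.new("md4", password.encode("utf-16le")).hexdigest().upper()
print(nt_hash)
# In pass-the-hash, this value alone -- no plaintext -- authenticates over NTLM.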


  • UEBA is a new class of security technology that is designed to identify next-generation security threats that have penetrated traditional firewalls and other perimeter systems.
"User and Entity Behavior Analytics offers profiling and anomaly detection based on a range of analytics approaches, usually using a combination of basic analytics methods and advanced analytics…
Examples of these activities include unusual access to systems and data by trusted insiders or third parties, and breaches by external attackers evading preventative security controls.
The Niara behavioral analytics solution seamlessly integrates with the ClearPass network security platform to create the industry's most complete visibility and attack detection system.
http://www.marketwired.com/press-release/hpe-acquires-niara-to-enhance-security-at-the-intelligent-edge-nyse-hpe-2192822.htm

pcap analysis

  • Exposing One of China’s Cyber Espionage Units
APT1: Attack Lifecycle
They begin with aggressive spear phishing, proceed to deploy custom digital weapons, and end by exporting compressed bundles of files to China, before beginning the cycle again. These attacks fit into a cyclic pattern of activity that we will describe in this section within the framework of Mandiant's Attack Lifecycle model. In each stage we will discuss APT1's specific techniques to illustrate their tenacity and the scale at which they operate.
http://intelreport.mandiant.com/Mandiant_APT1_Report.pdf

  • 8 cyber security technologies DHS is trying to commercialize

REnigma
This software runs malware within a virtual machine and records what it does so it can be played back and analyzed in detail.

Socrates
This software platform automatically seeks patterns in data sets, and can tease out those that represent cyber threats.

PcapDB
This is a software database system that captures packets to analyze network traffic by first organizing packet traffic into flows.

REDUCE
This is a software analysis tool to reveal relationships between malware samples and to develop signatures that can be used to identify threats.

Dynamic Flow Isolation
DFI leverages software defined networking to apply security policies on-demand based on current operational state or business needs.

TRACER
Timely Randomization Applied to Commodity Executables at Runtime (TRACER) is a means to alter the internal layout and data of closed-source Windows applications such as Adobe Reader, Internet Explorer, Java and Flash.

FLOWER
Network FLOW AnalyzER inspects IP packet headers to gather data about bi-directional flows that can be used to identify baseline traffic and abnormal flows as a way to spot potential breaches and insider threats.

SilentAlarm
This platform analyzes network behaviors to identify likely malicious behavior to stop attacks including zero-days for which there are no signatures.

http://www.networkworld.com/article/3056624/security/8-cyber-security-technologies-dhs-is-trying-to-commercialize.html


  • Inspection of packet captures (PCAP) for signs of intrusions is a typical everyday task for security analysts and an essential skill analysts should develop. Malware has many ways to hide its activities at the system level (i.e., rootkits), but in the end it must leave a visible trace at the network level, regardless of whether it is obfuscated or encrypted. This paper guides the reader through a structured way to analyze a PCAP trace and dissect it using the Bro Network Security Monitor (Bro) to facilitate active threat hunting and timely detection of possible intrusions. Detection itself can be broken down into two major parts: reactive and proactive. At the network level (the scope of this paper), one widespread reactive-detection example is SNORT (SANS, n.d.), which used to be an effective approach but has two significant shortcomings. First, SNORT depends on static signatures, which determined attackers can easily bypass. Second, security analysts operate in a more passive mode, waiting for something malicious to happen that might, or might not, trigger an alert, and only then does an investigation start. Attacks have evolved and require more than traditional NIDS (reactive detection) to detect adversaries (Ashford, n.d.). Active detection (aka threat hunting) was introduced to fill this gap.

https://www.sans.org/reading-room/whitepapers/threathunting/hunting-threats-packet-captures-37765


  • Source Routing
Source routing is a technique whereby the sender of a packet can specify the route that the packet should take through the network. Normally, as a packet travels through the network, each router examines the destination IP address and chooses the next hop to forward the packet to; in source routing, the "source" (i.e., the sender) makes some or all of these decisions. Most network administrators block all source-routed packets at their border routers, and unless a network depends on it, source routing should be disabled.
Attackers can use source routing to probe the network by forcing packets into specific parts of the network. Using source routing, an attacker can collect information about a network's topology, or other information that could be useful in performing an attack. During an attack, an attacker could use source routing to direct packets to bypass existing security restrictions.
https://superuser.com/questions/924633/why-doesnt-ping-j-work

  • Source routing has been around for a very long time. In fact, it's a part of the specification of the IP protocol. Yet many network engineers fail to understand the potential dangers in allowing source-routed packets to pass through internal routers.
http://www.enclaveforensics.com/Blog/files/dbe04629c14a2d07495a38bbf2fc98d9-5.html

  • Wireshark
Wireshark is the world's foremost network protocol analyzer. It lets you see what's happening on your network at a microscopic level.
http://www.wireshark.org
  • Wireshark shows all the action in the bottom pane like this:
    Frame (Physical Layer)
    Ethernet II (Data Link Layer)
    Internet Protocol Version 4 (Network Layer)
    User Datagram Protocol (Transport Layer)
    Domain Name System (response) (Application Layer)

So here’s the big review:

    Routers are layer 3 devices because they make forwarding decisions based on layer 3 addresses.
    Switches are considered layer 2 devices because they make forwarding decisions based on layer 2 addresses.
    Hubs, NICS, Wi-Fi cards, cables, and connectors are at layer 1.

Layer 2 has MAC addresses; the NIC is also a Layer 2 device because it has the MAC address. Switches are bridges with more ports; they also work at Layer 2 since they understand physical addresses.

At Layer 3 we use IPv4 and IPv6.  Routers live here and the protocol data units (PDUs) used here are called Packets.

And here’s a quick review of the terms:

    MAC address and Physical Address and Layer 2 addresses are the same thing.
    Frames are Protocol Data Units (PDUs) at Layer 2
    Packets are PDUs at Layer 3
    Segments are PDUs at Layer 4
    Data is just called a PDU at the Application Layer


http://www.fixedbyvonnie.com/2015/05/networking-101-layers-part-3-of-3/#.WhVetjdRWUk


  • wireshark
Use ping -l 2500 <default gateway address> to ping the default gateway with a 2,500-byte payload (-l sets the send buffer size in the Windows ping). Notice that because the default maximum transmission unit (MTU) for Ethernet frames is 1,500 bytes, this should generate fragmented packets.
https://en.wikiversity.org/wiki/Wireshark/IPv4_fragments
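
The fragment arithmetic behind that (a plain sketch assuming a standard 20-byte IPv4 header and 8-byte ICMP header):

# How a 2,500-byte ping payload fragments over a 1,500-byte Ethernet MTU.
MTU = 1500
IP_HEADER = 20
ICMP_HEADER = 8
payload = 2500

total_ip_payload = ICMP_HEADER + payload         # 2508 bytes to carry
per_fragment = (MTU - IP_HEADER) // 8 * 8        # offsets are 8-byte aligned -> 1480
fragments = -(-total_ip_payload // per_fragment) # ceiling division -> 2
print(fragments, "fragments:", per_fragment, "bytes, then",
      total_ip_payload - per_fragment, "bytes")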

  • Packets 8, 9, 10, 11
These are the four “handshake” WPA packets.
These are the four critical packets required by aircrack-ng to crack WPA using a dictionary.
The first pair of packets has a “replay counter” value of 1.
The second pair has a “replay counter” value of 2.
Packets with the same “replay counter” value are matching sets.

 If you have only one packet for a specific “replay counter” value, then its partner is missing from the capture and the packet you do have cannot be used by aircrack-ng. That is why sometimes you have four EAPOL packets in your capture but aircrack-ng still says there are “0” handshakes: you must have matching pairs.

EAPOL packets 1 and 3 should have the same nonce value. If they don't, then they are not part of the matching set.

Aircrack-ng also requires a valid beacon. Ensure this beacon is part of the same packet sequence. For example, if the beacon packet sequence number is higher than the EAPOL packet sequence numbers from the AP, the handshake will be ignored. This is because aircrack-ng “resets” handshake sets when association packets and similar are seen.

Packets 12, 13, 14, 15
These are data packets to/from the wireless client to the LAN via the AP. You can view the TKIP Parameters field to confirm that WPA is used for these packets:

In Wireshark, use “eapol” as a filter. This will show only handshake packets and is useful for analyzing why you don't have the full handshake

http://aircrack-ng.org/doku.php?id=wpa_capture
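
The same "eapol" triage can be scripted; a minimal Scapy sketch (the capture filename wpa.pcap is hypothetical):

from scapy.all import rdpcap, EAPOL

# Count EAPOL (802.1X) frames -- a quick WPA-handshake sanity check.
packets = rdpcap("wpa.pcap")
eapol = [p for p in packets if p.haslayer(EAPOL)]
print(f"{len(eapol)} EAPOL frames")   # a full handshake needs 4 matching frames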

  • tcpdump examples (-nn: don't resolve names or ports, -vv: very verbose, -S: absolute TCP sequence numbers, -i: capture interface, -w: write raw packets to a file; note that tcpdump options must come before the filter expression)

# Traffic from 172.5.2.3 to RDP (port 3389)
tcpdump -nnvvS src 172.5.2.3 and dst port 3389

# Traffic from 172.22.92.62 to port 80
tcpdump -nnvvS src 172.22.92.62 and dst port 80

# The same filter, captured on interface wlo1 and written to capture2
tcpdump -nnvvS -i wlo1 -w capture2 src 172.22.92.62 and dst port 80

# Port-80 traffic from every source except 172.22.92.62
tcpdump -nnvvS -w capture2 not src 172.22.92.62 and dst port 80

# All port-80 traffic on wlo1, written to capture1 (capturing usually needs root)
sudo tcpdump -i wlo1 -w capture1 port 80

  • netcat

Netcat is a computer networking service for reading from and writing network connections using TCP or UDP. Netcat is designed to be a dependable “back-end” device that can be used directly or easily driven by other programs and scripts
http://en.wikipedia.org/wiki/Netcat

Port Scanning with Netcat
For port scanning with Netcat, use the following syntax:

nc [options] hostname [ports]

Ports can be given as individual numbers, a range, or a service name. Some examples:

nc -v 192.168.1.4 21 80 443
nc -v 192.168.1.4 1-200
nc -v 192.168.1.4 http

http://linux.devicegadget.com/attack/netcat/167/
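
The same TCP connect-style check that nc -v performs can be sketched in a few lines of Python (host and ports are illustrative):

import socket

host = "192.168.1.4"          # illustrative target
for port in (21, 80, 443):
    try:
        # Attempt a full TCP connect, like "nc -v host port".
        with socket.create_connection((host, port), timeout=2):
            print(f"{host}:{port} open")
    except OSError:
        print(f"{host}:{port} closed or filtered")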


  • hping
hping is a command-line oriented TCP/IP packet assembler/analyzer.
http://www.hping.org/ 


  • PassiveDNS sniffs traffic from an interface or reads a pcap-file and outputs the DNS-server answers to a log file. PassiveDNS can cache/aggregate duplicate DNS answers in-memory, limiting the amount of data in the logfile without losing the essence in the DNS answer.

https://github.com/gamelinux/passivedns

  • CIRCL Passive DNS is a database storing historical DNS records from various resources including malware analysis or partners. The DNS historical data is indexed, which makes it searchable for incident handlers, security analysts or researchers.

https://www.circl.lu/services/passive-dns/
  • TCP reset attack
TCP reset attack, also known as "forged TCP resets", "spoofed TCP reset packets" or "TCP reset attacks", is a way to tamper with and terminate an Internet connection by sending a forged TCP reset packet. This technique can be used legitimately by a firewall, or abused by a malicious attacker to interrupt Internet connections.
https://en.wikipedia.org/wiki/TCP_reset_attack
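
This is also how firewalls and IDS devices inject resets in practice; a minimal Scapy sketch of a forged RST (addresses, ports, and the sequence number are illustrative, and the reset is only honored if the sequence number falls within the receiver's TCP window):

from scapy.all import IP, TCP, send

# Forge a TCP RST for an observed connection (requires root to send).
rst = IP(src="10.0.0.1", dst="10.0.0.2") / TCP(
    sport=443, dport=51000,   # must match the live connection's endpoints
    flags="R",
    seq=123456789,            # must land in the receiver's window to be accepted
)
send(rst, verbose=False)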