Monday, April 7, 2014

spring managed vs jsf managed

  • spring managed beans vs jsf managed beans?

your beans should be completely managed either by JSF or by Spring.

Many web applications consist of several 'layers', also called 'tiers': the web tier, or presentation tier, for viewing pages of your application; the business tier, or middle tier, for executing the logic and business rules of your application; and the data tier, or persistence tier, for transferring data to/from your database. These tiers might have the following configuration:



  1.     Entity classes that will hold data derived from your database, most probably used with an ORM framework like Hibernate;
  2.     DAO classes that will be used to access the database, at least performing CRUD operations, and, most importantly for your web part, returning Entity classes to your web tier;
  3.     Service classes that will reflect the business operations your application provides;
  4.     Bean classes that will back up your views and will most probably contain data, action methods, transformations etc. used in your web pages.

    The next step is the choice of framework for your web application.
   
  1.     You choose Spring for all layers, which means that your DAOs will be @Repository classes, your Services will be @Service classes and your Beans will be @Component classes. You will most probably use an ORM framework like Hibernate to deal with the database, so your Entities will be JPA @Entity classes properly configured in Hibernate style. Your view technology will most probably be Spring MVC, which was designed to work with the Spring core.
  2.     You choose the native JSF+EJB framework for all layers, which means that your DAOs and Services will be @EJB classes, and your beans will be @ManagedBean classes. You will most probably also use Hibernate as the ORM solution and JPA provider and will do database access via EntityManager. Your view technology will be JSF, as it was naturally intended to be used with the abovementioned technologies.
   
   
    Spring is a lightweight container that will run on simple servlet containers like Tomcat whereas EJBs need an application server like Glassfish to run on. I think that this is the major driving force for combining JSF as a component-based web framework and Spring as a lightweight dependency injection and business tier framework.
   
   
    As we decided to integrate both frameworks together, I will explain how the integration works and why NPEs occur.

  1.     Entity classes will either be JPA/Hibernate annotated classes or simple POJOs configured by xml.
  2.     DAOs will be @Repository classes implementing base interfaces to avoid tight coupling. They will be managed by the Spring framework.
  3.     Services will be @Service classes, also implementing base interfaces. They will also be managed by the Spring framework. Note that the Spring framework provides out-of-the-box transaction management if you mark service methods with @Transactional.
  4.     Beans therefore must be @Component and @Scope("value") classes and must be managed by Spring if you want to use it as a dependency injection framework, allowing you to access your services and other beans via @Autowired.
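To make items 2-4 concrete, here is a minimal sketch of what such Spring-managed layers might look like. All class, method, and package names (UserDao, UserServiceImpl, etc.) are invented for illustration and are not from the original post:

```java
// Hypothetical DAO layer: a Spring-managed @Repository implementing
// a base interface to avoid tight coupling.
@Repository
public class UserDaoImpl implements UserDao {

    @PersistenceContext
    private EntityManager em;

    @Override
    public User findById(Long id) {
        return em.find(User.class, id);
    }
}

// Hypothetical service layer: Spring-managed, with out-of-the-box
// transaction management via @Transactional.
@Service
public class UserServiceImpl implements UserService {

    @Autowired
    private UserDao userDao;

    @Override
    @Transactional
    public User loadUser(Long id) {
        return userDao.findById(id);
    }
}
```

A Spring-managed bean backing the view can then simply @Autowire the service instead of looking it up manually.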

   
    So, the NPE stems from the misunderstanding that your beans, as a logical part of the view, should be managed by JSF (note that @ManagedProperty wouldn't work either). The bean gets instantiated by JSF, but your service resides in the Spring context, which JSF knows nothing about, so injection is not possible. On the other hand, if the bean remains within the Spring context, its lifecycle and dependencies will be managed and injected by Spring.
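To make the failure concrete, the broken setup looks roughly like this (class names are invented for illustration):

```java
// BROKEN: JSF instantiates this bean itself, outside the Spring context,
// so @Autowired is never processed and springService remains null.
@ManagedBean
@RequestScoped
public class JsfManagedBean {

    @Autowired // silently ignored by JSF's own bean container
    private SpringService springService;

    public void action() {
        springService.doWork(); // NullPointerException at runtime
    }
}
```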
   
   
    So, to make it work, mark the bean as

    @Component
    @Scope("request")
    public class SpringManagedBeanToBeUsedByJSF {

        ...

        @Autowired
        private SpringService springService;

        ...

    }
   
    and set up all the prerequisites for using Spring with JSF.
    Consult this excellent example
    http://www.mkyong.com/jsf2/jsf-2-0-spring-hibernate-integration-example/
    This way, all of the beans will be managed by Spring and will be visible in JSF views once you attach the EL resolver in faces-config.xml (allowing JSF to 'see' Spring beans) and the necessary listeners in web.xml.
    When you do it like this, all of the Spring beans can be referenced in .xhtml files, and if you need to put JSF actions in a bean, just go ahead and place them in the (Spring-)managed beans, or make the beans implement the interfaces JSF needs, etc.
    The integration can be achieved only this way. Of course, you can also use JSF managed beans, @FacesConverter and @FacesValidator classes in the application as well; just do not mix them with each other, as using two dependency injection frameworks within one application is at least confusing.
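For instance, the SpringManagedBeanToBeUsedByJSF class above (default Spring bean name springManagedBeanToBeUsedByJSF) could be referenced directly from a page; the property and action names here are assumed for illustration:

```xhtml
<!-- Illustrative .xhtml fragment: the EL resolver looks the name up in Spring -->
<h:form>
    <h:outputText value="#{springManagedBeanToBeUsedByJSF.someProperty}" />
    <h:commandButton value="Save"
                     action="#{springManagedBeanToBeUsedByJSF.someAction}" />
</h:form>
```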
   
    http://stackoverflow.com/questions/14766345/spring-dao-is-not-injected-in-jsf-managed-bean



  • To make a Java bean a Spring-managed service bean, you use the @Component or @Service annotation.


    @Component: a generic stereotype for any Spring-managed component.
    @Repository: used in the persistence layer to declare a Spring-managed DAO component.
    @Service: used in the service layer to declare a Spring-managed business service facade.
    @Controller: used in the presentation layer to declare a Spring-managed controller, for example a web controller.

   
    http://steveschols.wordpress.com/2011/07/28/spring-factorybean-managed-wiring-part-2/
   
   
    JSF is a component based web framework with an emphasis on MVC. Spring is a Dependency Injection and Inversion of Control framework that is not exclusive to web applications.
   
    If you don't understand what these three terms mean:

    Component based web framework

    Dependency Injection

    Inversion of Control

Then my suggestion is that you just stop what you are doing and immediately begin reading.
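If those terms are new, a minimal, framework-free Java sketch may help; all names in it (GreetingService etc.) are invented for the example. Dependency injection means a class receives its collaborators from outside instead of constructing them itself; inversion of control means a container, not your code, drives that wiring:

```java
// A minimal, framework-free sketch of dependency injection.
interface GreetingService {
    String greet(String name);
}

class EnglishGreetingService implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// The bean does NOT call "new EnglishGreetingService()" itself;
// it receives the dependency from outside. That inversion is
// exactly what a DI container like Spring automates.
class GreetingBean {
    private final GreetingService service;

    GreetingBean(GreetingService service) { // constructor injection
        this.service = service;
    }

    String welcome(String name) {
        return service.greet(name);
    }
}

public class DiDemo {
    public static void main(String[] args) {
        // The "container" role: build the object graph and inject dependencies.
        GreetingBean bean = new GreetingBean(new EnglishGreetingService());
        System.out.println(bean.welcome("JSF")); // prints "Hello, JSF"
    }
}
```

Spring plays the role of the main method here: it builds the object graph and injects the dependencies declared via annotations or XML.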


JSF as a standalone framework maintains the scope of its own managed beans without the need for a separate DI framework. When introducing Spring, however, there are naturally going to be conflicts. Spring manages its own beans apart from JSF, so to reference these ManagedBeans and have business objects or DAOs properly injected into them for use, the JSF ManagedBeans need to become Spring controllers.


You can declare a JSF ManagedBean with the @Controller annotation. Spring 3 is smart enough to recognize that it is a JSF managed bean, and the bean name will be whatever name is declared for the ManagedBean.

@Controller
@Scope("session")
@ManagedBean(name="testBean")
public class TestBean {
    ...
}

The EL resolver does basically just that: it resolves EL expressions encountered on your XHTML/JSF page. When referencing testBean, however, the default resolver will not be able to resolve this name correctly, as it refers to a JSF managed bean by that name and will not find the Spring controller with all the Spring-injected dependencies that you need.

Spring 3 solves this problem by providing you a custom EL Resolver to use in place of the one that comes bundled with your JSF implementation. You can declare it to be used in faces-config.xml

<application>
   <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
</application>


If you are just integrating JSF + Spring, without the need for any other Spring-controlled servlets or for Spring Security integration, then no, you do not need anything additional in your web.xml. You would only need to declare the FacesServlet and its context params, plus any other third-party component library servlets that may be necessary for your situation.

http://stackoverflow.com/questions/12317288/how-to-declare-a-jsf-managed-bean-in-a-spring-3-1-application



  • spring managed?

@Component
@Scope("session")
public class AddressBean {
}

vs

jsf managed?
@ManagedBean
@SessionScoped
public class AddressBean {
}


If you were already using Spring for IoC, and if you were to, say, deem it 'cleaner' or 'easier' to use one IoC container for the management of the beans in all of your layers, it is possible to add a resolver to your faces-config.xml file in order to instruct JSF to utilise the Spring container instead:
<el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>


If you are using JSF and have JSF pages (views) and such, then it could be JSF managed beans.
Likewise if you have a Spring MVC project it would be Spring beans.

http://stackoverflow.com/questions/13987826/spring-3-vs-jsf-2-managed-beans



  • Injecting Spring Beans into JSF 2 @ManagedBean Classes

This particular example is taken from some PoC work done with JSF 2 and Spring 3 running within a Tomcat 6 server (on JDK 5). The reason I wanted to inject Spring references into my JSF managed beans was both for ease of use, and because we haven’t made a decision architecturally (yet) to use JAX-WS or Hessian web services, and the only change would be the Spring configuration of the service definition.

I then attempted to get Spring to manage the bean, and found that JSF is able to delegate resolution of managed beans to an external resolver if it is configured correctly. In order to achieve this functionality the following needs to be done:

web.xml
<listener>
    <listener-class>org.springframework.web.context.request.RequestContextListener</listener-class>
</listener>

faces-config.xml
<application>
    <el-resolver>org.springframework.web.jsf.el.SpringBeanFacesELResolver</el-resolver>
</application>

Each backing bean into which you wish to inject Spring references needs to be managed by Spring, so it needs to be annotated with Spring stereotypes.
page1_backing.java
package com.test.jsf;

import javax.faces.bean.ManagedBean;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.annotation.Scope;
import org.springframework.stereotype.Controller;

@ManagedBean
@Controller
@Scope(value = "request")
public class page1_backing
{
    @Autowired
    private UserInfo ui;

    ...
}

The @Autowired annotation is a Spring annotation indicating that the reference will be injected when the Spring bean is created. As the bean is scoped to “request”, the managed bean will be created each time it is referenced by the JSF request lifecycle. In this example, the UserInfo class is a simple POJO that contains user-sensitive data such as forename, surname etc.
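The UserInfo class is not shown in the original post; a plausible sketch of such a POJO (field names assumed from the forename/surname description) is:

```java
// Hypothetical sketch of the UserInfo POJO referenced in the example:
// a plain data holder with no framework dependencies.
public class UserInfo {
    private String forename;
    private String surname;

    public String getForename() { return forename; }
    public void setForename(String forename) { this.forename = forename; }

    public String getSurname() { return surname; }
    public void setSurname(String surname) { this.surname = surname; }
}
```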


The final piece of the puzzle is to add the component-scan annotation to the Spring configuration file so that stereotypes are automatically picked up when the Spring context is loaded.

application-context.xml
<context:component-scan base-package="com.test.jsf" />

http://deevodavis.wordpress.com/injecting-spring-beans-into-jsf-2-managedbean-classes/

computer forensics

  • what is computer forensics

    Computer forensics serves the interest of figuring out what happened, when it happened, how it happened, and who was involved.

    This can be for the purpose of performing a root cause analysis of a computer system that has failed or is not operating properly,

    or to find out who is responsible for misuse of computer systems

    or perhaps who committed a crime using a computer system or against a computer system

    computer forensic techniques and methodologies are commonly used for conducting computing investigations

    Think about a murder case or a case of financial fraud. What do the investigators involved in these cases need to ascertain? What happened, when did it happen, how did it happen, and who was involved.


    The preservation, identification, extraction, interpretation, and documentation of computer evidence, to include the rules of evidence, legal processes, integrity of evidence, factual reporting of the information found, and providing expert opinion in a court of law or other legal and/or administrative proceeding as to what was found.


    References:

    http://www.csisite.net/forensics.htm
    http://www.computerforensicsworld.com
    http://www.craigball.com
    http://en.wikipedia.org/wiki/Computer_forensics
    http://swizardb.blogspot.com/search/label/Computer%20Forensics


    • Computer forensics is the application of investigation and analysis techniques to gather and preserve evidence from a particular computing device in a way that is suitable for presentation in a court of law. The goal of computer forensics is to perform a structured investigation while maintaining a documented chain of evidence to find out exactly what happened on a computing device and who was responsible for it.
    http://searchsecurity.techtarget.com/definition/computer-forensics


    • Computer forensics is the practice of collecting, analysing and reporting on digital data in a way that is legally admissible.
    https://forensiccontrol.com/resources/beginners-guide-computer-forensics/
  • The Open Computer Forensics Architecture (OCFA) is a distributed open-source computer forensics framework used to analyze digital media within a digital forensics laboratory environment. The framework was built by the Dutch national police.

  • https://en.wikipedia.org/wiki/Open_Computer_Forensics_Architecture

  • Open Computer Forensics Architecture

  • The Open Computer Forensics Architecture (OCFA) is a modular computer forensics framework built by the Dutch National Police Agency [KLPD/Dutch]. The main goal is to automate the digital forensic process to speed up the investigation and give tactical investigators direct access to the seized data through an easy to use search and browse interface.
    http://ocfa.sourceforge.net/

  • The Open Computer Forensics Architecture (OCFA) is a modular computer forensics framework built by the "Dutch National Police Agency". The main goal is to automate the digital forensic process to speed up the investigation and give tactical investigators direct access to the seized data through an easy to use search and browse interface...
  • http://www.forensicfocus.com/index.php?name=News&file=article&sid=477
     Exploring the Open Computer Forensics Architecture
    Automate the forensics process with the Dutch police department's Open Computer Forensics Architecture. http://www.linux-magazine.com/Issues/2008/93/OCFA

  •  DFF (Digital Forensics Framework) is a free and Open Source computer forensics software built on top of a dedicated Application Programming Interface (API). It can be used both by professional and non-expert people in order to quickly and easily collect, preserve and reveal digital evidence without compromising systems and data. See more at http://www.toolwar.com/2014/06/dff-digital-forensics-framework.html

  • Digital Forensics Framework
  • DFF is an Open Source computer forensics platform built on top of a dedicated Application Programming Interface (API). DFF proposes an alternative to the aging digital forensics solutions used today. Designed for simple use and automation, the DFF interface guides the user through the main steps of a digital investigation, so it can be used by both professionals and non-experts to quickly and easily conduct a digital investigation and perform incident response.
    http://www.arxsys.fr/

  •  Preserve digital chain of custody: Software write blocker, the cryptographic hash calculation.

  •    Access to local and remote devices: Disk drives, removable devices, remote file systems
        Read standard digital forensics file formats: Raw, Encase EWF, AFF 3 file formats
        Virtual machine disk reconstruction: VMWare (VMDK) compatible
        Windows and Linux OS forensics: Registry, Mailboxes, NTFS, EXTFS 2/3/4, FAT 12/16/32 file systems
        Quickly triage and search for (meta-)data: Regular expressions, dictionaries, content search, tags, timeline.
        Recover hidden and deleted artifacts: Deleted files/folders, unallocated spaces, carving
        Volatile memory forensics: Processes, local files, binary extraction, network connections
    http://tools.kali.org/forensics/dff







  • AlmaNebula, a conceptual framework for the analysis of digital evidence built on top of a Cloud infrastructure, which aims to embody the concept of “Forensics as a service”.

  • http://www.sciencedirect.com/science/article/pii/S1877050913006315

  • EnCase is a suite of digital forensics products by Guidance Software. The software comes in several forms designed for forensic, cyber security and e-discovery use.
http://en.wikipedia.org/wiki/EnCase


  • Built on the EnCase Enterprise platform are market-leading electronic discovery and cyber security solutions, EnCase eDiscovery, EnCase Cybersecurity, and EnCase Analytics. They empower organizations to respond to litigation discovery requests, perform sensitive data discovery for compliance purposes, conduct a speedy and thorough security incident response, and reveal previously hidden advanced persistent threats or malicious insider activity.

http://www.guidancesoftware.com/


  • Forensic Toolkit- FTK
FTK is a court-accepted digital investigations platform built for speed, stability, and ease of use
http://www.accessdata.com/products/digital-forensics/ftk

  • IBM i2 provides intelligence analysis, law enforcement and fraud investigation solutions. i2 offerings deliver flexible capabilities that help combat crime, terrorism and fraudulent activity.
http://www-01.ibm.com/software/info/i2software/


  •  Autopsy® is a digital forensics platform and graphical interface to The Sleuth Kit® and other digital forensics tools. It is used by law enforcement, military, and corporate examiners to investigate what happened on a computer.

 http://www.sleuthkit.org/autopsy/



  •  The Sleuth Kit® is a collection of command line tools and a C library that allows you to analyze disk images and recover files from them. It is used behind the scenes in Autopsy and many other open source and commercial forensics tools.

 http://www.sleuthkit.org/


  • SANS Investigative Forensic Toolkit (SIFT) Workstation Version 3

SANS built the SANS Investigative Forensic Toolkit (SIFT) Workstation for incident response and digital forensics use and made it available to the whole community as a public service. The free SIFT toolkit, which can match any modern incident response and forensic tool suite, is also featured in SANS' Advanced Incident Response course (FOR 508).
http://digital-forensics.sans.org/community/downloads


  • The Volatility Framework

The Volatility Foundation is an independent 501(c)(3) non-profit organization that maintains and promotes open source memory forensics with The Volatility Framework.
http://www.volatilityfoundation.org/



  • FTK Imager

FTK Imager is a data preview and imaging tool that allows you to examine files and folders on local hard drives, network drives, CDs/DVDs, and review the content of forensic images or memory dumps.
http://accessdata.com/product-download/digital-forensics/ftk-imager-lite-version-3.1.1


  • dc3dd

A patch to the GNU dd program, this version has several features intended for forensic acquisition of data. Highlights include hashing on-the-fly, split output files, pattern writing, a progress meter, and file verification.
https://sourceforge.net/projects/dc3dd/


  • CAINE (Computer Aided INvestigative Environment) is an Italian GNU/Linux live distribution created as a Digital Forensics project.

http://www.caine-live.net/


  • bulk_extractor

bulk_extractor is a program that extracts features such as email addresses, credit card numbers, URLs, and other types of information from digital evidence files. It is a useful forensic investigation tool for many tasks such as malware and intrusion investigations, identity investigations and cyber investigations, as well as analyzing imagery and password cracking.

http://tools.kali.org/forensics/bulk-extractor


  •  Guymager is a free forensic imager for media acquisition. It is contained on several live CDs and VMs; some of them are updated more often than others, so take care to choose one with a recent version of Guymager.


http://guymager.sourceforge.net/

  • libyal is a collection of libraries that are used to access various data formats, such as the OLE Compound File or NT File System. The original use case for the libraries is for analyzing data formats or their content for analysis in the context of digital forensics and incident response (DFIR).
https://github.com/libyal/libyal/wiki
  • plaso is a Python-based backend engine for the tool log2timeline.
log2timeline is a tool designed to extract timestamps from various files found on a typical computer system and aggregate them.
The initial purpose of plaso was to have the timestamps in a single place for computer forensic analysis (aka Super Timeline).
https://github.com/log2timeline/plaso/wiki

  • plaso is a Python-based backend engine for the tool log2timeline.
https://github.com/log2timeline/plaso
  • Rekall is an advanced forensic and incident response framework. While it began life purely as a memory forensic framework, it has now evolved into a complete platform.
http://www.rekall-forensic.com/

  • The Rekall Framework is a completely open collection of tools, implemented in Python under the Apache and GNU General Public License, for the extraction and analysis of digital artifacts from computer systems.
https://github.com/google/rekall


  • Exercise 2 - Track User Mode Process Allocations


Heap allocations are made directly via Heap APIs (HeapAlloc, HeapRealloc, and C/C++ allocations such as new, malloc, realloc, calloc) and are serviced using three types of heaps:

  1. Mainline NT Heap – Services allocation requests of sizes less than 64 KB.
  2. Low Fragmentation Heap – Composed of sub-segments that service allocation requests of fixed size blocks.
  3. VirtualAlloc – Services allocation requests of sizes greater than 64 KB.


VirtualAlloc is used for large dynamic memory allocations that are made directly via the VirtualAlloc API. The typical usage is usually for bitmaps or buffers. You can use VirtualAlloc to reserve a block of pages and then make additional calls to VirtualAlloc to commit individual pages from the reserved block. This enables a process to reserve a range of its virtual address space without consuming physical storage until it is needed.

There are two concepts to understand in this area:


  1. Reserved memory: Reserves an address range for usage but does not acquire memory resources.

  2. Committed memory: Ensures that either physical memory or page file space will be available if the addresses are referenced.

https://docs.microsoft.com/en-us/windows-hardware/test/wpt/memory-footprint-optimization-exercise-2


VirtualAlloc is a specialized allocation of the Windows virtual memory system, meaning it allocates straight into virtual memory via reserved blocks of memory.
HeapAlloc allocates any size of memory that is requested dynamically in Windows, and is a concept of Microsoft Windows.

Heaps are set up by VirtualAlloc and are used to initially reserve allocation space from the operating system.
  • Forensic Analysis of Windows User-Space Applications Through Heap Allocations


Why Userspace analysis?
Forensically very valuable
Users interact directly with applications.
Applications interact with the OS kernel.
Therefore we can sometimes infer user activity from OS kernel evidence, but not always.
E.g.: a user chats on IRC; sockets, connections, network packets, and strings in the IRC process provide no context.

Challenges for user-space analysis
So many userspace applications exist that manual reversing just does not scale.
Userspace memory is often paged and address translation is more complex;
current tools and techniques are unable to resolve
userspace memory from Prototype PTEs or the pagefile.
Why is page translation in userspace fairly complex?
Have to consider shared memory (Prototype PTEs).
Some memory forensic tools are extremely buggy:
they associate random data with the content of user-space memory (very dangerous from an evidentiary perspective).

Conclusions
For the first time, a FOSS memory analysis framework supports reliable user space address translation
Prototype PTE, Page file, Transitioned PDEs etc.
High-quality address translation is essential in order to reliably parse heap structures
Thorough heap analysis enables seeing memory through an app's own abstractions.


https://pdfs.semanticscholar.org/aed3/087a4f3c36dc4e1becfa8cc5b9fb0af4d6fa.pdf


  • Incident Forensics Lifecycle


GCTI certification
CTI - cyber threat intelligence
Diamond Model and Cyber Kill-Chain

Incident response lifecycle
preparation
identification
containment
eradication
recovery
lessons learned

Digital forensics lifecycle 
collection
examination
analysis
reporting

Cyber Kill-Chain
used for identification and prevention of cyber intrusions and describes 7 stages of a cyber attack
reconnaissance (and precursors)
weaponization
delivery
exploitation
installation 
command and control
actions on objectives

https://cyberforensicator.com/2019/03/24/incident-forensics-lifecycle/