Monday, June 3, 2019

Microservices Security

  • Microservices Security – OAuth2 and OpenID Connect
Microservices Security
These services and web applications should be stateless and decoupled so that they can be deployed and scaled easily in the cloud.
Token-based security is the heart of microservices security; it is used to secure microservices instead of traditional cookie (form) based security.

What is Cookie based security?
It is stateful: the server keeps track of sessions, and the client (browser) holds a session identifier in the cookie.
It is implicit: the browser sends the cookie automatically with every request to the domain (which is what makes CSRF attacks possible). Cookies also have a major limitation: they don't work for domains they're not valid for, so the client can't use them to access cross-domain resources/APIs.


What is Token based security (using JWT)?
It is stateless: the server does not keep track of logged-in users' sessions. The authorization server (STS) issues a token to the client, and the client sends the token explicitly in the request header; the server uses it to verify the authenticity and access rights of the request (which is why a CSRF attack is not possible). The client can send the token to any domain to access cross-domain APIs (if CORS is enabled for those APIs).

Why token based security?
It is stateless, scalable, and decoupled: a JWT contains all the information needed to check its validity and the user's identity and access details. A client can access cross-domain APIs using a token, which is not possible with a cookie without SSO.
Cookies are not supported by every client (for example, native mobile apps), but token-based security can be easily implemented for native mobile apps and Internet of Things devices.
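As a minimal sketch of "sending the token explicitly" (the URL and token value below are hypothetical), a Python client attaches the token itself on every request:

```python
import urllib.request

# Hypothetical token and API endpoint, for illustration only.
token = "eyJhbGciOiJIUzI1NiJ9.example-payload.example-signature"

req = urllib.request.Request(
    "https://api.example.com/orders",
    headers={"Authorization": f"Bearer {token}"},
)

# Unlike a cookie, nothing is attached automatically by the browser:
# the client adds the Authorization header to each request explicitly.
print(req.get_header("Authorization"))
```

Because nothing is sent automatically, a malicious third-party page cannot ride along on the user's session, which is why CSRF does not apply here.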

What is JWT (JSON Web Tokens)?
JWTs are JSON-encoded data structures that carry information such as the issuer, the subject, claims, and the expiration time; in OAuth2, the access token, refresh token, and ID token can all be issued in this format. A JWT is signed for tamper-proofing and authenticity, and it can additionally be encrypted to protect its contents using a symmetric or asymmetric approach.

What is OAuth2?
OAuth2 is an authorization protocol. It solves the problem of a user wanting to grant client software (browser-based web apps, native mobile apps, or desktop apps) access to their data.

OpenID Connect –
OpenID Connect builds on top of OAuth2 and adds authentication. It adds several constraints to OAuth2, such as the UserInfo endpoint, the ID token, discovery and dynamic registration of OpenID Connect providers, and session management. JWT is the mandatory format for the ID token.


API Gateway: Microservices Security–
In a microservice architecture, the API Gateway acts as the single entry point for all types of client apps: public JavaScript clients, traditional web apps, native mobile apps, and third-party client apps.
The API Gateway authenticates the user using a cookie or token and passes a JWT to the microservices. Each microservice uses this JWT to identify the user and validate access (request authorization using the access token). A microservice can also include this JWT in the headers of requests it makes to other services.
http://proficientblog.com/microservices-security/


  • API Security: Deep Dive into OAuth and OpenID Connect

OAuth 2 and OpenID Connect are fundamental to securing your APIs.
In addition to the steps taken to secure your API, you need to take measures to protect your servers and the mobile devices that run your apps.
Without a holistic approach, your API may be incredibly secure, your OAuth server locked down, and your OpenID Connect Provider tucked away in a safe enclave, yet your firewalls, network, cloud infrastructure, or mobile platform may still open you up to attack if you don't also strive to make them as secure as your API.

Overview of OAuth
OAuth is a sort of “protocol of protocols” or “meta protocol,” meaning that it provides a useful starting point for other protocols (e.g., OpenID Connect, NAPS, and UMA). This is similar to the way WS-Trust was used as the basis for WS-Federation, WS-SecureConversation, etc., if you have that frame of reference.

Beginning with OAuth is important because it solves a number of important needs that most API providers have, including:

    Delegated access
    Reduction of password sharing between users and third parties (the so-called "password anti-pattern")
    Revocation of access
With OAuth, users can revoke access to specific applications without breaking other apps that should be allowed to continue to act on their behalf.

Actors in OAuth

There are four primary actors in OAuth:

    Resource Owner (RO): The entity that is in control of the data exposed by the API, typically an end user
    Client: The mobile app, web site, etc. that wants to access data on behalf of the Resource Owner
    Authorization Server (AS): The Security Token Service (STS) or, colloquially, the OAuth server that issues tokens
    Resource Server (RS): The service that exposes the data, i.e., the API
Scopes
OAuth defines something called "scopes," which represent the rights being delegated.
The client may request certain rights, but the user may grant only some of them, or allow others that aren't even requested.

Kinds of Tokens
In OAuth, there are two kinds of tokens:
    Access Tokens: These are tokens that are presented to the API
    Refresh Tokens: These are used by the client to get a new access token from the AS

Think of access tokens like a session that is created for you when you log in to a web site.
As long as that session is valid, you can continue to interact with the web site without having to log in again.
Once that session has expired, you can get a new one by logging in again with your password.
Refresh tokens are like passwords in this comparison. Also, just like passwords, the client needs to keep refresh tokens safe.
It should persist these in a secure credential store. Loss of these tokens will require the revocation of all consents that users have performed.
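To make the access-token/refresh-token split concrete, here is a sketch of the refresh exchange request body (the token-endpoint path, client credentials, and refresh token value are all hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical client credentials and refresh token, for illustration.
refresh_request = urlencode({
    "grant_type": "refresh_token",
    "refresh_token": "token-from-secure-credential-store",
    "client_id": "my-client",
    "client_secret": "my-client-secret",
})

# POSTing this body to the AS token endpoint (e.g. /oauth/token)
# yields a fresh access token without involving the user again.
print(refresh_request)
```

This is the "logging in again" step of the analogy: the refresh token plays the role of the password, which is why it must come from a secure credential store.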

Passing Tokens
There are two distinct ways in which they are passed:

    By value
    By reference
The run-time will either copy the data onto the stack as it invokes the function being called (by value) or it will push a pointer to the data (by reference). In a similar way, tokens will either contain all the identity data in them as they are passed around or they will be a reference to that data.

Profiles of Tokens

There are different profiles of tokens as well. The two that you need to be aware of are these:

    Bearer tokens
    Holder of Key (HoK) tokens

If you find a dollar bill on the ground and present it at a shop, the merchant will happily accept it: she looks at the issuer of the bill and trusts that authority. Likewise, the API accepts the contents of a bearer token because it trusts the issuer (the OAuth server). The API does not know if the client presenting the token is really the one who originally obtained it.

Where some proof is needed that the client is the one the token was issued to, HoK tokens should be used. HoK tokens are like a credit card: if you find my credit card on the street and try to use it at a shop, the merchant will (hopefully) ask for some form of ID or a PIN that unlocks the card.


Types of Tokens
The OAuth specification doesn’t stipulate any particular type of tokens.
Example types include:
    WS-Security tokens, especially SAML tokens
    JWT tokens (which I’ll get to next)
    Legacy tokens (e.g., those issued by a Web Access Management system)
    Custom tokens

Custom tokens are the most prevalent when passing tokens around by reference; in that case, they are randomly generated strings. When passing by value, you'll typically be using JWTs.
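A by-reference token can be sketched in one line with Python's standard secrets module; the security of the scheme rests entirely on the string being unguessable:

```python
import secrets

# A by-reference token carries no identity data itself; the AS stores
# the real data server-side, keyed by this unguessable random string.
reference_token = secrets.token_urlsafe(32)  # 32 bytes of entropy

print(reference_token)
```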

JSON Web Tokens
JSON Web Tokens, or JWTs (pronounced like the English word "jot"), are tokens containing a JSON data structure with information including:

    The issuer
    The subject or authenticated user (typically the Resource Owner)
    How the user authenticated and when
    Who the token is intended for (i.e., the audience)
 
These tokens are very flexible, allowing you to add your own claims (i.e., attributes or name/value pairs) that represent the subject. JWTs were designed to be lightweight and to fit snugly in HTTP headers and query strings. To this end, the JSON is split into different parts (header, body, signature) and base64url encoded.
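The encoding mechanics can be sketched directly; the claim values below are invented, and the signature segment (normally the third part) is left to a JWT library:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding with the trailing padding stripped
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

header = {"alg": "HS256", "typ": "JWT"}
payload = {
    "iss": "https://as.example.com",  # issuer (hypothetical)
    "sub": "user-123",                # subject
    "aud": "my-api",                  # audience
    "exp": 1700000000,                # expiration time
}

# header.payload — a signature would normally follow as a third segment
unsigned = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
print(unsigned)
```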

If it helps, you can compare JWTs to SAML tokens. They are less expressive, however, and you cannot do everything that you can do with SAML tokens. Also, unlike SAML they do not use XML, XML name spaces, or XML Schema. This is a good thing as JSON imposes a much lower technical barrier on the processors of these types of tokens.

OAuth Flow

OAuth defines different “flows” or message exchange patterns. These interaction types include:

    The code flow (or web server flow)
    Client credential flow
    Resource owner credential flow
    Implicit flow
 
 
The code flow is by far the most common.
In it, the client is (typically) a web server, and that web site wants to access an API on behalf of a user.
You've probably used it as a Resource Owner many times, for example, when you log in to a site using a social network identity. Even when the social network isn't using OAuth 2 per se, the user experience is the same.

Improper and Proper Uses of OAuth
OAuth is not used for authorization.
OAuth is also not for authentication.
OAuth is also not for federation.
It’s for delegation, and delegation only

To see how this nuance makes a very big difference, imagine you’re a business owner. Suppose you hire an assistant to help you manage the finances. You consent to this assistant withdrawing money from the business’ bank account. Imagine further that the assistant goes down to the bank to use these newly delegated rights to extract some of the company’s capital. The banker would refuse the transaction because the assistant is not authorized — certain paperwork hasn’t been filed, for example. So, your act of delegating your rights to the assistant doesn’t mean squat. It’s up to the banker to decide if the assistant gets to pull money out or not. In case it’s not clear, in this analogy, the business owner is the Resource Owner, the assistant is the client, and the banker is the API.

Building OpenID Connect Atop OAuth
OpenID Connect (which is often abbreviated OIDC) was made with mobile in mind. For the new kind of tokens that it defines, the spec says that they must be JWTs, which were also designed for low-bandwidth scenarios.
By building on OAuth, you will gain both delegated access and federation capabilities with (typically) one product.

OpenID Connect is a modern federation specification. It is a passive profile, meaning it is bound to a passive user agent that does not take an active part in the message exchange (though the client does). This exchange flows over HTTP, and is analogous to the SAML artifact flow.

OpenID Connect is a replacement for SAML and WS-Federation.
OpenID Connect defines a new kind of token: ID tokens. These are intended for the client. Unlike access tokens and refresh tokens that are opaque to the client, ID tokens allow the client to know, among other things:

    How the user authenticated (i.e., what type of credential was used)
    When the user authenticated
    Various properties about the authenticated user (e.g., first name, last name, shoe size, etc.)
 
This is useful when your client needs a bit of info to customize the user experience. Many times I've seen people use by-value access tokens that contain this info and let the client pull the values out of the API's token; the ID token is the proper place for it.

The User Info Endpoint and OpenID Connect Scopes
The spec defines a few specific scopes that the client can pass to the OpenID Connect Provider or OP (which is another name for an AS that supports OIDC):

    openid (required)
    profile
    email
    address
    phone

The first is required and switches the OAuth server into OpenID Connect mode.
The others are used to inform the user about what type of data the OP will release to the client. If the user authorizes the client to access these scopes, the OpenID Connect Provider will release the respective data (e.g., email) to the client when the client calls the UserInfo endpoint.

Not Backward Compatible with v. 2
It’s important to be aware that OpenID Connect is not backward compatible with OpenID 2 (or 1 for that matter)
OpenID Connect is effectively version 3 of the OpenID specification.
https://nordicapis.com/api-security-oauth-openid-connect-depth/
  • API Security for Distributed Authorization Realms

A centralized API gateway with distributed authorization realms
Centralized OAuth2 authorization server with a distributed authentication providers using federation
Centralized API gateway with centralized OAuth2 authorization servers using assertion grant flow


In the most basic API security use case pattern, we have one API gateway, one authorization realm, and one or more resource servers. An authorization realm is where the resource owner will authenticate and authorize the OAuth2 client to access resources on his/her behalf.
https://wso2.com/library/articles/2018/02/api-security-for-distributed-authorization-realms/

SAML2 vs JWT: Understanding OAuth2

  • XACML (Extensible Access Control Markup Language) is an open standard XML-based language designed to express security policies and access rights to information for Web services, digital rights management (DRM), and enterprise security applications

https://searchcio.techtarget.com/definition/XACML

  • XACML stands for "eXtensible Access Control Markup Language". The standard defines a declarative fine-grained, attribute-based access control policy language, an architecture, and a processing model describing how to evaluate access requests according to the rules defined in policies

https://en.wikipedia.org/wiki/XACML
Authorization for APIs with XACML and OAuth 2.0
XACML PEP authorization
100% Pure XACML

  • WS-Trust is a WS-* specification and OASIS standard that provides extensions to WS-Security, specifically dealing with the issuing, renewing, and validating of security tokens, as well as with ways to establish, assess, and broker trust relationships between participants in a secure message exchange.

https://en.wikipedia.org/wiki/WS-Trust
Building an Ecosystem for API Security
Access Control Service Oriented Architecture Security
API Security for Modern Web Apps
Securing Microservices: The API gateway, authentication and authorization

  • Hash-based Message Authentication Code (HMAC)

Hash-based Message Authentication Code (HMAC) is a message authentication code that uses a cryptographic key in conjunction with a hash function.
Hash-based message authentication code (HMAC) provides the server and the client each with a private key that is known only to that specific server and that specific client. The client creates a unique HMAC, or hash, per request to the server by hashing the request data with the private key and sending it as part of the request. What makes HMAC more secure than a plain Message Authentication Code (MAC) is that the key and the message are hashed in separate steps.
Once the server receives the request and regenerates its own unique HMAC, it compares the two HMACs. If they're equal, the client is trusted and the request is executed. This process is often called a secret handshake.
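The secret handshake described above can be sketched with Python's standard hmac module (the key and request body are made up for illustration):

```python
import hashlib
import hmac

secret_key = b"shared-secret-key"  # known only to this client/server pair

def sign(message: bytes) -> str:
    # Key and message are combined by the HMAC construction itself,
    # not simply concatenated and hashed.
    return hmac.new(secret_key, message, hashlib.sha256).hexdigest()

request_body = b'{"action": "transfer", "amount": 100}'
client_mac = sign(request_body)          # sent along with the request

# Server side: regenerate its own HMAC and compare in constant time
request_is_valid = hmac.compare_digest(sign(request_body), client_mac)
print(request_is_valid)
```

Using hmac.compare_digest rather than == avoids leaking information through timing differences during the comparison.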
https://searchsecurity.techtarget.com/definition/Hash-based-Message-Authentication-Code-HMAC

  • On a final note, IPsec authentication for both AH and ESP uses a scheme called HMAC, a keyed-hashing message authentication code described in FIPS 198 and RFC 2104. HMAC uses a shared secret key between two parties rather than public key methods for message authentication. The generic HMAC procedure can be used with just about any hash algorithm, although IPsec specifies support for at least MD5 and SHA-1 because of their widespread use.

In HMAC, both parties share a secret key. The secret key will be employed with the hash algorithm in a way that provides mutual authentication without transmitting the key on the line. IPsec key management procedures will be used to manage key exchange between the two parties.

Recall that hash functions operate on a fixed-size block of input at one time; MD5 and SHA-1, for example, work on 64 byte blocks. These functions then generate a fixed-size hash value; MD5 and SHA-1, in particular, produce 16 byte (128 bit) and 20 byte (160 bit) output strings, respectively. For use with HMAC, the secret key (K) should be at least as long as the hash output.
The following steps provide a simplified, although reasonably accurate, description of how the HMAC scheme would work with a particular plaintext MESSAGE (Figure 16):

Alice pads K so that it is as long as an input block; call this padded key Kp. Alice computes the hash of the padded key followed by the message, i.e., HASH (Kp:MESSAGE).
Alice transmits MESSAGE and the hash value.
Bob has also padded K to create Kp. He computes HASH (Kp:MESSAGE) on the incoming message.
Bob compares the computed hash value with the received hash value. If they match, then the sender — Alice — must know the secret key and her identity is, thus, authenticated.
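The simplified steps above can be sketched directly; note that this mirrors only the simplified description given here, while real HMAC (RFC 2104) additionally uses inner and outer padded keys with two hash passes:

```python
import hashlib

BLOCK_SIZE = 64  # input block size in bytes for MD5, SHA-1, and SHA-256

def simplified_mac(key: bytes, message: bytes) -> str:
    # Pad K with zero bytes to one input block (Kp),
    # then hash Kp followed by MESSAGE: HASH(Kp:MESSAGE)
    kp = key.ljust(BLOCK_SIZE, b"\x00")
    return hashlib.sha256(kp + message).hexdigest()

# Alice computes the hash and transmits MESSAGE plus the hash value
alice_hash = simplified_mac(b"shared-secret", b"MESSAGE")

# Bob recomputes with his copy of K and compares
bob_hash = simplified_mac(b"shared-secret", b"MESSAGE")
print(alice_hash == bob_hash)
```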
https://www.garykessler.net/library/crypto.html
  • HMAC Authentication – An Overview
HMAC authentication provides a simple way to authenticate an HTTP request using a secret key that is known to both client and server. Generally, this secret key is a unique ID created at registration time and stored in the database. Using the secret key and a message based on the request content, the client generates a signature (MAC) using the HMAC algorithm, and this signature is attached to the Authorization header of the HTTP request. When the server receives the request, it extracts the hashed signature (MAC) from the request header and calculates its own version of the signature to verify that the received signature matches the calculated one. If the two signatures match, the system concludes that the request is valid and should be served. If they don't match, the request is dropped and the system responds with an error message.
https://www.codeproject.com/Articles/766171/ApiFrame-A-simple-library-for-Web-API-security-exc

  • JSON Web Tokens are an open, industry standard RFC 7519 method for representing claims securely between two parties.

JWT.IO allows you to decode, verify and generate JWT.
https://jwt.io/

  • How JSON Web Token (JWT) Secures Your API

You've probably heard that JSON Web Token (JWT) is the current state-of-the-art technology for securing APIs.

API Authentication
The difficulty in securing an HTTP API is that requests are stateless — the API has no way of knowing whether any two requests were from the same user or not.

JSON Web Token
What we need is a way to allow a user to supply their credentials just once, but then be identified in another way by the server in subsequent requests.
Several systems have been designed for doing this, and the current state-of-the-art standard is JSON Web Token.

Structure of the Token
Normally, a JSON web token is sent via the header of HTTP requests.
Firstly, the token consists of three different strings separated by periods. These three strings are base64 encoded and correspond to the header, the payload, and the signature.

We can decode these strings to get a better understanding of the structure of a JWT.
Header
The header is meta information about the token.
Payload
The payload can include any data you like, but you might just include a user ID if the purpose of your token is API access authentication.
It's important to note that the payload is not secure.
Anyone can decode the token and see exactly what's in the payload.
For that reason, we usually include an ID rather than sensitive identifying information like the user's email.
Even though this payload is all that's needed to identify a user on an API, it doesn't provide a means of authentication.

So this brings us to the signature, which is the key piece for authenticating the token.

Hashing Algorithms
To begin with, a hashing algorithm is a function for transforming a string into a new string called a hash.
The most important property of a hash is that you can't work backwards from the hash to identify the original string; the function is one-way.
There are many different types of hashing algorithms, but SHA256 is commonly used with JWT.

JWT Signature
So, coming back to the JWT structure, let's now look at the third piece of the token, the signature.
Firstly, HMACSHA256 is the name of a hashing function and takes two arguments: the string to hash, and the "secret" (defined below).
Secondly, the string we hash is the base 64 encoded header, plus the base 64 encoded payload.
Thirdly, the secret is an arbitrary piece of data that only the server knows.
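Putting the three points together (the secret value and payload below are made up for illustration), the signature computation can be sketched as:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

secret = b"server-side-secret"  # hypothetical; known only to the server

header_b64 = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
payload_b64 = b64url(json.dumps({"userId": 42}).encode())

# HMACSHA256(base64url(header) + "." + base64url(payload), secret)
signing_input = f"{header_b64}.{payload_b64}".encode()
signature = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())

jwt_token = f"{header_b64}.{payload_b64}.{signature}"
print(jwt_token)
```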

Q. Why include the header and payload in the signature hash?
This ensures the signature is unique to this particular token.

Q. What's the secret?
We said before that you can't determine a hash's input from looking at the output.
However, the header and payload that go into the signature are public information, and the hashing algorithm is usually specified in the header, so without a secret anyone could recompute the hash themselves.
Including the secret in the hash prevents someone from generating their own hash to forge the token. And since the hash obscures the information used to create it, no one can figure out the secret from the hash, either.
Mixing secret data into the hash input in this way produces a keyed hash (loosely comparable to salting) and makes forging the token practically impossible.

Authentication Process
Login
A token is generated when a user logs in and is stored in the database with the user model.
The token then gets attached as the authorization header in the response to the login request.

Authenticating Requests
When the server receives a request with an authorization token attached, the following happens:

It decodes the token and extracts the ID from the payload.
It looks up the user in the database with this ID.
It compares the request token with the one that's stored with the user's model. If they match, the user is authenticated.
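The three steps can be sketched as follows; the in-memory "database", user record, and token value are all hypothetical stand-ins:

```python
import base64
import json

# Stand-in for a real database: user ID -> user record with the stored token.
users_db = {
    42: {"name": "alice", "token": "aaa.eyJ1c2VySWQiOiA0Mn0.sss"},
}

def authenticate(request_token: str):
    # 1. Decode the token and extract the ID from the payload
    payload_b64 = request_token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)   # restore stripped padding
    user_id = json.loads(base64.urlsafe_b64decode(payload_b64))["userId"]
    # 2. Look up the user in the database with this ID
    user = users_db.get(user_id)
    # 3. Compare the request token with the stored one
    if user is not None and user["token"] == request_token:
        return user
    return None

print(authenticate("aaa.eyJ1c2VySWQiOiA0Mn0.sss"))
```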

Logging Out
If the user logs out, simply delete the token attached to the user model, and now the token will no longer work. A user will need to log in again to generate a new token.

https://dzone.com/articles/how-json-web-token-jwt-secures-your-api
  • Using JWT (JSON Web Tokens) to authorize users and protect API routes


JSON Web Token (JWT) is an open standard (RFC 7519) that defines a compact and self-contained way for securely transmitting information between parties as a JSON object. This information can be verified and trusted because it is digitally signed. JWTs can be signed using a secret (with the HMAC algorithm) or a public/private key pair using RSA or ECDSA.
Also note that if you want to follow along completely, I will be using Postman to access the API routes.

https://medium.com/@maison.moa/using-jwt-json-web-tokens-to-authorize-users-and-protect-api-routes-3e04a1453c3e


  • Securing your APIs with OpenID Connect


OpenID Connect (OIDC) is built on top of the OAuth 2.0 protocol and focuses on identity assertion.
OIDC provides a flexible framework for identity providers to validate and assert user identities for Single Sign-On (SSO) to web, mobile, and API workloads.

OIDC uses the same grant types as OAuth (implicit, password, application, and access code) but uses OIDC-specific scopes, such as openid, with optional scopes to obtain identity information, such as email and profile. OIDC issues a JSON Web Token (JWT), rather than the opaque token of plain OAuth, and the JWT can optionally be signed and encrypted.

https://www.ibm.com/support/knowledgecenter/en/SSMNED_5.0.0/com.ibm.apic.toolkit.doc/tapic_sec_api_config_oidc.html

Building an App Using Amazon Cognito and an OpenID Connect Identity Provider
Netscaler OpenID Connect federated authentication with Google
Configuring NGINX for OAuth/OpenID Connect SSO with Keycloak/Red Hat SSO
  • Securing REST API using Keycloak and Spring Oauth2


Keycloak is an open source Identity and Access Management server that is OAuth2 and OpenID Connect (OIDC) protocol compliant.
This article shows how Spring Boot REST APIs can be secured with Keycloak using the Spring OAuth2 library.

The Keycloak documentation suggests three ways to secure Spring-based REST APIs:

Using the Keycloak Spring Boot Adapter
Using the Keycloak Spring Security Adapter
Using OpenID Connect (OIDC) + OAuth2
https://medium.com/@bcarunmail/securing-rest-api-using-keycloak-and-spring-oauth2-6ddf3a1efcc2



  • Securing a Cluster


Controlling access to the Kubernetes API

Use Transport Level Security (TLS) for all API traffic
Kubernetes expects that all API communication in the cluster is encrypted by default with TLS, and the majority of installation methods will allow the necessary certificates to be created and distributed to the cluster components.

API Authentication
Choose an authentication mechanism for the API servers to use that matches the common access patterns when you install a cluster.
Larger clusters may wish to integrate an existing OIDC or LDAP server that allows users to be subdivided into groups.
All API clients must be authenticated, even those that are part of the infrastructure like nodes, proxies, the scheduler, and volume plugins. These clients are typically service accounts or use x509 client certificates, and they are created automatically at cluster startup or are set up as part of the cluster installation.

API Authorization
Once authenticated, every API call is also expected to pass an authorization check. Kubernetes ships an integrated Role-Based Access Control (RBAC) component that matches an incoming user or group to a set of permissions bundled into roles.
These permissions combine verbs (get, create, delete) with resources (pods, services, nodes) and can be namespace or cluster scoped.

Controlling access to the Kubelet

Kubelets expose HTTPS endpoints which grant powerful control over the node and containers. By default Kubelets allow unauthenticated access to this API.
Production clusters should enable Kubelet authentication and authorization

Controlling the capabilities of a workload or user at runtime

Limiting resource usage on a cluster
Resource quota limits the number or capacity of resources granted to a namespace
Limit ranges restrict the maximum or minimum size of some of the resources

Controlling what privileges containers run with
Pod security policies can limit which users or service accounts can provide dangerous security context settings

Restricting network access
The network policies for a namespace allow application authors to restrict which pods in other namespaces may access pods and ports within their namespace
When running Kubernetes on a cloud platform limit permissions given to instance credentials, use network policies to restrict pod access to the metadata API, and avoid using provisioning data to deliver secrets
By default, there are no restrictions on which nodes may run a pod


Protecting cluster components from compromise

Restrict access to etcd
Administrators should always use strong credentials from the API servers to their etcd server, such as mutual auth via TLS client certificates, and it is often recommended to isolate the etcd servers behind a firewall that only the API servers may access

Enable audit logging
It is recommended to enable audit logging and archive the audit file on a secure server

Restrict access to alpha or beta features
When in doubt, disable features you do not use

Rotate infrastructure credentials frequently
The shorter the lifetime of a secret or credential the harder it is for an attacker to make use of that credential.
Set short lifetimes on certificates and automate their rotation
Use an authentication provider that can control how long issued tokens are available and use short lifetimes where possible.
If you use service account tokens in external integrations, plan to rotate those tokens frequently.

Review third party integrations before enabling them

Encrypt secrets at rest
In general, the etcd database will contain any information accessible via the Kubernetes API and may grant an attacker significant visibility into the state of your cluster. Always encrypt your backups using a well reviewed backup and encryption solution, and consider using full disk encryption where possible.

https://kubernetes.io/docs/tasks/administer-cluster/securing-a-cluster/



  • 9 Kubernetes Security Best Practices Everyone Must Follow


Upgrade to the Latest Version

Enable Role-Based Access Control (RBAC)
RBAC is usually enabled by default in Kubernetes 1.6 and beyond
Cluster-wide permissions should generally be avoided in favor of namespace-specific permissions. Avoid giving anyone cluster admin privileges, even for debugging — it is much more secure to grant access only as needed on a case-by-case basis.


Use Namespaces to Establish Security Boundaries
Is your team using namespaces effectively? Find out now by checking for any non-default namespaces

Separate Sensitive Workloads
For example, a compromised node’s kubelet credentials can usually access the contents of secrets only if they are mounted into pods scheduled on that node — if important secrets are scheduled onto many nodes throughout the cluster, an adversary will have more opportunities to steal them.
You can achieve this separation using node pools (in the cloud or on-premises) and Kubernetes namespaces, taints, tolerations, and other controls.

Secure Cloud Metadata Access
GKE’s metadata concealment feature changes the cluster deployment mechanism to avoid this exposure, and we recommend using it until it is replaced with a permanent solution.

Create and Define Cluster Network Policies
Network Policies allow you to control network access into and out of your containerized applications

Run a Cluster-wide Pod Security Policy
As a start, you could require that deployments drop the NET_RAW capability to defeat certain classes of network spoofing attacks.

Harden Node Security
There are three steps to improve the security posture on your nodes:
1. Ensure the host is secure and configured correctly. One way to do so is to check your configuration against CIS Benchmarks.
2. Control network access to sensitive ports. Make sure that your network blocks access to ports used by the kubelet, including 10250 and 10255, and consider limiting access to the Kubernetes API server except from trusted networks. Malicious users have abused access to these ports to run cryptocurrency miners in clusters that are not configured to require authentication and authorization on the kubelet API server.
3. Minimize administrative access to Kubernetes nodes.


Turn on Audit Logging
Make sure you have audit logs enabled and are monitoring them for anomalous or unwanted API calls, especially any authorization failures — these log entries will have a status message “Forbidden.” Authorization failures could mean that an attacker is trying to abuse stolen credentials

https://www.cncf.io/blog/2019/01/14/9-kubernetes-security-best-practices-everyone-must-follow
Securing APIs with JWT, SCIM, OpenID and OAUTH
  • RBAC vs. ABAC: Definitions & When to Use

Someone logs into your computer system. What can that person do? If you use RBAC techniques, the answer to that question depends on that person's role.

The main difference between RBAC vs. ABAC is the way each method grants access. RBAC techniques allow you to grant access by roles. ABAC techniques let you determine access by user characteristics, object characteristics, action types, and more

What Is Role-Based Access Control?

A role in RBAC language typically refers to a group of people that share certain characteristics, such as:

    Departments
    Locations
    Seniority levels 
    Work duties


With a role defined, you can assign permissions. Those might involve:

    Access. What can the person see?
    Operations. What can the person read? What can the person write? Can the person create or delete files?
    Sessions. How long can the person stay in the system? When will the login work? When will the login expire?

The National Institute of Standards and Technology defines four subtypes of RBAC:
Flat: All employees have at least one role that defines permissions, but some have more than one.
Hierarchical: Seniority levels define how roles work together. Senior executives have their own permissions, but they also have those attained by their underlings.
Constrained: Separation of duties is added, and several people work on one task together. This helps to ensure security and prevent fraudulent activities.
Symmetrical: Role permissions are reviewed frequently, and permissions change as the result of that review.

These roles build upon one another, and they can be arranged by security level.

    Level 1, Flat: This is the least complex form of RBAC. Employees use roles to gain permissions.
    Level 2, Hierarchical: This builds on the Flat RBAC rules, and it adds role hierarchy.
    Level 3, Constrained: This builds on Hierarchical RBAC, and it adds separation of duties.
    Level 4, Symmetrical: This builds on the Constrained RBAC model, and it adds permission reviews. 
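At its core, flat RBAC is just a role-to-permission lookup. A minimal sketch (all role and permission names here are invented for illustration):

```python
# Hypothetical role-to-permission mapping; an employee's effective access is
# the union of the permissions of every role they hold (flat RBAC).
ROLE_PERMISSIONS = {
    "receptionist": {"schedule:read", "schedule:write"},
    "doctor": {"schedule:read", "records:read", "records:write"},
}

USER_ROLES = {
    "alice": {"receptionist"},
    "bob": {"receptionist", "doctor"},  # holding more than one role is allowed
}

def has_permission(user: str, permission: str) -> bool:
    return any(permission in ROLE_PERMISSIONS[role]
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "schedule:write"))  # True
print(has_permission("alice", "records:read"))    # False: not in her role
```

Granting or revoking access then means editing role membership, not per-user rules.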

What Is Attribute-Based Access Control? 
Someone logs into your computer system. What can that person do? ABAC protocols answer that question via the user, the resource attributes, or the environment

As the administrator of a system using ABAC, you can set permissions by:

    User. A person's job title, typical tasks, or seniority level could determine the work that can be done.
    Resource attributes. The type of file, the person who made it, or the document's sensitivity could determine access.
    Environment. Where the person is accessing the file, the time of day, or the calendar date could all determine access.

In ABAC, elements work together in a coordinated fashion.

    Subjects: Who is trying to do the work?
    Objects: What file within the network is the user trying to work with?
    Operations: What is the person trying to do with said file?

Relationships are defined by if/then statements. For example:

    If the user is in accounting, then the person may access accounting files. 
    If the person is a manager, then that person may read/write files. 
    If the company policy specifies “no Saturday work” and today is Saturday, then no one may access any files today.
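Those if/then relationships can be sketched as predicates over subject, object, and environment attributes. The rules and attribute names below are illustrative, not a real policy engine:

```python
# Deny rules first ("no Saturday work"), then permit rules. Attribute names
# (department, is_manager, weekday) are invented for this sketch.
def is_allowed(subject: dict, obj: dict, env: dict, action: str) -> bool:
    if env.get("weekday") == "Saturday":      # company policy: no Saturday work
        return False
    if action == "read" and subject.get("department") == obj.get("department"):
        return True                           # accounting may read accounting files
    if subject.get("is_manager") and action in ("read", "write"):
        return True                           # managers may read/write
    return False

accountant = {"department": "accounting", "is_manager": False}
ledger = {"department": "accounting"}

print(is_allowed(accountant, ledger, {"weekday": "Tuesday"}, "read"))   # True
print(is_allowed(accountant, ledger, {"weekday": "Saturday"}, "read"))  # False
```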


RBAC vs. ABAC: Pros & Cons

ABAC Cons
Time constraints
Defining variables and configuring your rules is a massive effort, especially at project kickoff. 

RBAC Con
Role explosions
To add granularity to their systems, some administrators add more roles. That can lead to what researchers call “role explosions,” with hundreds or even thousands of roles to manage. 

5 Identity Management Scenarios to Study 

1. Small workgroups. RBAC is best. Defining work by role is simple when the company is small and the files are few.
If you work within a construction company with just 15 employees, an RBAC system should be efficient and easy to set up.

2. Geographically diverse workgroups. ABAC is a good choice. You can define access by employee type, location, and business hours. You could only allow access during business hours for the specific time zone of a branch

3. Time-defined workgroups. ABAC is preferred. Some sensitive documents or systems shouldn't be accessible outside of office hours. An ABAC system allows for time-based rules.

4. Simply structured workgroups. RBAC is best. Your company is large, but access is defined by the jobs people do.
For example, a doctor's office would allow read/write scheduling access to receptionists, but those employees don't need to see medical test results or billing information. An RBAC system works well here.

5. Creative enterprises. ABAC is ideal because creative companies often use their files in unique ways. Sometimes, everyone needs to see certain documents; other times, only a few people do. Access needs change by the document, not by the roles.
The complexity of who should see these documents, and how they are handled, is best accomplished with ABAC.

Often, neither RBAC nor ABAC will be the perfect solution to cover all the use cases you need. That’s why most organizations use a hybrid system, where high-level access is accomplished through RBAC and then fine-grained controls within that structure are accomplished through ABAC.
For example, you might use the RBAC system to hide sensitive servers from new employees. Then, you might use ABAC systems to control how people alter those documents once they do have access.
RBAC offers leak-tight protection of sensitive files, while ABAC allows for dynamic behavior. Blending them combines the strengths of both. 
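A rough sketch of that hybrid layering (all role, resource, and attribute names are invented): RBAC first decides whether the role can reach the resource at all, then ABAC rules refine what can be done with it.

```python
# Coarse RBAC gate: which roles may reach which resource classes at all.
ROLE_ACCESS = {"engineer": {"source_code"}, "hr": {"personnel_files"}}

def can_edit(role: str, resource_class: str, sensitivity: str, hours_ok: bool) -> bool:
    # 1) RBAC: roles without access never see the resource
    if resource_class not in ROLE_ACCESS.get(role, set()):
        return False
    # 2) ABAC on top: fine-grained attribute checks (illustrative rules)
    return sensitivity != "high" and hours_ok

print(can_edit("engineer", "source_code", "low", hours_ok=True))   # True
print(can_edit("hr", "source_code", "low", hours_ok=True))         # False (RBAC gate)
print(can_edit("engineer", "source_code", "high", hours_ok=True))  # False (ABAC rule)
```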



https://www.okta.com/identity-101/role-based-access-control-vs-attribute-based-access-control/
  • RBAC vs. ABAC: What’s the Difference?

In any company, network users must be both authenticated and authorized before they can access parts of the system
The process of gaining authorization is called access control.

Authentication and Authorization

The two fundamental aspects of security are authentication and authorization. After you enter your credentials to log in to your computer or sign in to an app or software, the device or application undertakes authentication to determine your level of authorization. Authorization may include what accounts you can use, what resources you have access to, and what functions you are permitted to carry out.

Role-Based Access Control (RBAC) vs. Attribute-Based Access Control (ABAC)
The primary difference between RBAC and ABAC is RBAC provides access to resources or information based on user roles, while ABAC provides access rights based on user, environment, or resource attributes.
Essentially, when considering RBAC vs. ABAC, RBAC controls broad access across an organization, while ABAC takes a fine-grained approach.

RBAC is role-based, so depending on your role in the organization, you will have different access permissions. This is determined by an administrator, who sets the parameters of what access a role should have, along with which users are assigned which roles. For instance, some users may be assigned to a role where they can write and edit particular files, whereas other users may be in a role restricted to reading files but not editing them.

An organization might use RBAC for projects like this because with RBAC, the policies don’t need to be changed every time a person leaves the organization or changes jobs: they can simply be removed from the role group or allocated to a new role. This also means new employees can be granted access relatively quickly, depending on the organizational role they fulfill.

Essentially, ABAC has a much greater number of possible control variables than RBAC. 
For example, instead of people in the HR role always being able to access employee and payroll information, ABAC can place further limits on their access, such as only allowing it during certain times or for certain branch offices relevant to the employee in question.
This can reduce security issues and can also help with auditing processes later.

RBAC vs. ABAC

Generally, if RBAC will suffice, you should use it before setting up ABAC access control. Both of these access control processes are filters, with ABAC being the more complex of the two, requiring more processing power and time.

In many cases, RBAC and ABAC can be used together hierarchically, with broad access enforced by RBAC protocols and more complex access managed by ABAC.
This means the system would first use RBAC to determine who has access to a resource, followed by ABAC to determine what they can do with the resource and when they can access it.

https://www.dnsstuff.com/rbac-vs-abac-access-control














  • What’s Access Control?

Think of a company’s network and resources as a secure building. The only entry point is protected by a security guard, who verifies the identity of anyone and everyone entering the building. If someone fails to prove their identity, or if they don’t have the necessary rights to enter the building, they are sent away. In this analogy, the security guard is like an access control mechanism, which lays the foundation of a company’s security infrastructure.

What is Role-Based Access Control (RBAC)?

In an RBAC system, people are assigned privileges and permissions based on their “roles.” These roles are defined by an administrator who categorizes people based on their departments, responsibilities, seniority levels, and/or geographical locations.

For example, a chief technology officer may have exclusive access to all the company’s servers. A software engineer may only have access to a small subset of application servers.

E.g., a third-party contractor is assigned the outsider role, which grants them access to a server for x hours. On the other hand, an internal software developer may be allowed indefinite access to the same server.

It’s also possible for one user to be assigned multiple roles. For example, a software architect oversees different teams that are building different projects. They need access to all the files related to all these projects. To this end, the administrator assigns them multiple roles with each giving them access to files from a particular project.

Types of RBAC
The NIST model for role-based access control defines the following RBAC categories:

Flat RBAC: Each employee is assigned at least one role, but some can have more than one. If someone wants access to a new file/resource/server, they need to first obtain a new role.
Hierarchical RBAC: Roles are defined based on seniority levels. In addition to their own privileges, senior employees also possess those of their subordinates.
Constrained RBAC: This model introduces separation of duties (SOD). SOD spreads the authority of performing a task across multiple users, reducing the risk of fraudulent and/or risky activities. E.g., if a developer wants to decommission a server, they need approval from not only their direct manager, but also the head of infrastructure. This gives the infrastructure head a chance to deny risky and/or unnecessary requests.
Symmetric RBAC: All organizational roles are reviewed regularly. As a result of these reviews, privileges may get assigned or revoked, and roles may get added or removed.

What is Attribute-Based Access Control (ABAC)?

In an ABAC environment, when a user logs in, the system grants or rejects access based on different attributes.

User. In ABAC terms, the requesting user is also known as the subject. User attributes can include designation, usual responsibilities, security clearance, department, and/or seniority levels.
For example, let’s say Bob, a payroll analyst, tries to access the HR portal. The system checks their “department,” “designation,” and “responsibilities” attributes to determine that they should be allowed access


Accessed resource. This can include name and type of the resource (which can be a file, server, or application), its creator and owner, and level of sensitivity.
For example, Alice tries to access a shared file which contains the best practices for software development. Since the “sensitivity level” attribute for the file is low, Alice is allowed access to it, even though she doesn’t own it. However, if she tries to access a file from a project she doesn’t work on, the “file owner” and “sensitivity level” attributes will prevent her from doing so.

Action. What is the user trying to do with the resource? Relevant attributes can include “write,” “read,” “copy,” “delete,” “update,” or “all.” 
For example, if Alice only has the “read” attribute set in her profile, for a particular file, she will not be allowed to update the source code written in that file. However, someone with the “all” attribute set can do whatever they want.

Environment. Some of the considered attributes are time of day, the location of the user and the resource, the user device and the device hosting the file.
For example, Alice may be allowed to access a file in a “local” environment, but not when it’s hosted in a “client” environment.

RBAC vs. ABAC: Pros and Cons

RBAC CONS
To establish granular policies, administrators need to keep adding more roles. This can very easily lead to “role explosion,” which requires administrators to manage thousands of organizational roles.

In the event of a role explosion, translating user requirements to roles can be a complicated task.

ABAC CONS
Can be hard to implement, especially in time-constrained situations
Recovering from a bad ABAC implementation can be difficult and time-consuming.
Implementing ABAC often requires more time, resources, and expensive tooling, which add to the overall cost. However, a successful ABAC implementation can be a future-proof, financially viable investment.


When to use RBAC or ABAC?
ABAC is widely considered an evolved form of RBAC

Choose ABAC if you:
Have the time, resources, and budget for a proper ABAC implementation.
Are in a large organization that is constantly growing; ABAC enables scalability.
Have a workforce that is geographically distributed; ABAC can help you add attributes based on location and time zone.
Want as granular and flexible an access control policy as possible.
Want to future-proof your access control policy. The world is evolving, and RBAC is slowly becoming a dated approach; ABAC gives you more control and flexibility over your security controls.

Choose RBAC if you:
Are in a small-to-medium sized organization
Have well-defined groups within your organization, and applying wide, role-based policies makes sense.
Have limited time, resources, and/or budget to implement an access control policy.
Don’t have too many external contributors and don’t expect to onboard a lot of new people.

https://www.onelogin.com/learn/rbac-vs-abac
  • The System for Cross-domain Identity Management (SCIM)
The System for Cross-domain Identity Management (SCIM) specification is designed to make managing user identities in cloud-based applications and services easier. The specification suite seeks to build upon experience with existing schemas and deployments, placing specific emphasis on simplicity of development and integration, while applying existing authentication, authorization, and privacy models. Its intent is to reduce the cost and complexity of user management operations by providing a common user schema and extension model, as well as binding documents to provide patterns for exchanging this schema using standard protocols. In essence: make it fast, cheap, and easy to move users in to, out of, and around the cloud.
http://www.simplecloud.info/

SCIM, or System for Cross-domain Identity Management, is an open standard that allows for the automation of user provisioning.

Why use SCIM?
SCIM communicates user identity data between identity providers (such as companies with multiple individual users) and service providers requiring user identity information (such as enterprise SaaS apps).
With SCIM, user identities can be created either directly in a tool like Okta, or imported from external systems like HR software or Active Directory.
Employees outside of IT can take advantage of single sign-on (SSO) to streamline their own workflows and reduce the need to pester IT for password resets by up to 50%.

How it works
SCIM is a REST- and JSON-based protocol that defines a client and server role. A client is usually an identity provider (IdP), like Okta, that contains a robust directory of user identities. A service provider (SP) is usually a SaaS app, like Box or Slack, that needs a subset of information from those identities. When changes to identities are made in the IdP, including create, update, and delete, they are automatically synced to the SP according to the SCIM protocol. The IdP can also read identities from the SP to add to its directory and to detect incorrect values in the SP that could create security vulnerabilities. For end users, this means that they have seamless access to the applications they’re assigned, with up-to-date profiles and permissions.

https://www.okta.com/blog/2017/01/what-is-scim/
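To make the wire format concrete, here is roughly what a create-user request body looks like. The attribute subset and schema URN follow the SCIM 2.0 core User schema; the target URL mentioned in the comment is made up:

```python
import json

# Minimal SCIM 2.0 User resource, roughly what an IdP would POST to the
# SP's /scim/v2/Users endpoint (e.g. https://sp.example.com/scim/v2/Users).
new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "alice@example.com",
    "name": {"givenName": "Alice", "familyName": "Smith"},
    "emails": [{"value": "alice@example.com", "primary": True}],
    "active": True,
}

body = json.dumps(new_user, indent=2)
print(body)

# Deprovisioning later is typically a PATCH that sets "active" to false,
# which the SP interprets as deactivating the account.
```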
  • Security Assertion Markup Language (SAML)
Using Security Assertion Markup Language (SAML) web browser single sign-on (SSO), administrators can use an identity provider to manage the identities of their users and the applications they use.

Supported SAML services
We offer limited support for all identity providers that implement the SAML 2.0 standard. We officially support these identity providers that have been internally tested:

Active Directory Federation Services (AD FS)
Azure Active Directory (Azure AD)
Okta
OneLogin
PingOne
Shibboleth

https://help.github.com/en/articles/about-identity-and-access-management-with-saml-single-sign-on
  • Security Assertion Markup Language (SAML) is an open standard that allows identity providers (IdP) to pass authorization credentials to service providers (SP).
you can use one set of credentials to log into many different websites
SAML is the link between the authentication of a user’s identity and the authorization to use a service.
SAML enables Single-Sign On (SSO), a term that means users can log in once, and those same credentials can be reused to log into other service providers.

What is SAML Used For?
SAML simplifies federated authentication and authorization processes for users, Identity providers, and service providers.
SAML provides a solution to allow your identity provider and service providers to exist separately from each other, which centralizes user management and provides access to SaaS solutions.

SAML authentication is the process of verifying the user’s identity and credentials (password, two-factor authentication, etc.). SAML authorization tells the service provider what access to grant the authenticated user.

SAML vs. OAuth
OAuth is a slightly newer standard that was co-developed by Google and Twitter to enable streamlined internet logins. OAuth uses a similar methodology as SAML to share login information. SAML provides more control to enterprises to keep their SSO logins more secure, whereas OAuth is better on mobile and uses JSON.

Facebook and Google are two OAuth providers that you might use to log into other internet sites.

What is a SAML Assertion?
A SAML Assertion is the XML document that the identity provider sends to the service provider that contains the user authorization. There are three different types of SAML Assertions – authentication, attribute, and authorization decision.

Authentication assertions prove identification of the user and provide the time the user logged in and what method of authentication they used (I.e., Kerberos, 2 factor, etc.)
The attribution assertion passes the SAML attributes to the service provider – SAML attributes are specific pieces of data that provide information about the user.
An authorization decision assertion says if the user is authorized to use the service or if the identity provider denied their request due to a password failure or lack of rights to the service.
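A heavily trimmed illustration of an attribute assertion, parsed with the Python standard library (a real assertion is digitally signed and carries conditions, audience restrictions, etc.; the subject and attribute values are invented):

```python
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:2.0:assertion"  # the SAML 2.0 assertion namespace

# Heavily trimmed example; a real assertion is signed and much larger.
assertion_xml = f"""
<Assertion xmlns="{NS}">
  <Subject><NameID>alice@example.com</NameID></Subject>
  <AttributeStatement>
    <Attribute Name="department">
      <AttributeValue>Engineering</AttributeValue>
    </Attribute>
  </AttributeStatement>
</Assertion>"""

root = ET.fromstring(assertion_xml)
name_id = root.find(f"{{{NS}}}Subject/{{{NS}}}NameID").text
department = root.find(
    f"{{{NS}}}AttributeStatement/{{{NS}}}Attribute/{{{NS}}}AttributeValue"
).text
print(name_id, department)
```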

https://www.varonis.com/blog/what-is-saml/
SAML – A go-to tool for Enterprise – Cloud Applications Security
Enabling SAML 2.0 Federated Users to Access the AWS Management Console

  • OpenID Connect vs. SAML 2.0, vs. OAuth 2.0

The gradual integration of applications and services external to an organization’s domain motivated both the creation and adoption of federated identity services.
Single sign-on (SSO), a forerunner to identity federation, was an effective solution for managing a single set of user credentials for resources that existed within a single domain.
Federated identities offered organizations the opportunity to preserve the benefits of SSO while extending the reach of a user’s credentials to include external resources, which reduces costs and, when implemented correctly, can increase security.

OpenID Connect
Users can authenticate with numerous online services using a preferred third-party OpenID Connect identity provider (IdP), such as Google, Microsoft, or PayPal.
OpenID Connect is built on top of the OAuth 2.0 protocol and employs REST/JSON for messaging.

For developers, OpenID Connect allows authenticating users without creating and maintaining a local authentication system. Instead, developers can rely on the expertise of an organization committed to the secure implementation of an identity protocol, capable of ensuring, to the best of its abilities, that the identities it manages are authentic.

The OpenID Connect specification defines three roles:

The end user or the entity that is looking to verify its identity
The relying party (RP), which is the entity looking to verify the identity of the end user
The OpenID Connect provider (OP), which is the entity that registers the OpenID URL and can verify the end user’s identity
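Because the OP hands the RP an ID token in JWT format, its three base64url segments can be unpacked with the standard library alone. This sketch hand-builds a token (issuer, subject, and audience values are invented) and skips signature verification, which a real relying party must perform:

```python
import base64, json

def b64url(data: bytes) -> str:
    # JWT segments use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_segment(segment: str) -> dict:
    # restore the stripped padding before decoding
    return json.loads(base64.urlsafe_b64decode(segment + "=" * (-len(segment) % 4)))

# Hand-built ID token: header.payload.signature. A real relying party MUST
# verify the signature against the provider's published keys (skipped here).
token = ".".join([
    b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode()),
    b64url(json.dumps({"iss": "https://op.example.com", "sub": "24400320",
                       "aud": "my-client-id"}).encode()),
    "fake-signature",
])

header_seg, claims_seg, _sig = token.split(".")
claims = decode_segment(claims_seg)
print(decode_segment(header_seg)["alg"], claims["iss"], claims["sub"])
```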


SAML v2.0
Security Assertion Markup Language (SAML) is an XML-based open standard for exchanging authentication and authorization data between parties.

The SAML specification defines three roles,

The principal, which is typically the user looking to verify his or her identity
The identity provider (IdP), which is the entity that is capable of verifying the identity of the end user
The service provider (SP), which is the entity looking to use the identity provider to verify the identity of the end user

OAuth 2.0
OAuth 2.0 is an open standard focusing exclusively on authorization, differentiating itself from OpenID and SAML, which were created for the purposes of authentication. (The original OAuth effort began in 2006; OAuth 2.0 was published as RFC 6749 in 2012.)

The OAuth 2.0 specifications define the following roles,

The end user, or the entity that owns the resource in question (the resource owner)
The resource server (OAuth Provider), which is the entity hosting the resource
The client (OAuth Consumer), which is the entity that is looking to consume the resource after getting authorization from the resource owner

https://www.softwaresecured.com/differentiating-federated-identities-openid-connect-saml-v2-0-and-oauth-2-0/

Radiant Logic: What is a Federated Identity Service?
Logical layer that federates identity resources, acting as a hub for your identities regardless of where and how your identity is stored.
Federated Identity Service:
1- Authentication and SSO
2- Authorization
SSO: global view of all your identities without overlapping users
(one user may have multiple identifiers, or two different users may share the same identifier)
You send tokens, not passwords, across the firewall.
Easy migration from on-premises to cloud, or cloud to on-premises.
Easy implementation for company mergers.


Identity, Authentication + OAuth = OpenID Connect

  • Demystifying OAuth 2.0 and OpenId Connect (and SAML)

OAuth 2.0 and OpenId Connect
OAuth 2.0 is a set of defined process flows for “delegated authorization”.
OpenID Connect is a set of defined process flows for “federated authentication”. OpenID Connect flows are built using the OAuth 2.0 process flows as the base, then adding a few additional steps on top to allow for “federated authentication”.

Delegated Authorization
Let’s say Joe owns certain resources (e.g., Joe’s contact list) that are hosted on some server (e.g., the google.contacts server). Now, Joe wants an application that he is using (e.g., Yelp) to be able to access his resources (i.e., his contact list) on the google.contacts server and import them into the Yelp app. Joe needs some mechanism by which he can “authorize” the Yelp app to access his contacts on the google.contacts server.

OAuth 2.0 Terminology
In the above example, Joe is considered the “Resource Owner”, since Joe owns the resource (Joe’s contact list). The server on which the resource resides (the google.contacts server) is called the “Resource Server”. The Yelp app that is trying to access the resources on the resource server is called the “Client”.

Joe (the resource owner) is “delegating” the responsibility to “authorize” access to his resources (Joe’s contact list) hosted on the “resource server” (the google.contacts server) to the “authorization server” (the accounts.google.com server).
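In protocol terms, the first leg of that delegation is the client redirecting Joe's browser to the authorization server. The parameter names below come from the OAuth 2.0 authorization-code flow; the endpoint, client ID, and scope values are invented:

```python
from urllib.parse import urlencode

# Authorization-code flow, step 1: send the resource owner (Joe) to the
# authorization server. Endpoint/client_id/scope values are illustrative.
authorize_url = "https://accounts.google.example/o/oauth2/auth?" + urlencode({
    "response_type": "code",                  # ask for an authorization code
    "client_id": "yelp-client-id",
    "redirect_uri": "https://yelp.example/callback",
    "scope": "contacts.read",
    "state": "af0ifjsldkj",                   # CSRF-protection value
})
print(authorize_url)

# Step 2 (not shown): after Joe approves, the client exchanges the returned
# ?code=... at the token endpoint for an access token, which it then presents
# to the resource server (google.contacts) to fetch the contact list.
```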

Federated Authentication
Federated authentication is the ability to log in to an app (e.g., Spotify or Yelp) using your Facebook login.
In this case Spotify or Yelp “federates” the ability to identify the user to Facebook.
Federated authentication allows you to log in to a site using your Facebook or Google account.
Delegated authorization is the ability of an external app to access resources. An example of this use case would be Spotify trying to access your Google contacts list to import it into Spotify.

OpenId Connect vs. SAML
There are two popular industry standards for Federated Authentication. SAML (or Security Assertion Markup Language) flow, and OpenId Connect
OpenId Connect is built on the process flows of OAuth 2.0 and typically uses JWT (JSON Web token) format for the id-token.
SAML flow is independent of OAuth 2.0, and relies on the exchange of messages for authentication in XML SAML format (instead of JWT format).
Both flows allow for SSO (Single Sign On), i.e. the ability to log into a website using your login credentials from a different site (eg. facebook login or google login).

OpenID Connect is newer and built on the OAuth 2.0 process flow. It is tried and tested, and typically used in consumer websites, web apps, and mobile apps.
SAML is its older cousin, and typically used in enterprise settings, e.g., allowing single sign-on to multiple applications within an enterprise using your Active Directory login.

https://hackernoon.com/demystifying-oauth-2-0-and-openid-connect-and-saml-12aa4cf9fdba
  • Multi-factor authentication (MFA) is a method of computer access control in which a user is granted access only after successfully presenting several separate pieces of evidence to an authentication mechanism – typically at least two of the following categories: knowledge (something they know), possession (something they have), and inherence (something they are)
https://en.wikipedia.org/wiki/Multi-factor_authentication

How To: Cisco & F5 Deployment Guide: ISE Load Balancing Using BIG-IP
Setting up Radius Server Wireless Authentication in Windows Server 2012 R2
Setup radius server 2008 r2 for wireless(WPA&WPA2-Enterprise)
WiFi with WSSO using Windows NPS and FortiGate Groups
RADIUS Bridge provides Two-Factor authentication with all OpenOTP One-Time Password methods
Integrate your existing NPS infrastructure with Azure Multi-Factor Authentication
Site-to-site IPsec VPN with overlapping subnets
Security Partner Ecosystem
Dynamic Security for AWS
  • Two-factor authentication (also known as 2FA) is a method of confirming a user's claimed identity by utilizing a combination of two different components. Two-factor authentication is a type of multi-factor authentication.
A good example from everyday life is the withdrawing of money from a cash machine; only the correct combination of a bank card (something that the user possesses) and a PIN (personal identification number, something that the user knows) allows the transaction to be carried out.
https://en.wikipedia.org/wiki/Multi-factor_authentication

  • All 2FA systems are based on two of three possible factors: a knowledge factor (something the user knows, like a password), a possession factor (something the user has, like a token; more on that below), and an inherence factor (something the user is, such as a fingerprint). In this scenario, even if a malicious party obtains a person's password, he or she would not be able to provide the relevant second element needed to complete the authentication process. This lowers risk and the potential for unscrupulous behavior, as a compromised password alone is not enough to compromise the authentication system.

In the enterprise, two-factor web authentication systems rely on hardware-based security tokens that generate passcodes; these passcodes or PINs are valid for about 60 seconds and must be entered along with a password. In a consumer-oriented web-based environment, it's cost-prohibitive for a service provider to distribute physical tokens to each and every individual user.

https://searchsecurity.techtarget.com/tip/Intro-to-two-factor-authentication-in-Web-authentication-scenarios
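The short-lived passcodes described above are commonly generated with TOTP (RFC 6238): an HMAC over the current time window, truncated to 6-8 digits. A compact standard-library sketch, using the RFC's published SHA-1 test secret rather than a production key:

```python
import hashlib, hmac, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter (RFC 4226)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, t=None, step: int = 30) -> str:
    # TOTP is HOTP applied to the current time window (RFC 6238)
    t = int(time.time()) if t is None else t
    return hotp(secret, t // step)

# RFC 6238's SHA-1 test secret; at t=59 the 6-digit code is "287082"
print(totp(b"12345678901234567890", t=59))
```

The server holds the same shared secret and accepts codes from the current (and usually adjacent) time windows, which is why a stolen passcode expires within a minute.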

  • Loosely defined, an authentication factor is a category of methods for verifying a user’s identity when requesting access to a system. Simply put, it makes sure that they are who they say they are.

But because usernames and passwords fall under the same factor, when combined they form what is known as single-factor authentication (SFA). Overall, authentication factors can generally be divided into three categories: knowledge, possession, and inherence factors.
A knowledge authentication factor includes information only a user should know (e.g., username, password)
A possession authentication factor includes credentials retrieved from a user’s physical possession, usually in the form of a hardware device (e.g., security token, software token on a mobile phone)
An inherence authentication factor includes a user’s identifiable biometric characteristics (e.g., fingerprint, voice, iris scan)
https://www.cloudbric.com/blog/2017/07/two-factor-vs-multi-factor-authentication/

  • Universal 2nd Factor (U2F) is an open authentication standard that strengthens and simplifies two-factor authentication (2FA) using specialized Universal Serial Bus (USB) or near-field communication (NFC) devices based on similar security technology found in smart cards

https://en.wikipedia.org/wiki/Universal_2nd_Factor

The use of 2FA was supposed to remedy that, but the approach turns out to have its own vulnerabilities, which hackers have exploited, and getting around those challenges can be expensive. Fortunately, there's another alternative to 2-factor authentication: Universal 2nd Factor, or U2F, is an open standard approach that might be just what application developers need.


2FA implementations have their own limitations. For example, when your 2-factor authentication relies on a verification code sent through an SMS text message, crooks can break it by using social engineering or compromising the user's phone. Using smartphones as a second authentication factor also presents other risks, such as the need to protect the application logic from malware.

Another challenge for implementing two-factor authentication is the price tag. Companies such as Google or Facebook that have billions of users can't implement expensive or complex solutions, such as providing every user with a unique token or smart card for its service, because this would be too expensive.

The U2F open authentication standard, created by Google and Yubico, lets users securely and instantly access multiple online services using a single device, without requiring the use of special device drivers or client software. With U2F, you can use one token to authenticate to many services.
https://techbeacon.com/security/beyond-two-factor-how-use-u2f-improve-app-security
 
 
  • At Yubico, we are often asked why we are so dedicated to bringing the FIDO U2F open authentication standard  to life when our YubiKeys already support the OATH OTP standard.


As good as it is, traditional OTP has limitations.

    Users need  to type codes during their login process.
    Manufacturers often possess the seed value of the tokens.
    Administrative overhead resulting from having to set up and provision devices for users.
    The technology requires the storage of secrets on servers, providing a single point of attack.

Yubico’s OTP implementation solves some of those issues.

    The user never has to type a code; instead, they just touch a button.
    Enterprises can configure their own encryption secrets on a YubiKey, which means no one else ever sees those secrets.
    OTPs generated by a YubiKey are significantly longer than those requiring user input (32 characters vs 6 or 8 characters), which means a higher level of security.  
    YubiKeys allow enrollment by the user, which reduces administrative overhead.
    It is easy to implement with any existing website with no client software needed.
    For the OATH standard, Yubico uniquely offers a token prefix that can be used for identity, simplifying enrollment and user experience.

https://www.yubico.com/2016/02/otp-vs-u2f-strong-to-stronger/
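The one-time-password mechanism that both OATH OTP and the YubiKey build on can be sketched with the JDK alone. Below is a minimal HOTP implementation following RFC 4226 (HMAC-SHA1 plus dynamic truncation); the secret used in `main` is the RFC's own published test key, and the class and method names are my own illustrative choices:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

public class Hotp {
    // Generate an RFC 4226 HOTP code from a shared secret and a moving counter.
    static int hotp(byte[] secret, long counter, int digits) {
        try {
            Mac mac = Mac.getInstance("HmacSHA1");
            mac.init(new SecretKeySpec(secret, "HmacSHA1"));
            // The counter is hashed as an 8-byte big-endian value.
            byte[] hash = mac.doFinal(ByteBuffer.allocate(8).putLong(counter).array());
            // Dynamic truncation: the low nibble of the last byte picks an offset.
            int offset = hash[hash.length - 1] & 0x0F;
            int binary = ((hash[offset] & 0x7F) << 24)
                       | ((hash[offset + 1] & 0xFF) << 16)
                       | ((hash[offset + 2] & 0xFF) << 8)
                       |  (hash[offset + 3] & 0xFF);
            return binary % (int) Math.pow(10, digits);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        byte[] secret = "12345678901234567890".getBytes(); // RFC 4226 test key
        // RFC 4226 Appendix D lists 755224 as the code for counter 0.
        System.out.printf("%06d%n", hotp(secret, 0, 6));
    }
}
```

The seed-on-server limitation listed above is visible here: whoever holds `secret` can generate valid codes.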
  • OAuth allows notifying a resource provider (e.g. Facebook) that the resource owner (e.g. you) grants permission to a third-party (e.g. a Facebook Application) access to their information (e.g. the list of your friends).

Say you have an existing GMail account. You decide to join LinkedIn. Adding all of your many, many friends manually is tiresome and error-prone.

Without an API for exchanging this list of contacts, you would have to give LinkedIn the username and password to your GMail account, thereby giving them too much power.

This is where OAuth comes in. If your GMail supports the OAuth protocol, then LinkedIn can ask you to authorize them to access your GMail list of contacts.

Consider a scenario where a website (Stack Overflow) needs to add a "Log in with Facebook" feature. Here Facebook is the OAuth provider and Stack Overflow is the OAuth client.
This is the best part: the user (resource owner) never gives their Facebook credentials to Stack Overflow.
Only after the user grants permission does Facebook give an access token to Stack Overflow. Stack Overflow then uses that access token to retrieve the owner's information without ever seeing the password.
http://stackoverflow.com/questions/4201431/what-exactly-is-oauth-open-authorization
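The handshake described above ends with the client exchanging a one-time authorization code for an access token. Here is a sketch of building that token request with only the JDK's HTTP client; the endpoint, client id, client secret, and redirect URI are made-up placeholders, not real provider values:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.net.http.HttpRequest.BodyPublishers;

public class OAuthCodeExchange {
    // Build the POST an OAuth client sends to trade an authorization code
    // for an access token. All endpoint/credential values are illustrative.
    static HttpRequest tokenRequest(String code) {
        String form = "grant_type=authorization_code"
                    + "&code=" + code
                    + "&redirect_uri=https://client.example.com/callback"
                    + "&client_id=my-client-id"
                    + "&client_secret=my-client-secret";
        return HttpRequest.newBuilder()
                .uri(URI.create("https://provider.example.com/oauth2/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(BodyPublishers.ofString(form))
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = tokenRequest("abc123");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

Note that the client secret travels only between the client and the provider; the user's password never appears in this exchange at all.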


  • OAuth is an open standard for authorization, commonly used as a way for Internet users to authorize websites or applications to access their information on other websites, but without giving them the passwords. This mechanism is used by companies such as Google, Facebook, Microsoft and Twitter to permit users to share information about their accounts with third-party applications or websites.
https://en.wikipedia.org/wiki/OAuth

  • OAuth 2.0 is the industry-standard protocol for authorization. OAuth 2.0 supersedes the work done on the original OAuth protocol created in 2006. OAuth 2.0 focuses on client developer simplicity while providing specific authorization flows for web applications, desktop applications, mobile phones, and living room devices.
https://oauth.net/2/

  • Microsoft Azure Multi-Factor Authentication (MFA) is an authentication service that requires users to verify their sign-in attempts by using a mobile app, phone call, or text message. It is available to use with Microsoft Azure Active Directory, and as a service for cloud and on-premises enterprise applications. For the PAM scenario, Azure MFA provides an additional authentication mechanism. Azure MFA can be used for authorization, regardless of how a user authenticated to the Windows PRIV domain.
https://docs.microsoft.com/en-us/microsoft-identity-manager/pam/use-azure-mfa-for-activation


  • What is Azure Multi-Factor Authentication?

Two-step verification is a method of authentication that requires more than one verification method and adds a critical second layer of security to user sign-ins and transactions. It works by requiring any two or more of the following verification methods:

    Something you know (typically a password)
    Something you have (a trusted device that is not easily duplicated, like a phone)
    Something you are (biometrics)

https://docs.microsoft.com/en-us/azure/multi-factor-authentication/multi-factor-authentication


How to Achieve Agile API Security

Four Pillars of API Security
Ensuring the Security of Your APIs
Building microservices, part 3. Secure API's with OAuth 2.0

API Security from the DevOps and CSO Perspectives (Webcast)
  • Authentication and Authorization: OpenID vs OAuth2 vs SAML 

Ever wondered what’s going on when you click that ubiquitous “Sign in with Google/Facebook” button?

Authorization & Authentication Basics
Our project, a single-page application, will be a public-facing website.
We want to restrict access to registered users only.
We also want to tailor each user’s experience, and the amount and type of data that they can view, to their individual roles and access levels.
Authentication means verifying that someone is indeed who they claim to be.
Authorization means deciding which resources a certain user should be able to access, and what they should be allowed to do with those resources.

With sites like Facebook or Google, a user can log in to one application with a set of credentials. This same set of credentials can then be used to log in to related websites or applications (like websites that ask you, “Sign up with Facebook or Google account?”).

a business may have an internal-facing employee portal with links to intranet sites regarding timesheets, health insurance, or company news. Rather than requiring an employee to log in at each website, a better solution would be to have the employee log in at a portal, and have that portal automatically authenticate the user with the other intranet sites. This idea, called single sign-on (SSO), allows a user to enter one username and password in order to access multiple applications.

For users, linked identities mean they have to manage only one username and password for the related websites. The user experience is better, as they avoid multiple logins, and a user’s single set of credentials is stored in one database rather than scattered across multiple databases.
Developers of the various applications don’t have to store passwords. Instead, they can accept proof of identity or authorization from a trusted source.

There are multiple solutions for implementing SSO. The three most common web security protocols (at the time of this writing) are OpenID, OAuth, and SAML

OpenID
OpenID is an open standard for authentication
A user must obtain an OpenID account through an OpenID identity provider (for example, Google). The user will then use that account to sign into any website (the relying party) that accepts OpenID authentication (think YouTube or another site that accepts a Google account as a login). 

OpenID is technically a URL that a user owns (e.g. alice2016.openid.com), so some websites offer the option to manually enter an OpenID.
The latest version of OpenID is OpenID Connect, which combines OpenID authentication and OAuth2 authorization

OAuth2
OAuth2 is an open standard for authorization.
OAuth2 is also the basis for OpenID Connect, which provides OpenID (authentication) on top of OAuth2 (authorization) for a more complete security solution.

An OAuth2 use case might look like this: Alice signs up for a new account at NewApp and is offered the option to see which of her friends already use NewApp so she can connect with them. There’s a button labeled “import contacts from Facebook.” Alice clicks that button, and she is redirected to Facebook to log in. Alice successfully logs in and is asked if she wants to share her Facebook friend list with NewApp. She clicks yes, and is forwarded back to NewApp along with a token. NewApp now has permission (with the token) to access Alice’s friend list, without her sharing her credentials directly with NewApp. This eliminates the risk of NewApp logging into Facebook on Alice’s behalf and doing things she wouldn’t want (posting status updates, changing her password, etc.).

SAML
It’s an open standard that provides both authentication and authorization.

Security Risks
OAuth2
Phishing
OAuth 2.0 does not support signature, encryption, channel binding, or client verification.  Instead, it relies completely on TLS for confidentiality.
OpenId
Phishing
Identity providers have a log of OpenID logins, making a compromised account a bigger privacy breach
SAML
XML Signature Wrapping to impersonate any user

https://spin.atomicobject.com/2016/05/30/openid-oauth-saml/

  • Microservices are the new application platform for cloud development. Microservices are deployed and managed independently, and once implemented inside containers they have very little interaction with the underlying OS.

What are the features of Microservices?

    Decoupling – Services within a system are largely decoupled. So the application as a whole can be easily built, altered, and scaled
    Componentization – Microservices are treated as independent components that can be easily replaced and upgraded
    Business Capabilities – Microservices are very simple and focus on a single capability
    Autonomy – Developers and teams can work independently of each other, thus increasing speed
    Continuous Delivery – Allows frequent releases of software, through systematic automation of software creation, testing, and approval
    Responsibility – Microservices do not focus on applications as projects. Instead, they treat applications as products for which they are responsible
    Decentralized Governance – The focus is on using the right tool for the right job. That means there is no standardized pattern or any technology pattern. Developers have the freedom to choose the best useful tools to solve their problems
    Agility – Microservices support agile development. Any new feature can be quickly developed and discarded again
What are the best practices to design Microservices?

Separate datastore for each microservice
Separate build for each microservice
Treat servers as stateless
Deploy into containers

How does Microservice Architecture work?

    Clients – Different users from various devices send requests.
    Identity Providers – Authenticates user or clients identities and issues security tokens.
    API Gateway – Handles client requests.
    Static Content – Houses all the content of the system.
    Management –  Balances services on nodes and identifies failures.
    Service Discovery – A guide to find the route of communication between microservices.
    Content Delivery Networks – Distributed network of proxy servers and their data centers.
    Remote Service – Enables remote access to information that resides on a network of IT devices.
What are the pros and cons of Microservice Architecture?

Cons of Microservice Architecture
Increases troubleshooting challenges
Increases delay due to remote calls
Increased efforts for configuration and other operations
Difficult to maintain transaction safety
Tough to track data across various boundaries
Difficult to code between services

What is the difference between Monolithic, SOA and Microservices Architecture?


    Monolithic Architecture is similar to a big container wherein all the software components of an application are assembled together and tightly packaged.
    A Service-Oriented Architecture is a collection of services which communicate with each other. The communication can involve either simple data passing or it could involve two or more services coordinating some activity.
    Microservice Architecture is an architectural style that structures an application as a collection of small autonomous services, modeled around a business domain.
What is REST/RESTful and what are its uses?

Microservices can be implemented with or without RESTful APIs, but it’s always easier to build loosely coupled microservices using RESTful APIs. 

What is OAuth?

OAuth stands for Open Authorization. It is a protocol that allows client applications to access a resource owner’s resources on HTTP services such as Facebook, GitHub, etc. With it, you can share resources stored on one site with another site without handing over your credentials.

What is End to End Microservices Testing?
End-to-end testing validates that each and every process in the workflow functions properly. This ensures that the system works together as a whole and satisfies all requirements.

What is the use of Container in Microservices?
You can encapsulate your microservice in a container image along with its dependencies, which can then be used to spin up on-demand instances of the microservice without any additional effort.

What is the role of Web, RESTful APIs in Microservices?
Each microservice must have an interface, which makes the web API a very important enabler of microservices. Being based on the open networking principles of the Web, RESTful APIs provide the most logical model for building interfaces between the various components of a microservice architecture.

https://www.edureka.co/blog/interview-questions/microservices-interview-questions/ 


  • How to implement security for microservices

Microservices Security can be implemented using either OAuth2 or JWT.

    Develop a Spring Boot application that exposes a simple REST GET API with mapping /hello.
    Configure Spring Security for JWT. Expose a REST POST API with mapping /authenticate from which the user obtains a valid JSON Web Token, and then allow access to the API /hello only with a valid token.
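Spring Security handles the wiring via filters, but the core of the /authenticate and /hello steps is just minting and then verifying an HMAC-signed token. A minimal JDK-only sketch of that core logic (the secret, claims, and class names are illustrative; a real service would use a JWT library and include an expiry claim):

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TokenService {
    private static final byte[] SECRET = "change-me-demo-secret".getBytes(StandardCharsets.UTF_8);
    private static final Base64.Encoder B64 = Base64.getUrlEncoder().withoutPadding();

    // What /authenticate would return: a signed header.payload.signature token.
    static String issue(String subject) {
        String header  = B64.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = B64.encodeToString(("{\"sub\":\"" + subject + "\"}").getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + sign(header + "." + payload);
    }

    // What the /hello filter would do: recompute and compare the signature.
    static boolean verify(String token) {
        String[] parts = token.split("\\.");
        return parts.length == 3 && sign(parts[0] + "." + parts[1]).equals(parts[2]);
    }

    private static String sign(String data) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(SECRET, "HmacSHA256"));
            return B64.encodeToString(mac.doFinal(data.getBytes(StandardCharsets.UTF_8)));
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public static void main(String[] args) {
        String token = issue("alice");
        System.out.println(verify(token));       // an untampered token passes
        System.out.println(verify(token + "x")); // a tampered token fails the signature check
    }
}
```

Because validity is checked by recomputing the signature, any instance of the service holding the secret can verify the token without a session store.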

How to implement distributed logging for microservices

    Spring Cloud Sleuth is used to generate and attach the trace id, span id to the logs so that these can then be used by tools like Zipkin and ELK for storage and analysis
    Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in service architectures. Features include both the collection and lookup of this data.

What is HashiCorp Vault? How do we use it with microservices?

A microservices architecture has multiple services which interact with each other and with external resources like databases. The services also need usernames and passwords to access these resources.
Usually these credentials are stored in config properties, so each microservice has its own copy of the credentials.
If any credential changes, we need to update the configuration in every microservice.

Hashicorp Vault is a platform to secure, store, and tightly control access to tokens, passwords, certificates, encryption keys for protecting sensitive data and other secrets in a dynamic infrastructure.
Using Vault, we instead retrieve the credentials from Vault's key/value store.
https://www.javainuse.com/spring/microservices-interview-quesions
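Reading a secret from Vault's KV version 2 store is a plain HTTP GET against the `/v1/secret/data/<path>` endpoint, with the client's token in the `X-Vault-Token` header. A sketch of building that request with the JDK HTTP client (the address, secret path, and token value below are placeholders):

```java
import java.net.URI;
import java.net.http.HttpRequest;

public class VaultRead {
    // Build the KV v2 read request a microservice would send to Vault.
    // vaultAddr, path, and token are supplied by the deployment, not hardcoded.
    static HttpRequest readSecret(String vaultAddr, String path, String token) {
        return HttpRequest.newBuilder()
                .uri(URI.create(vaultAddr + "/v1/secret/data/" + path))
                .header("X-Vault-Token", token)
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = readSecret("http://127.0.0.1:8200", "orders-db", "s.example-token");
        System.out.println(req.method() + " " + req.uri());
    }
}
```

In practice a Spring application would use Spring Cloud Vault rather than raw HTTP, but the underlying call looks like this.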


  • Understanding the need for JSON Web Token(JWT)

    JWT stands for JSON Web Token
    Its suggested pronunciation is "jot", the same as the English word
    It is an open standard (RFC 7519)
    JWT makes it possible to communicate securely between two parties
    JWT is used for Authorization

In order to understand authorization, we will take the example of a user's interaction with Gmail. Consider a scenario where a user wants to access his Gmail inbox page. This involves interaction with the Gmail server: the user sends HTTP requests to the Gmail server and expects responses back from it.

The steps will be as follows-

    The user sends an HTTP request to the Gmail server with URL /login. Along with this request, the user also sends the username and password for authentication.
    The Gmail server authenticates this request; if it is successful, it returns the Gmail inbox page as the response to the user.
    Now suppose the user wants to access his sent mail page, so he again sends a request to the Gmail server, with URL /sentmails. This time he does not send the username and password, since he has already authenticated himself in the first request.
    The user expects Gmail to return the sent mail page. However, this will not be the case: the Gmail server will not return the sent mail page, but will instead fail to recognize the user.

The reason for this is that HTTP is a stateless protocol. As the name suggests, no state is maintained by HTTP, so to the Gmail server each request comes from a new user; it cannot distinguish between new and returning users. One solution could be to pass the username and password along with each request, but that is not a good solution.

Once a user has been authenticated, for all subsequent requests the user should be authorized to perform the allowed operations.

Authorization can be implemented using

    Session Management
    JSON Web Token

Drawbacks of Session Management for Authorization

    A session ID is not self-contained; it is a reference token. During each validation the Gmail server needs to fetch the information corresponding to it.
    It is not suitable for a microservices architecture involving multiple APIs and servers.

Using JWT for Authorization
We make use of JWT for authorization so that the server knows the user is already authenticated, and the user does not need to send credentials with each and every request. 

Advantages of JWT Authorization

    JWT is self-contained; it is a value token. So during each validation the Gmail server does not need to fetch the information corresponding to it.
    It is digitally signed, so if anyone modifies it the server will know about it.
    It is most suitable for a Microservices Architecture.
    It has other advantages, like the ability to specify an expiration time.


https://www.javainuse.com/webseries/spring-security-jwt/chap1


  • Understand JSON Web Token(JWT) Structure

A JWT consists of 3 parts - 

    Header
    Payload
    Signature

The three parts of a JWT are separated by a dot. Also, all the information in the three parts is in Base64url-encoded format. 

An important point to remember about JWT is that the information in the payload is visible to everyone who holds the token: it is encoded, not encrypted. A "man in the middle" who intercepts the token can read its contents (tampering, on the other hand, is detected via the signature). So we should not pass any sensitive information like passwords in the payload. We can encrypt the payload data if we want to make it more secure.

https://www.javainuse.com/webseries/spring-security-jwt/chap2
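Because the three parts are just Base64url-encoded text, anyone holding a token can read the header and payload without any key. A small JDK sketch demonstrating this (the token is constructed here purely for illustration, with a fake signature and made-up claims):

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtParts {
    // Construct a sample token purely for illustration; the signature is fake.
    static String sampleToken() {
        Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
        String header  = enc.encodeToString("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = enc.encodeToString("{\"sub\":\"alice\",\"admin\":true}".getBytes(StandardCharsets.UTF_8));
        return header + "." + payload + "." + "fake-signature";
    }

    // Decode part i (0 = header, 1 = payload) -- no key or secret is needed.
    static String decodePart(String token, int i) {
        byte[] raw = Base64.getUrlDecoder().decode(token.split("\\.")[i]);
        return new String(raw, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String token = sampleToken();
        System.out.println(decodePart(token, 0)); // the header is readable by anyone
        System.out.println(decodePart(token, 1)); // so is the payload -- keep secrets out of it
    }
}
```

This is exactly why the payload must never carry passwords or other sensitive data unless the token is additionally encrypted.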
