Friday, 5 July 2019

Architectural Patterns


1. Layered pattern
This pattern can be used to structure programs that can be decomposed into groups of subtasks, each of which is at a particular level of abstraction. Each layer provides services to the next higher layer.

The four layers most commonly found in a general information system are as follows.

Presentation layer (also known as UI layer)
Application layer (also known as service layer)
Business logic layer (also known as domain layer)
Data access layer (also known as persistence layer)
Usage
General desktop applications.
E-commerce web applications.

Layered pattern
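
Below is a minimal Python sketch of the idea, with hypothetical layer classes and a made-up greeting service; each layer talks only to the layer directly beneath it.

# Hypothetical sketch: each layer depends only on the layer directly below it.

class DataAccessLayer:                      # persistence layer
    def load_user(self, user_id):
        return {"id": user_id, "name": "Alice"}   # stand-in for a real DB query

class BusinessLogicLayer:                   # domain layer
    def __init__(self, dao):
        self.dao = dao
    def greeting_for(self, user_id):
        user = self.dao.load_user(user_id)
        return f"Hello, {user['name']}!"

class ApplicationLayer:                     # service layer
    def __init__(self, logic):
        self.logic = logic
    def handle_greeting_request(self, user_id):
        return self.logic.greeting_for(user_id)

class PresentationLayer:                    # UI layer
    def __init__(self, app):
        self.app = app
    def render(self, user_id):
        print(self.app.handle_greeting_request(user_id))

# Wiring the layers from bottom to top:
PresentationLayer(ApplicationLayer(BusinessLogicLayer(DataAccessLayer()))).render(42)
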
2. Client-server pattern
This pattern consists of two parties: a server and multiple clients. The server component provides services to multiple client components. Clients request services from the server, and the server provides the relevant services to those clients. Furthermore, the server continues to listen for client requests.

Usage
Online applications such as email, document sharing and banking.

Client-server pattern
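
A minimal Python sketch of the idea (the loopback address and port 9099 are arbitrary choices): one server thread keeps listening and serves whichever clients connect.

# Minimal sketch: a TCP server that serves multiple clients; each client requests a service.
import socket, threading

srv = socket.create_server(("127.0.0.1", 9099))    # server binds and listens

def serve():
    while True:
        conn, _ = srv.accept()                      # keep listening for client requests
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)              # provide the service to this client
        conn.close()

threading.Thread(target=serve, daemon=True).start()

def client(message):
    with socket.create_connection(("127.0.0.1", 9099)) as c:
        c.sendall(message.encode())
        return c.recv(1024).decode()

print(client("hello"))      # multiple clients, one server
print(client("world"))
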
3. Master-slave pattern
This pattern consists of two parties: a master and slaves. The master component distributes the work among identical slave components and computes a final result from the results which the slaves return.

Usage
In database replication, the master database is regarded as the authoritative source, and the slave databases are synchronized to it.
Peripherals connected to a bus in a computer system (master and slave drives).

Master-slave pattern
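
A minimal Python sketch, using a process pool as the set of identical slaves and summation as a stand-in workload:

# Minimal sketch: a master splits work among identical workers and combines their results.
from multiprocessing import Pool

def slave(chunk):                 # identical worker: sum its share of the numbers
    return sum(chunk)

def master(numbers, workers=4):
    chunks = [numbers[i::workers] for i in range(workers)]   # distribute the work
    with Pool(workers) as pool:
        partials = pool.map(slave, chunks)                   # slaves compute in parallel
    return sum(partials)                                     # master computes the final result

if __name__ == "__main__":
    print(master(list(range(1, 101))))    # 5050
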
4. Pipe-filter pattern
This pattern can be used to structure systems which produce and process a stream of data. Each processing step is enclosed within a filter component. Data to be processed is passed through pipes. These pipes can be used for buffering or for synchronization purposes.

Usage
Compilers. The consecutive filters perform lexical analysis, parsing, semantic analysis, and code generation.
Workflows in bioinformatics.

Pipe-filter pattern
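
A minimal Python sketch in which each filter is a generator and the pipes are simply the iterators passed between them (the filters themselves are made-up examples):

# Minimal sketch: each filter is a generator; the pipes are the iterators passed between them.

def read_source(lines):            # source: produces the data stream
    for line in lines:
        yield line

def to_upper(stream):              # filter 1
    for item in stream:
        yield item.upper()

def drop_blank(stream):            # filter 2
    for item in stream:
        if item.strip():
            yield item

pipeline = drop_blank(to_upper(read_source(["hello", "", "pipe-filter"])))
for item in pipeline:
    print(item)        # HELLO, PIPE-FILTER
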
5. Broker pattern
This pattern is used to structure distributed systems with decoupled components. These components can interact with each other by remote service invocations. A broker component is responsible for the coordination of communication among components.
                                         
Servers publish their capabilities (services and characteristics) to a broker. Clients request a service from the broker, and the broker then redirects the client to a suitable service from its registry.

Usage
Message broker software such as Apache ActiveMQ, Apache Kafka, RabbitMQ and JBoss Messaging.

Broker pattern
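
A minimal Python sketch of the idea; for simplicity the broker here forwards the call itself rather than redirecting the client, and the registered services are hypothetical:

# Minimal sketch: servers register their services with a broker; clients look a service up
# through the broker instead of knowing the servers directly.

class Broker:
    def __init__(self):
        self.registry = {}
    def publish(self, service_name, handler):      # server publishes its capability
        self.registry[service_name] = handler
    def request(self, service_name, *args):        # client asks the broker for the service
        handler = self.registry[service_name]
        return handler(*args)                      # broker mediates the call

broker = Broker()
broker.publish("weather", lambda city: f"Sunny in {city}")
broker.publish("time", lambda: "12:00")

print(broker.request("weather", "Pune"))   # client only knows the broker and a service name
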
6. Peer-to-peer pattern
In this pattern, individual components are known as peers. Peers may function both as a client, requesting services from other peers, and as a server, providing services to other peers. A peer may act as a client or as a server or as both, and it can change its role dynamically with time.

Usage
File-sharing networks such as Gnutella and G2.
Multimedia protocols such as P2PTV and PDTP.

Peer-to-peer pattern
7. Event-bus pattern
This pattern primarily deals with events and has 4 major components: event source, event listener, channel and event bus. Sources publish messages to particular channels on an event bus. Listeners subscribe to particular channels and are notified of messages published to any channel to which they have subscribed.

Usage
Android development
Notification services

Event-bus pattern
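
A minimal Python sketch with made-up channel names, showing that only listeners subscribed to a channel are notified of messages published to it:

# Minimal sketch: listeners subscribe to named channels on a bus; sources publish to channels.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.channels = defaultdict(list)
    def subscribe(self, channel, listener):
        self.channels[channel].append(listener)
    def publish(self, channel, message):
        for listener in self.channels[channel]:    # only subscribers of this channel are notified
            listener(message)

bus = EventBus()
bus.subscribe("orders", lambda m: print("billing saw:", m))
bus.subscribe("orders", lambda m: print("shipping saw:", m))
bus.publish("orders", {"id": 1, "item": "book"})
bus.publish("payments", {"id": 9})     # no subscribers on this channel, so nobody is notified
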
8. Model-view-controller pattern
This pattern, also known as the MVC pattern, divides an interactive application into 3 parts:

model — contains the core functionality and data
view — displays the information to the user (more than one view may be defined)
controller — handles the input from the user
This is done to separate internal representations of information from the ways information is presented to, and accepted from, the user. It decouples components and allows efficient code reuse.

Usage
Architecture for World Wide Web applications in major programming languages.
Web frameworks such as Django and Rails.

Model-view-controller pattern
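
A minimal Python sketch with a hypothetical to-do list application, showing the three parts and the direction of the calls:

# Minimal sketch: the model holds the data, the view renders it, the controller handles input.

class Model:
    def __init__(self):
        self.items = []
    def add(self, item):
        self.items.append(item)

class View:
    def render(self, items):
        print("TODO list:", ", ".join(items) or "(empty)")

class Controller:
    def __init__(self, model, view):
        self.model, self.view = model, view
    def handle_add(self, user_input):      # user input arrives here, never at the model directly
        self.model.add(user_input)
        self.view.render(self.model.items)

Controller(Model(), View()).handle_add("write blog post")
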
9. Blackboard pattern
This pattern is useful for problems for which no deterministic solution strategies are known. The blackboard pattern consists of 3 main components.

blackboard — a structured global memory containing objects from the solution space
knowledge source — specialized modules with their own representation
control component — selects, configures and executes modules.
All the components have access to the blackboard. Components may produce new data objects that are added to the blackboard. Components look for particular kinds of data on the blackboard, and may find these by pattern matching with the existing knowledge source.

Usage
Speech recognition
Vehicle identification and tracking
Protein structure identification
Sonar signals interpretation.

Blackboard pattern
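
A minimal Python sketch with two made-up knowledge sources and a trivial control loop that keeps running sources until none of them can make further progress:

# Minimal sketch: knowledge sources read and write a shared blackboard; a control component
# keeps selecting and running whichever source can make progress.

def splitter(bb):                 # knowledge source 1: breaks raw input into tokens
    if "raw" in bb and "tokens" not in bb:
        bb["tokens"] = bb["raw"].split()
        return True
    return False

def counter(bb):                  # knowledge source 2: works on tokens another source produced
    if "tokens" in bb and "count" not in bb:
        bb["count"] = len(bb["tokens"])
        return True
    return False

blackboard = {"raw": "sonar ping at bearing three one five"}
sources = [counter, splitter]     # control component: run sources until nothing changes
progress = True
while progress:
    progress = any(source(blackboard) for source in sources)

print(blackboard["count"])        # 7
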
10. Interpreter pattern
This pattern is used for designing a component that interprets programs written in a dedicated language. It mainly specifies how to evaluate lines of programs, known as sentences or expressions written in a particular language. The basic idea is to have a class for each symbol of the language.

Usage
Database query languages such as SQL.
Languages used to describe communication protocols.

Interpreter pattern
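
A minimal Python sketch for a tiny arithmetic language, with one class per symbol as described above:

# Minimal sketch: one class per symbol of a tiny expression language, each knowing how to
# evaluate itself.

class Number:
    def __init__(self, value):
        self.value = value
    def evaluate(self):
        return self.value

class Add:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self):
        return self.left.evaluate() + self.right.evaluate()

class Multiply:
    def __init__(self, left, right):
        self.left, self.right = left, right
    def evaluate(self):
        return self.left.evaluate() * self.right.evaluate()

# The sentence "2 + 3 * 4" represented as an expression tree:
expression = Add(Number(2), Multiply(Number(3), Number(4)))
print(expression.evaluate())     # 14
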
Comparison of Architectural Patterns
The table given below summarizes the pros and cons of each architectural pattern.

Enterprise Integration Patterns


Message Channel (from Messaging Systems)
A message channel is a logical channel used to connect applications. One application writes messages to the channel and another (or several others) reads those messages from the channel. Message queues and message topics are examples of message channels.


Message Translator (from Messaging Systems)
A message translator transforms messages from one format to another. For example, one application sends a message in XML format but the other accepts only JSON messages, so one of the parties (or a mediator) has to transform the XML data to JSON. This is probably the most widely used integration pattern.


Publish-Subscribe Channel (from Messaging Channels)
This type of channel broadcasts an event or notification to all subscribed receivers. This is in contrast with a point-to-point channel. Each subscriber receives the message once, after which its copy of the message is removed from the channel. The most common implementation of this pattern is a messaging topic.


Dead Letter Channel (from Messaging Channels)
The Dead Letter Channel describes what to do when the messaging system determines that it cannot deliver a message to the specified recipient. This may be caused, for example, by connection problems or other exceptions such as exhausted memory or disk space. Usually, before the message is sent to the Dead Letter Channel, multiple attempts are made to redeliver it.

Correlation Identifier (from Message Construction)
The Correlation Identifier makes it possible to match a request with its reply when an asynchronous messaging system is used. This is usually accomplished in the following way (a small sketch follows these steps):
Producer: Generate a unique correlation identifier.
Producer: Send the message with the generated correlation identifier attached.
Consumer: Process the message and send a reply with the correlation identifier from the request message attached.
Producer: Correlate the request and reply messages based on the correlation identifier.
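
A minimal Python sketch of those four steps (the message format and payloads are made up); note that replies can be correlated even when they arrive out of order:

# Minimal sketch: the producer tags each request with a unique correlation id and later matches
# replies (which may arrive in any order) back to the pending requests.
import uuid

pending = {}                                   # correlation id -> original request

def send_request(payload):
    correlation_id = str(uuid.uuid4())         # producer: generate a unique identifier
    pending[correlation_id] = payload          # remember what we asked for
    return {"correlation_id": correlation_id, "body": payload}

def consumer(message):                         # consumer: echo the id back in the reply
    return {"correlation_id": message["correlation_id"],
            "body": f"processed {message['body']}"}

def on_reply(reply):                           # producer: correlate reply with request
    request = pending.pop(reply["correlation_id"])
    print(f"reply for {request!r}: {reply['body']}")

replies = [consumer(send_request("order-1")), consumer(send_request("order-2"))]
for reply in reversed(replies):                # replies arriving out of order still match up
    on_reply(reply)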

Content-Based Router (from Message Routing)
The Content-Based Router examines the message contents and routes the message based on data contained in it.
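
A minimal Python sketch with hypothetical region-based routing; the output channels are plain lists standing in for real message channels:

# Minimal sketch: the router inspects a field of the message and forwards it to the matching channel.

channels = {"EU": [], "US": [], "default": []}      # hypothetical output channels

def content_based_router(message):
    region = message.get("region", "default")       # routing decision based on message content
    channels.get(region, channels["default"]).append(message)

content_based_router({"order": 1, "region": "EU"})
content_based_router({"order": 2, "region": "US"})
content_based_router({"order": 3})                  # no region -> default channel

print({name: len(queue) for name, queue in channels.items()})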


Content Enricher (from Message Transformation)
The Content Enricher, as the name suggests, enriches a message with missing information, usually obtained from an external data source such as a database or a web service.

Event-Driven Consumer (from Messaging Endpoints)
The Event-Driven Consumer enables you to provide an action that is invoked automatically by the messaging channel or transport layer. It is an asynchronous pattern because the receiver does not have a running thread until a callback thread delivers a message.

Polling Consumer (from Messaging Endpoints)
The Polling Consumer is used when we want the receiver to poll for a message, process it, and then poll for another. Importantly, this pattern is synchronous because it blocks a thread until a message is received, in contrast with the event-driven consumer. File polling is an example of this pattern.

Wire Tap (from System Management)
The Wire Tap copies a message and routes the copy to a separate channel, while the original message is forwarded to the destination channel. The Wire Tap is usually used to inspect messages or for analysis purposes.


API Best Practices


API (Application programming interface)
Types of APIs
  • Open APIs
  • Partner APIs
  • Internal APIs
  • Both Open APIs and partner APIs
Designing and Implementing the API (a minimal REST sketch follows this list)
  • Resources, CRUD implementation
  • Error Handling, Protocols and protocol status codes
  • Change management & Versioning
  • Pagination, Partial responses 
  • Service URL or Endpoints
  • Methods
    • GET 
    • POST 
    • PUT
    • DELETE 
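
Below is a minimal sketch of these design points using Flask in Python; the /v1/users resource, its field names, and the page/size query parameters are illustrative assumptions, not a prescribed API.

# Minimal sketch: versioned endpoint, CRUD methods, protocol status codes, simple pagination.
from flask import Flask, jsonify, request, abort

app = Flask(__name__)
users = {1: {"id": 1, "name": "Alice"}}        # in-memory stand-in for a data store

@app.route("/v1/users", methods=["GET"])       # versioned endpoint + pagination
def list_users():
    page = int(request.args.get("page", 1))
    size = int(request.args.get("size", 10))
    items = list(users.values())[(page - 1) * size : page * size]
    return jsonify({"page": page, "items": items})

@app.route("/v1/users", methods=["POST"])      # create -> 201
def create_user():
    body = request.get_json(silent=True)
    if not body or "name" not in body:
        abort(400, description="name is required")   # error handling with status codes
    new_id = max(users) + 1
    users[new_id] = {"id": new_id, "name": body["name"]}
    return jsonify(users[new_id]), 201

@app.route("/v1/users/<int:user_id>", methods=["GET", "PUT", "DELETE"])
def user_detail(user_id):
    if user_id not in users:
        abort(404)                              # unknown resource -> 404
    if request.method == "DELETE":
        del users[user_id]
        return "", 204
    if request.method == "PUT":
        users[user_id].update(request.get_json(silent=True) or {})
    return jsonify(users[user_id])

if __name__ == "__main__":
    app.run()

Versioning is carried in the URL here; carrying it in a request header is an equally common choice.
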
API Management Tools
  • Apigee.
  • IBM API management.
  • Microsoft’s AZURE API management.
  • MuleSoft’s Anypoint platform for APIs.
API Management
  • API Gateway service.
  • Developer portal.
API Security
Vendor Communication
  • RAML (RESTful API Modeling Language)
  • Swagger
  • WADL (Web Application Description Language)
  • WSDL (Web Services Description Language)

Friday, 3 June 2016

Deployment Policy in Datapower

1.       What is Deployment Policy?
        An object in Datapower used to modify or filter imported configurations. When we import object(s) from one domain or environment to another, we may want to filter out or change certain object configurations for the new domain or environment. This can be achieved using a Deployment Policy (DP).

2.       How does DP work?
      A Deployment Policy works through rules. A DP may contain three types of rules:
      a) Accepted Configuration - Only the matching configuration is accepted during import.
      b) Filtered Configuration - The matching configuration is excluded during import.
      c) Modified Configuration - The matching configuration is altered during import.

3.       Describe the configuration of a Deployment Policy along with each type of rule mentioned above.

      To create a DP, follow the below steps:
i)       In the left search panel in Datapower, type Deployment Policy.
ii)     The below window opens. Go to Add.
iii)   In the Main tab, enter a name for the Deployment Policy. Write comment (optional) for this DP.
a)      Accepted Configuration-
Accepted configuration rules are used to import ONLY the configurations that match the accept rule. All other configurations being imported along with this DP will be ignored and will NOT be imported. As an example, let us assume we want to import ONLY a crypto certificate named myCert.
We would need to provide an exact configuration match corresponding to myCert.
It is good practice to use the match builder to write the exact matching configuration.

Go to Accepted Configuration -> Build
A Match Builder dialog box appears where details need to be entered for an exact configuration path. The below six fields need to be filled in:

·         Device Address - Identifies the local management IP address of the device to which the statement is applied. Leave blank (or ‘*’) for all.
·         Application Domain - Identifies the application domain to which the statement is applied. Select (none) for all.
·         Resource Type – Identifies the name of the resource type, e.g. crypto certificate, web service proxy, etc. To match all types, select '(all resources)'.
·         Name Match (PCRE) - Limits the statement to resources with the specified names. Use a PCRE to select groups of resource instances. (PCRE refers to a Perl-compatible regular expression, e.g. myCer*)
·         Configuration Property- Limits the statement to the configuration property with the specified name.
·         Configuration Value Match (PCRE) – Refers to configuration with matching property values. Use a PCRE Match Expression to select groups of configuration property values.


For this scenario, where we need only myCert to be imported from the entire set of objects, we would mention as below-

Since we need to import the entire myCert object, and not just a particular property, we would leave the last two options blank, as shown above.

Once we save, the below Accepted Configuration expression would be formed –
Next, Apply and save configuration.

This gives a Deployment Policy, DPDemo, which needs to be included along with the imported object(s) so that only myCert is imported during the configuration import.

b)      Filtered Configuration –
This property is used when certain objects/configurations need to be filtered out during import. The Filtered Configuration expression is configured in the same way as the Accepted Configuration; kindly refer to the process explained above (point a).

c)       Modified Configuration –
As described above, Modified Configuration property would be used when the requirement is to change or modify certain properties/configurations in the imported object(s).

There is a separate tab for Modified Configuration.
As an example, suppose the requirement is to change the port number of the Front Side Handler (myFSH) getting imported from a different domain or environment.
Let the existing port number be 4788 and it needs to be changed to 4789.
Following steps would be followed to get the correct  expression:
1.       Go to Modified Configuration tab and click Add.

In the Edit Modified Configuration window, an entry is required.

2.       Go to Build to create a proper configuration match expression. Make the below entry –
The entry shows the existing FSH property that needs to be changed. Save the changes. The resultant expression is: */*/protocol/http?Name=myFSH&Property=LocalPort&Value=4788

3.       Next, in Modification Type, there are three options –
·         Add Configuration- To add a new property  to the existing configuration
·         Change Configuration- To change an existing configuration
·         Delete Configuration- To delete an existing configuration

Here, in this example we need to change the existing configuration. So we follow the below steps. (In case one needs to add or delete, similar steps need to be followed. The options are self-explanatory.)

Next, Apply.

The below expression is formed. Apply and Save.
*/*/protocol/http?Name=myFSH&Property=LocalPort&Value=4788



Thus we get a new Deployment Policy DPDemo which should be included in the imported configuration to get the desired change.

Tuesday, 24 May 2016

Datapower Deployment Scenarios

This article explains the Datapower deployment scenarios and their specific functions within an enterprise environment.
1. Lab environment – this is an isolated environment that allows major new firmware release features to be tested without any impact on ongoing development streams. It assists with change management and with testing new features before implementation.
2. Development environment – it is a very common practice to isolate Datapower service development to a dedicated environment. This will usually be a single appliance that serves as a sandbox for developers to develop services as well as do project-specific configurations.
3. Testing environment – this is a test environment isolated from the development environment mentioned above. It is used to test all developed services. The appliance provides easy service migration between appliances and domains.
4. Staging environment – this environment allows testing pre-releases or rolling new releases into production. It is also used for performance testing to determine the sizing and scaling of production appliances.
5. Production environment – appliances in the production environment can be deployed as a cluster in an active/passive or active/active configuration. Appliances can balance traffic to target servers using the Application Optimization feature.

6. DR environment – many organizations require full data center failover to a second fully equipped site. The DR environment provides the failover appliances.

OAuth Implementation in Datapower XI52

OAuth is an authorization framework that allows a resource owner to grant permission to access their resources without sharing their credentials with a third party. In the traditional client-server authentication model, the client uses its credentials to access resources hosted by the server. OAuth introduces a third role called the resource owner. In this model, the client (which is not the resource owner, but is acting on its behalf) requests access to resources controlled by the resource owner but hosted by the server.
Below figure shows OAuth in Datapower,
Description: OAUTH
How to Use It?
To use the OAuth protocol, we need an AAA policy. The AAA policy must be defined in a processing rule of a Web Token Service or Multi-Protocol Gateway. After successfully generating an access token, processing returns a node set that becomes part of the JSON object that contains the access token and, optionally, a refresh token.
1.     When configured through a Web Token Service, the service supports authorization server endpoints.
2.     When configured through a Multi-Protocol Gateway, the service supports authorization server endpoints and enforcement points for resource servers.
Types of Protocol Flow
1.     Three Legged Flow – There are three entities: the resource owner, an OAuth client that wants to access the resource, and the resource server. The resource owner does not share its credentials with the client; instead, the client receives an authorization grant from the resource owner and a client ID from an authorization service. Using the authorization grant, the OAuth client requests an access token from the authorization service to access the resource on the resource server.
2.     Two Legged Flow – There are only two entities, because the resource owner and OAuth client roles overlap. The client uses the resource owner's credentials or a client credentials grant type. There are four credential grant type scenarios.
1. Implementation (Two legged Flow) through WTS (Grant type scenario- resource owner password credentials)
This scenario involves exchanging the resource owner's username and password for an access token and using that token to access resources, which means the client possesses the resource owner's credentials. The client application sends a token request containing the resource owner's username and password, as well as its own information, to an authorization server and receives an access token. When acting as an authorization server, Datapower accepts and verifies an OAuth request and generates an access token. When acting as an enforcement point for a resource server, Datapower verifies the access token and validates it against the resource the client is requesting. The below steps describe this implementation,
STEP 1 Create an OAuth client object as below,
Description: OAuthProf
Description: outhProf
Description: oauthProf1
STEP 2 Create an OAuth client group object as below,
Description: oauthgrp
STEP 3 Create an AAA Policy with OAuth Config,
3 A. Extract Identity (EI)
Description: aaa1
Description: aaa2
3 B. Authentication (AU)
Description: aaa3
3 C. Resource Extraction(ER)
Description: aaa4
3 D. Authorization (AZ)
Description: aaa5
Sample AAA file,
Description: AAA6
STEP 4 Create an authorization server using the Web Token Service as below,
4 A.WTS Config,
Description: WTSOauth
4 B.WTS generated Processing Policy,
Description: WTSPolicy
STEP 5 Create a resource server enforcement point using the Multi-Protocol Gateway as below,
5 A. Front Side Handler Config,
Description: FSH5041
Description: FSH5041_1
5 B. Processing policy for enforcement,
Description: MPG_PP
5 C. Choose a static backend type; here it is a simple loopback XML Firewall. Set the request type to non-XML, the response type to pass-through, and Propagate URI to on.
STEP 6 Testing the Configuration,
6 A. Client request to invoke the authorization service and fetch access token (Web token service)
Curl command,
curl -k https://<dp:ip>:5040/ --user password-client:passw0rd -d "grant_type=password&username=john&password=passW&scope=/getAccount"
Response,
{"token_type":"bearer","access_token":"bearer","expires_in":3600}
Description: curl1
6 B. Access token verified by the MPGW (Resource Service)
Curl command,
curl -k https://<dp:ip>:5041/getAccount -H "Authorization: Bearer AAEPcGFzc3dvcmQtY2xpZW50tOItBbbumS0yrr/H+fPT8VbAvrI3xX55MSfy7Pnjz07usWpDnPm+0evCFTcPjtUwt5SrAhdplb3QnH+Dy36pCg"
Description: curl2
2. Implementation (Two Legged Flow) through WTS (Grant Type Scenario - Client Credentials)
In this scenario the client itself is the resource owner, which means the client can get an access token by presenting its own credentials and avoid having those credentials exposed in each and every resource request. The client credentials flow implementation is similar to the resource owner password credentials configuration,
STEP 1 Configure the OAuth client application with the Datapower,
1 A. Configure an OAuth Client Profile similar to the above implementation, except for the grant type, as below,
Description: p1
1 B. Configure the created client profile with OAuth Client Group,
Description: p2
1 C. Use the OAuth client group in AAA policy to implement in Authorization service,
Authentication(AU)
Description: p3
Resource Extraction(ER)
Description: p4
Authorization (AZ)
Description: p5
1 D. Use the OAuth client group in another AAA policy (config below) to implement the Enforcement service,
Resource Extraction(ER)
Description: p6
Authorization(AZ)
Description: p7
STEP 2 Create an authorization service using the Web Token Service,
Create a WTS with an SSL proxy profile and generate the processing policy with the AAA action that has the OAuth client configured earlier,
Description: p8
STEP 3 Create a resource server enforcement point using the Multi-Protocol Gateway,
3 A. Create a front side handler that listens for client requests to the enforcement service,
Description: p9
3 B. Create a processing policy for the enforcement point, as below,
Description: p10
3 C. Choose a static backend type; here it is a simple loopback XML Firewall. Set the request type to non-XML, the response type to pass-through, and Propagate URI to on.
Troubleshooting the Configurations,
Scenario 1: Scope not sufficient
Request to invoke WTS Authorization service and get access token,
Curl Command,
curl -k https://<dp:ip>:5050/token -u ac-appln:passw0rd -d "grant_type=client_credentials&scope=getAccount"
Response,
Description: curl1
Request with access token to enforcement service to get the response,
Curl Command,
curl -k https://<dp:ip>:5051/getAccount -H "Authorization: Bearer AAEIYWMtYXBwbG5NYwZ3Pi01cM3Rge0qjdvfh4VJ19KCNQoEfkbyiiXvCw9oZLfPdwC7OarGlixMcD46KqjChfuuHB6U/vpL7lWp"
Response,
Description: curl2
We get insufficient_scope as the response because the URL sent by the client is checked during the ER phase of the enforcement point's AAA policy; the extracted resource URL includes the leading slash, whereas the scope in the token request does not.
Scenario 2: Invalid Client Application
Curl Command,
curl -k https://<dp:ip>:5050/token -d "grant_type=client_credentials&scope=/getAccount&client_id=application&client_secret=passw0rd"
Response,
Description: cul3
Scenario 3: Success
Curl Command,
curl -k https://<dp:ip>:5050/token -u ac-appln:passw0rd -d "grant_type=client_credentials&scope=getAccount"
Description: curl4
Request to invoke enforcement service(MPG),
curl -k https://<dp:ip>:5051/getAccount -H "Authorization: Bearer AAEIYWMtYXBwbG5NYwZ3Pi01cM3Rge0qjdvfh4VJ19KCNQoEfkbyiiXvCw9oZLfPdwC7OarGlixMcD46KqjChfuuHB6U/vpL7lWp"
Response,
Description: curl5