Monday, July 16, 2018

Microservice Design Patterns

Worth sharing !!!
The main characteristics of a microservices-based application are defined in Microservices, Monoliths, and NoOps. They are functional decomposition or domain-driven design, well-defined interfaces, explicitly published interfaces, the single responsibility principle, and potentially polyglot implementation. Each service is fully autonomous and full-stack. Thus changing a service implementation has no impact on other services, as they communicate using well-defined interfaces. There are several advantages of such an application, but it's not a free lunch and requires a significant effort in NoOps.
But let's say you understand the effort, or at least some pieces of it, required to build such an application and are willing to take the jump. What do you do? What is your approach for architecting such applications? Are there any design patterns for how these microservices work with each other?
Functional decomposition of your application and the team is the key to building a successful microservices architecture. This allows you to achieve loose coupling (REST interfaces) and high cohesion (multiple services can compose with each other to define higher-level services or applications).
Verbs (e.g. checkout) or nouns (e.g. product) of your application are an effective basis for decomposing your existing application. For example, product, catalog, and checkout can be three separate microservices that then work with each other to provide a complete shopping cart experience.
Functional decomposition gives you the agility, flexibility, scalability, and other *ilities, but the business goal is still to deliver the application. So once the different microservices are identified, how do you compose them to provide the application's functionality?
This blog will discuss some of the recommended patterns on how to compose microservices together.

Aggregator Microservice Design Pattern

The first, and probably the most common, is the aggregator microservice design pattern.
In its simplest form, Aggregator would be a simple web page that invokes multiple services to achieve the functionality required by the application. Since each service (Service A, Service B, and Service C) is exposed using a lightweight REST mechanism, the web page can retrieve the data and process/display it accordingly. If some sort of processing is required, say applying business logic to the data received from individual services, then you would likely have a CDI bean that transforms the data so the web page can display it.
[Figure: Microservice Aggregator Design Pattern]
Another option for Aggregator is where no display is required, and instead it is just a higher-level composite microservice which can be consumed by other services. In this case, the aggregator collects the data from each of the individual microservices, applies business logic to it, and publishes it as a REST endpoint. This can then be consumed by other services that need it.
This design pattern follows the DRY principle. If there are multiple services that need to access Service A, B, and C, then it's recommended to abstract that logic into a composite microservice and aggregate it in one place. An advantage of abstracting at this level is that the individual services, i.e. Service A, B, and C, can evolve independently while the business need is still served by the composite microservice.
Note that each individual microservice has its own (optional) caching and database. If Aggregator is a composite microservice, then it may have its own caching and database layer as well.
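A minimal sketch of such a composite Aggregator using JAX-RS is shown below; the service names, URLs, and naive JSON handling are assumptions for illustration only, not a prescribed implementation.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// Hypothetical composite microservice that aggregates Service A, B, and C.
@Path("shopping")
public class ShoppingAggregatorResource {

    private final Client client = ClientBuilder.newClient();

    @GET
    @Produces(MediaType.APPLICATION_JSON)
    public Response aggregate() {
        // Assumed REST endpoints of the individual microservices.
        String product  = client.target("http://product-service/products").request().get(String.class);
        String catalog  = client.target("http://catalog-service/catalog").request().get(String.class);
        String checkout = client.target("http://checkout-service/checkout").request().get(String.class);

        // Apply business logic and publish the combined result as a single REST endpoint.
        String combined = "{\"product\":" + product + ",\"catalog\":" + catalog + ",\"checkout\":" + checkout + "}";
        return Response.ok(combined).build();
    }
}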
Aggregator can scale independently on the X-axis and Z-axis as well. So if it's a web page you can spin up additional web servers, or if it's a composite microservice using Java EE, you can spin up additional WildFly instances to meet the growing needs.

Proxy Microservice Design Pattern

Proxy microservice design pattern is a variation of Aggregator. In this case, no aggregation needs to happen on the client but a different microservice may be invoked based upon the business need.
[Figure: Microservice Proxy Design Pattern]

Just like Aggregator, Proxy can scale independently on the X-axis and Z-axis. You may use this pattern when the individual services should not be exposed to the consumer and should instead go through an interface.
The proxy may be a dumb proxy, in which case it just delegates the request to one of the services. Alternatively, it may be a smart proxy, where some data transformation is applied before the response is served to the client. A good example of this is encapsulating the presentation layer for different devices in the smart proxy.
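A minimal sketch of a smart proxy with JAX-RS; the backend URLs and the User-Agent-based routing rule are assumptions for illustration.

import javax.ws.rs.GET;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.Path;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.Response;

// Hypothetical proxy: consumers only ever talk to this interface.
@Path("catalog")
public class CatalogProxyResource {

    private final Client client = ClientBuilder.newClient();

    @GET
    public Response proxy(@HeaderParam("User-Agent") String userAgent) {
        // A dumb proxy would always delegate to the same backend; a smart proxy
        // picks the backend (and may transform the payload) based on the request.
        String backend = (userAgent != null && userAgent.contains("Mobile"))
                ? "http://catalog-mobile-service/catalog"
                : "http://catalog-web-service/catalog";
        String payload = client.target(backend).request().get(String.class);
        return Response.ok(payload).build();
    }
}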

Chained Microservice Design Pattern

Chained microservice design pattern produces a single consolidated response to the request. In this case, the request from the client is received by Service A, which then communicates with Service B, which in turn may communicate with Service C. All the services likely use synchronous HTTP request/response messaging.
[Figure: Microservice Chain Design Pattern]
The key part to remember is that the client is blocked until the complete chain of request/response, i.e. Service A <-> Service B and Service B <-> Service C, is completed. The request from Service B to Service C may look completely different from the request from Service A to Service B. Similarly, the response from Service B to Service A may look completely different from the response from Service C to Service B. And that’s the whole point anyway: each service adds its own business value.
Another important aspect to understand here is to not make the chain too long. This is important because the synchronous nature of the chain will appear like a long wait on the client side, especially if it's a web page that is waiting for the response to be shown. There are workarounds to this blocking request/response, and they are discussed in a subsequent design pattern.
A chain with a single microservice is called a singleton chain. This may allow the chain to be expanded at a later point.
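A minimal sketch of the first link of such a chain using the JAX-RS client API; Service B's URL and the JSON wrapping are assumptions for illustration.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Hypothetical Service A: the first link of the chain.
@Path("serviceA")
public class ServiceAResource {

    private final Client client = ClientBuilder.newClient();

    @GET
    public String handle() {
        // Synchronous call: the caller stays blocked until Service B (and,
        // transitively, Service C) has responded.
        String fromB = client.target("http://service-b/api/resource").request().get(String.class);

        // Service A adds its own business value before responding.
        return "{\"serviceA\":\"processed\",\"downstream\":" + fromB + "}";
    }
}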

Branch Microservice Design Pattern

Branch microservice design pattern extends the Aggregator design pattern and allows simultaneous response processing from two, likely mutually exclusive, chains of microservices. This pattern can also be used to call different chains, or a single chain, based upon the business needs.

[Figure: Microservice Branch Design Pattern]

Service A, either a web page or a composite microservice, can invoke two different chains concurrently, in which case this will resemble the Aggregator design pattern. Alternatively, Service A can invoke only one chain based upon the request received from the client.
This routing may be implemented with JAX-RS or Camel endpoints, and it would need to be dynamically configurable.
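A minimal sketch of such request-based branching with JAX-RS; the query parameter and the chain entry-point URLs are assumptions for illustration, and Camel routes or externalized configuration could serve the same purpose.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.QueryParam;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;

// Hypothetical Service A that branches between two chains.
@Path("orders")
public class OrderBranchResource {

    private final Client client = ClientBuilder.newClient();

    @GET
    public String handle(@QueryParam("type") String type) {
        // Pick the chain based on the business need carried in the request;
        // invoking both entry points concurrently would resemble the Aggregator pattern.
        String chainEntryPoint = "digital".equals(type)
                ? "http://digital-fulfillment-service/fulfill"   // chain 1
                : "http://physical-fulfillment-service/fulfill"; // chain 2
        return client.target(chainEntryPoint).request().get(String.class);
    }
}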

Shared Data Microservice Design Pattern

One of the design principles of microservices is autonomy. That means the service is full-stack and has control of all the components – UI, middleware, persistence, transaction. This allows the service to be polyglot, and use the right tool for the right job. For example, a NoSQL data store can be used if that is more appropriate, instead of jamming the data into a SQL database.
However a typical problem, especially when refactoring from an existing monolithic application, is database normalization such that each microservice has the right amount of data – nothing less and nothing more. Even if only a SQL database is used in the monolithic application, denormalizing the database would lead to duplication of data, and possibly inconsistency. In a transition phase, some applications may benefit from a shared data microservice design pattern.
In this design pattern, some microservices, likely in a chain, may share caching and database stores. This would only make sense if there is a strong coupling between the two services. Some might consider this an anti-pattern, but business needs might in some cases require following it. This would certainly be an anti-pattern for greenfield applications that are designed based upon microservices.

[Figure: Microservice Branch Shared Data Design Pattern]

This could also be seen as a transition phase until the microservices are transitioned to be fully autonomous.

Asynchronous Messaging Microservice Design Pattern

While the REST design pattern is quite prevalent and well understood, it has the limitation of being synchronous, and thus blocking. Asynchrony can be achieved, but it is done in an application-specific way. Some microservice architectures may elect to use message queues instead of REST request/response because of that.

[Figure: Microservice Async Messaging Design Pattern]

In this design pattern, Service A may call Service C synchronously, which in turn communicates with Service B and Service D asynchronously using a shared message queue. Service A -> Service C communication may itself be asynchronous, possibly using WebSockets, to achieve the desired scalability.
A combination of REST request/response and pub/sub messaging may be used to accomplish the business need.
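A minimal Java EE sketch of the messaging leg using the JMS 2.0 simplified API; the queue JNDI name and bean names are assumptions for illustration, and any other broker or pub/sub API could play the same role.

// OrderEventProducer.java
import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.inject.Inject;
import javax.jms.JMSContext;
import javax.jms.Queue;

// Producer side (e.g. inside Service C): fire-and-forget publishing.
@Stateless
public class OrderEventProducer {

    @Inject
    private JMSContext context;

    @Resource(lookup = "java:/jms/queue/orderEvents") // assumed queue JNDI name
    private Queue orderEvents;

    public void publish(String event) {
        context.createProducer().send(orderEvents, event);
    }
}

// OrderEventConsumer.java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Consumer side (e.g. Service B or Service D): receives the message asynchronously.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationLookup", propertyValue = "java:/jms/queue/orderEvents"),
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
})
public class OrderEventConsumer implements MessageListener {

    @Override
    public void onMessage(Message message) {
        // Process the event without blocking the original caller.
    }
}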

Thursday, June 7, 2018



Java 9 Features

Nice one to share. Thanks javasampleapproach.


With the Platform Module System, new tools, new core libraries, client technologies, language updates, and more, we will all be interested in how Java 9 enables many cool things for development.
*Note: To configure your IDE for working with Java 9, please visit:
How to configure Java 9 Support for Oxygen (4.7)

I. Java Platform Module System

Java 9 Module System, which is developed under Project Jigsaw, comes to us with a specific goal: to provide reliable configuration and strong, flexible encapsulation. That helps application developers, library developers, and Java SE Platform implementors more easily create a scalable platform, achieve greater platform integrity, and improve performance.
What is a Module?
A module is a named, self-describing collection of:
– code: packages containing types (Java classes, interfaces…)
– data: resources and other kinds of static information.
In summary:
– Class contains fields, methods.
– Package contains Classes, Enums, Interfaces, configuration files…
– Module contains Packages and other data resources.
The module system mechanism provides Readability and Accessibility, which control how a module can read other modules and be accessed by them.
There are three kinds of module: named module, unnamed module, and automatic module.
Java has a java.util.ServiceLoader class that helps locate service providers at runtime by searching the class path. Now we can also declare service providers and consumers in module descriptors, as sketched below.
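A minimal module-info.java sketch; the module and package names are made up for illustration.

// module-info.java of a hypothetical provider module
module com.example.greetings.provider {
    requires com.example.greetings.api;            // reliable configuration: explicit dependency
    exports com.example.greetings.provider.spi;    // strong encapsulation: only this package is accessible
    provides com.example.greetings.api.GreetingService
        with com.example.greetings.provider.EnglishGreetingService;  // ServiceLoader provider
}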
>> More details at: Java 9 Module System

II. Tools

1. JShell – The Java Shell
Java 9 provides an interactive REPL (Read-Eval-Print Loop) tool to test code snippets rapidly without a test project or main method. So we can learn or evaluate Java features easily.
Now we don’t need to create a Java project or define a public static void main(String[] args) method just to test code. We can write and run immediately.
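Roughly, a session looks like this (the snippets themselves are arbitrary):

jshell> int x = 10
x ==> 10

jshell> IntStream.rangeClosed(1, 5).sum()
$2 ==> 15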
>> More details at: Java 9 JShell – REPL
2. Unified JVM Logging
Java 9 provides a common logging infrastructure for all JVM components, with extremely detailed levels. With the new command-line option -Xlog, which controls all logging settings, Unified JVM Logging gives us a precise, easy-to-configure tool for root-cause analysis of complex system-level JVM issues.
– Log messages are categorized using tags (os, gc, modules…). One message can have multiple tags (a tag-set).
– Logging levels: error, warning, info, debug, trace, and develop.
– Output supports 3 types: stdout, stderr, or a text file.
– Messages can be “decorated” with: time, uptime, pid, tid, level, tags…
>> More details at: Java 9 Unified JVM Logging
3. HTML5 Javadoc
Javadoc is the tool that generates documentation for an API in HTML format. In previous versions of the JDK it produced HTML 4.01 – an old standard. JDK 9 Javadoc now supports generating HTML5 markup and improves search capability and DocLint.
3.1 – In JDK 9, to generate HTML5 output we just need to add the -html5 parameter:
[Image: generating Javadoc with the -html5 option]
3.2 – A search box is available on the generated site that can be used to search for program elements, tagged words, and phrases within the documentation. The search functionality is implemented locally and does not rely on any server-side computational resources.
[Image: the Javadoc search box]
3.3 – -Xdoclint enables recommended checks for issues in Javadoc comments: bad references, lack of accessibility, missing comments, syntax errors, and missing HTML tags. By default, -Xdoclint is enabled. We can disable it with -Xdoclint:none.
This is an example of a syntax check:
[Image: -Xdoclint syntax check example]
>> More details at: Java 9 HTML5 Javadoc

III. Language Updates

1. try-with-resources Improvement
Java 7 introduced a new approach for closing resources: the try-with-resources statement. Java 9 improves it further: a final or effectively final variable can now be used directly in the statement, which keeps the code cleaner and clearer.
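A minimal sketch of the Java 9 form; the file name is just an assumption for illustration.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class TryWithResourcesDemo {
    public static void main(String[] args) throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader("data.txt")); // effectively final, assumed file
        // Java 7/8 required a fresh variable inside the statement:
        //   try (BufferedReader r = reader) { ... }
        // Java 9 accepts the existing effectively final variable directly:
        try (reader) {
            System.out.println(reader.readLine());
        }
    }
}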
2. Private Interface Method
Java 8 provides 2 new features for Interface: default methods and static methods.
But it still leaves us uncomfortable because:
-> We don’t want such a method to be public; it is just an internal private method that handles a specific function.
-> We don’t want another interface, or a class which implements this interface, to access or inherit that method.
Java 9 Private Interface Methods solve these problems by providing a new feature for interfaces: private methods and private static methods. Now we can avoid duplicate code and keep encapsulation in the interface.
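A minimal sketch; the interface and the 19% tax rate are made up for illustration.

public interface PriceCalculator {

    default double grossPrice(double net) {
        return round(net * 1.19);
    }

    static double netPrice(double gross) {
        return round(gross / 1.19);
    }

    // Java 9: the shared helper stays hidden from implementing classes and other interfaces.
    private static double round(double value) {
        return Math.round(value * 100) / 100.0;
    }
}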
>> More details at: Java 9 Private Interface Method
3. Diamond Operator
Java 7 introduced the Diamond Operator, which helps make code more readable, but it could not be used with Anonymous Inner Classes.
In Java 7/8 we get the compile error ‘<>‘ cannot be used with anonymous classes when we try to combine the two.
Java 9 allows the Diamond Operator for Anonymous Inner Classes:
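A minimal sketch that compiles on Java 9 but not on Java 7/8; the Comparator is just an illustration.

import java.util.Comparator;

public class DiamondDemo {
    public static void main(String[] args) {
        // '<>' with an anonymous class: rejected by Java 7/8, accepted by Java 9,
        // where the inferred type is Comparator<String>.
        Comparator<String> byLength = new Comparator<>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };
        System.out.println(byLength.compare("java", "jigsaw"));
    }
}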

IV. New Core Libraries

1. Process API
With the Java 9 Process API there are new ways of retrieving process information (all processes, the current process, child processes) and of destroying processes.
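A minimal sketch using ProcessHandle:

public class ProcessApiDemo {
    public static void main(String[] args) {
        // Current process
        ProcessHandle current = ProcessHandle.current();
        System.out.println("pid: " + current.pid());
        System.out.println("command: " + current.info().command().orElse("unknown"));

        // All processes visible to the current process
        System.out.println("running processes: " + ProcessHandle.allProcesses().count());

        // Children of the current process; each handle also offers destroy()
        current.children().forEach(child -> System.out.println("child pid: " + child.pid()));
    }
}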
>> More details at: Java 9 Process API
2. Platform Logging API and Service
Java 9 defines a minimal logging API which platform classes can use to log messages, together with a service interface for consumers of those messages.
An implementation of LoggerFinder is loaded with the help of the java.util.ServiceLoader API using the system class loader. Based on this implementation, an application/framework can plug in its own external logging backend without having to configure java.util.logging or that backend.
We can pass the class name or module (related to specific Logger) to the LoggerFinder so that the LoggerFinder can know which logger to return.
If no concrete implementation is found, the JDK default LoggerFinder implementation is used: java.util.logging (in the java.logging module) becomes the backend, so log messages are routed to java.util.logging.Logger.
We obtain loggers that are created from the LoggerFinder using factory methods of the System class:
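A minimal sketch; the logger name and messages are assumptions for illustration.

import java.lang.System.Logger;
import java.lang.System.Logger.Level;

public class PlatformLoggingDemo {
    public static void main(String[] args) {
        // Factory method on System; messages go to java.util.logging unless
        // another LoggerFinder implementation is plugged in.
        Logger logger = System.getLogger("com.example.demo"); // assumed logger name
        logger.log(Level.INFO, "application started");
        logger.log(Level.ERROR, "something failed: {0}", "details");
    }
}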
>> More details and example at: Java 9 Platform Logging API and Service
3. CompletableFuture API Enhancements
To improve on Java's Future, Java 8 provided CompletableFuture, which can execute some code whenever it is ready. Java 9 now improves the CompletableFuture API with support for delays and timeouts.
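A minimal sketch of the new delay and timeout support; the delays and values are arbitrary.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executor;
import java.util.concurrent.TimeUnit;

public class CompletableFutureDemo {
    public static void main(String[] args) {
        // Delay: run the task only after the given delay has elapsed.
        Executor delayed = CompletableFuture.delayedExecutor(1, TimeUnit.SECONDS);
        CompletableFuture<String> slow = CompletableFuture.supplyAsync(() -> "result", delayed);

        // Timeout: complete with a fallback value if no result arrives in time.
        String value = slow.completeOnTimeout("fallback", 500, TimeUnit.MILLISECONDS).join();
        System.out.println(value); // "fallback", because the task is delayed by ~1 second

        // orTimeout(...) would instead complete the future exceptionally with a TimeoutException.
    }
}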
4. Reactive Streams – Flow API
Java 9 introduces Reactive Streams under java.util.concurrent.Flow, which supports an interoperable publish-subscribe framework. At a glance, Flow defines the Publisher, Subscriber, Subscription, and Processor interfaces.
The diagram below shows its behavior and how to implement Reactive Stream with new Flow API:
[Figure: Reactive Stream Flow interface behavior]
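A minimal sketch using the JDK's SubmissionPublisher and a hand-written Subscriber; the items and the request size are arbitrary.

import java.util.List;
import java.util.concurrent.Flow;
import java.util.concurrent.SubmissionPublisher;

public class FlowDemo {
    public static void main(String[] args) throws InterruptedException {
        try (SubmissionPublisher<String> publisher = new SubmissionPublisher<>()) {
            publisher.subscribe(new Flow.Subscriber<String>() {
                private Flow.Subscription subscription;

                @Override
                public void onSubscribe(Flow.Subscription subscription) {
                    this.subscription = subscription;
                    subscription.request(1); // back pressure: ask for one item at a time
                }

                @Override
                public void onNext(String item) {
                    System.out.println("received: " + item);
                    subscription.request(1);
                }

                @Override
                public void onError(Throwable throwable) {
                    throwable.printStackTrace();
                }

                @Override
                public void onComplete() {
                    System.out.println("done");
                }
            });

            List.of("a", "b", "c").forEach(publisher::submit);
        }
        Thread.sleep(500); // demo only: give the asynchronous subscriber time to finish
    }
}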
5. Factory Method for Collections: List, Set, Map
Java 9 provides new static factory methods for conveniently creating instances of collections and maps with a small number of elements.
The collections created with these static factory methods are immutable, so if we try to add/put more elements we get java.lang.UnsupportedOperationException, and null elements are rejected with java.lang.NullPointerException.
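A minimal sketch; the element values are arbitrary.

import java.util.List;
import java.util.Map;
import java.util.Set;

public class CollectionFactoryDemo {
    public static void main(String[] args) {
        List<String> fruits = List.of("apple", "banana", "orange");
        Set<Integer> primes = Set.of(2, 3, 5, 7);
        Map<String, Integer> stock = Map.of("apple", 10, "banana", 5);

        // The instances are immutable:
        // fruits.add("mango");     -> java.lang.UnsupportedOperationException
        // List.of("apple", null);  -> java.lang.NullPointerException
        System.out.println(fruits + " " + primes + " " + stock);
    }
}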
6. Enhanced Deprecation
Java 9 adds new elements to the @Deprecated annotation: forRemoval() and since().
An example with new @Deprecated annotation:
@Deprecated(since = "1.5", forRemoval = true)
>> More details at: Java 9 @Deprecated Enhancements
7. Stack-Walking API
Java 9 provides an efficient way of walking the stack, with lazy access and stack-trace filtering, via StackWalker.
A StackWalker object allows us to traverse and access stack frames. It contains some useful and powerful methods; the most important is walk(), which:
+ opens a StackFrame stream for the current thread, and
+ applies the given function to that stream, as in the sketch below.
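A minimal sketch; the call chain and the class-name filter are made up for illustration.

import java.lang.StackWalker.StackFrame;
import java.util.List;
import java.util.stream.Collectors;

public class StackWalkerDemo {
    public static void main(String[] args) {
        level1();
    }

    static void level1() { level2(); }

    static void level2() {
        // walk() opens a lazy stream of StackFrames for the current thread
        // and applies the given function to it.
        List<String> frames = StackWalker.getInstance()
                .walk(stream -> stream
                        .filter(frame -> frame.getClassName().startsWith("StackWalkerDemo"))
                        .map(StackFrame::toString)
                        .collect(Collectors.toList()));
        frames.forEach(System.out::println);
    }
}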
>> More details at: Java 9 StackWalker
8. Other Improvements
8.1 Stream Improvements
Java 9 Stream comes with some small but useful improvements via newly added methods: iterate(), takeWhile()/dropWhile(), and ofNullable().
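A minimal sketch; the values are arbitrary.

import java.util.stream.Stream;

public class StreamImprovementsDemo {
    public static void main(String[] args) {
        // iterate() with a hasNext predicate, similar to a classic for loop
        Stream.iterate(1, i -> i <= 5, i -> i + 1).forEach(System.out::println); // 1 2 3 4 5

        // takeWhile() / dropWhile()
        Stream.of(1, 2, 3, 10, 4).takeWhile(i -> i < 5).forEach(System.out::println); // 1 2 3
        Stream.of(1, 2, 3, 10, 4).dropWhile(i -> i < 5).forEach(System.out::println); // 10 4

        // ofNullable(): an empty stream instead of a NullPointerException
        System.out.println(Stream.ofNullable(null).count()); // 0
    }
}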
>> More details at: Java 9 Stream Improvements
8.2 Optional Improvements
Java 9 provides the new Optional::stream to work on Optional objects lazily; it returns a stream of either zero or one element, so empty Optionals are filtered out automatically when flat-mapping.
Instead of combining isPresent() and orElse() to handle the “else” case, Java 9 gives us the ifPresentOrElse() method.
The or() method checks if a value is present; if so, it returns an Optional describing the value, otherwise it returns another Optional produced by the supplying function.
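A minimal sketch of stream(), ifPresentOrElse(), and or(); the values are arbitrary.

import java.util.Optional;

public class OptionalImprovementsDemo {
    public static void main(String[] args) {
        Optional<String> name = Optional.of("java");
        Optional<String> empty = Optional.empty();

        // stream(): zero or one element
        System.out.println(name.stream().count());  // 1
        System.out.println(empty.stream().count()); // 0

        // ifPresentOrElse(): handle both the "present" and the "else" case
        name.ifPresentOrElse(
                value -> System.out.println("found " + value),
                () -> System.out.println("nothing found"));

        // or(): keep this Optional if a value is present, otherwise use the supplied one
        System.out.println(empty.or(() -> Optional.of("fallback")).get()); // fallback
    }
}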
>> More details at: Java 9 Optional Improvements

V. Client Technologies

1. Multi-Resolution Images
The new API which is defined in the java.awt.image package can help us:
– Encapsulate many images with different resolutions into an image as its variants.
– Get all variants in the image.
– Get a resolution-specific image variant – the best variant to represent the logical image at the indicated size based on a given DPI metric.
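A minimal sketch of this API; the image sizes are arbitrary, and BaseMultiResolutionImage is the built-in implementation used here for illustration.

import java.awt.Image;
import java.awt.image.BaseMultiResolutionImage;
import java.awt.image.BufferedImage;
import java.awt.image.MultiResolutionImage;

public class MultiResolutionDemo {
    public static void main(String[] args) {
        // Two variants of the same logical image.
        BufferedImage small = new BufferedImage(100, 100, BufferedImage.TYPE_INT_RGB);
        BufferedImage large = new BufferedImage(200, 200, BufferedImage.TYPE_INT_RGB);

        MultiResolutionImage image = new BaseMultiResolutionImage(small, large);

        // Best variant to represent the logical image at the requested size
        Image variant = image.getResolutionVariant(150, 150);
        System.out.println(variant.getWidth(null) + "x" + variant.getHeight(null));

        // All variants encapsulated in the image
        image.getResolutionVariants().forEach(v -> System.out.println(v.getWidth(null)));
    }
}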
>> More details at: Java 9 Multi-Resolution Images
2. TIFF Image I/O Plugins
In earlier versions of Java, the Image I/O Framework javax.imageio provided a standard way to plug in image codecs for formats such as PNG and JPEG, but TIFF was missing from this set; it was previously packaged in com.sun.media.imageio.plugins.tiff. Java 9 adds a new package, javax.imageio.plugins.tiff, renamed from com.sun.media.imageio.plugins.tiff.
The package contains some classes that support the built-in TIFF reader and writer plug-ins. It includes:
– Some classes representing common additional tags and the set of tags found in baseline TIFF specification, Exif IFD, TIFF-F (RFC 2036) file, GeoTIFF IFD.
– TIFFImageReadParam: an extension of ImageReadParam which can specify which metadata tags are allowed to be read and set some destination properties.
>> More details at: Java 9 TIFF Image I/O plugins

VI. Internationalization

1. Unicode 8.0
Java 8 supported Unicode 6.2.
Java 9 now supports up to Unicode 8.0 standards with 10,555 characters, 29 scripts, and 42 blocks.
2. UTF-8 Properties Files
In previous releases, ISO-8859-1 encoding was used when loading property resource bundles (PropertyResourceBundle – constructing its instance from an InputStream requires that the input stream be encoded in ISO-8859-1). But using ISO-8859-1 is not a convenient way to represent non-Latin characters.
In Java 9, properties files are loaded in UTF-8 encoding.
If there is an issue, consider the following options:
– convert the properties file into UTF-8 encoding.
– specify the runtime system property that switches property-file loading back to the previous encoding.
3. Default Locale Data Change
In JDK 8 and earlier releases, the JRE's own locale data is the default. JDK 9 sets CLDR (the locale data provided by the Unicode Common Locale Data Repository project) as the highest priority by default.
We can select the locale data sources in preferred order using the java.locale.providers system property; if a provider fails to supply the requested locale data, the next provider in the list is used. If we don't set the property, a default provider order is applied.