Microservices for eCommerce Success
Foreword
Name a technology conference or meetup and I’ll tell you about the
constant speeches referencing microservices. This modern engineering
technique has grown from good old SOA (Service Oriented Architecture)
with features like REST (vs. old SOAP) support, NoSQL databases and the
Event driven/reactive approach sprinkled in.
Divide and conquer
The original Zalando site was built on Magento using PHP, and at one time was the biggest Magento site in the world. The German eCommerce giant, which employs over 10,000 people and ships more than 1,500 fashion brands to customers in 15 European countries, generated $3.43 billion in revenue last year. With over 700 people on its engineering team, Zalando moved to microservices in 18 months.
On the basis of microservices, you'll have separate teams accountable for particular KPIs, providing SLAs for their parts, etc. A side effect of this approach is usually a rise in employee effectiveness and engagement.

"Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations."
— M. CONWAY
Among all the technical challenges, microservices usually require
organizational changes inside the company. Breaking the technical
monolith quite often goes hand in hand with dividing enterprise
departments into agile, rapid teams to achieve faster results. In the end, processes that took a few months can now be executed in weeks, and everybody feels engaged. It's something you cannot underestimate.
Omnichannel
This eBook will try to help you decide if it is time to apply this approach, and how to start, by referencing a few popular techniques and tools worth following.
About the authors
Piotr Karwatka
CTO at Divante. My biggest project? Building the company from 1 ->
150+ (still growing), taking care of software production, technology
development and operations/processes. 10+ years of professional
Software Engineering and Project Management experience. I've also tried
my hand at writing, with the book "E-Commerce technology for
managers". My career started as a software developer and co-creator of
about 30 commercial desktop and web applications.
Michał Kurzeja
CTO and Co-Founder of Accesto with over 8 years of experience in
leading technical projects. Certified Symfony 3 developer. Passionate
about new technologies; mentors engineers and teams in developing
high-quality software. Co-organizer of Wrocław Symfony Group meetups.
Mariusz Gil
Software Architect and Consultant, focused on high value and high
complexity, scalable web applications with 17+ years of experience in the
IT industry. Helps teams and organizations adopt good development and
programming practices. International conference speaker and developer
events organizer.
Bartosz Picho
eCommerce Solution Architect, responsible for Magento 2 technology at
Divante. Specialized in end-to-end application development: from
business requirements to system architectures, meeting high
performance and scalability expectations. Passionate technologist,
experienced in Magento 1 and 2, both Community and Enterprise
editions.
Antoni Orfin
Solutions Architect specialized in designing highly-scalable web
applications and introducing best practices into the software
development process. Speaker at several IT conferences. Currently
responsible for systems architecture and driving DevOps methodology at
Droplr.com.
Mike Grabowski
Software Developer and open source enthusiast. Core contributor to
many popular libraries, including React Native, React Navigation and
Haul. Currently CTO at Callstack.io. Travels the world teaching
developers how to use React and shares his experience at various
React-related events.
Paweł Jędrzejewski
Founder and Lead Developer of Sylius, the first Open Source eCommerce
framework. Currently busy building the business & ecosystem around the
project while also speaking at international tech conferences about
eCommerce & APIs.
Alexander Graf
Co-Founder and CEO of Spryker Systems. Alexander Graf (*1980) is one
of Germany’s leading eCommerce experts and a digital entrepreneur of
more than a decade’s standing. His widely-read blog Kassenzone (“The
Check-Out Area”) has kicked off many a debate among commerce
professionals. Alexander wrote Appendix 1 to this book.
Acknowledgement
Table of contents
Foreword
Divide and conquer
Omnichannel
Evolutionary approach
Best practices
Create a Separate Database for Each Service
Deploy in Containers
Integration techniques
Deployment of microservices
Continuous Deployment
Related technologies
Microservices based eCommerce platforms
Microservices
The assumptions of the orthogonal architecture followed by microservices architects imply the following benefits:
• By defining strict protocols (API), services are easy to test and extend
into the future.
² https://www.nginx.com/blog/microservices-at-netflix-architectural-best-practices/
Dockerization of IT environments, monitoring tools and DevOps tools (Ansible, Chef, Puppet and others) can take your development team to the next level of effectiveness.
The criticism
• The architecture introduces additional complexity and new problems to deal with, such as network latency, message formats, load balancing, fault tolerance and monitoring. Ignoring any of these is among the "fallacies of distributed computing".
• Starting with the microservices approach from the beginning can lead to
too many services, whereas the alternative of internal modularization
may lead to a simpler design.
Evolutionary approach
Almost all the cases where I've heard of a system that was built
as a microservice system from scratch, has ended up in
serious trouble.
³ https://martinfowler.com/articles/microservices.html
(Figure: the system before re-engineering - Magento integrated with external systems through an ESB.)
When you begin a new application, how sure are you that it will be useful
to your users? Starting with microservices from day one may significantly
complicate the system. It can be much harder to pivot if something doesn't go as planned (from the business standpoint). During this first phase you
need to prioritize the speed of development to basically figure out what
works.
Fig. 3: The very same system but after architecture re-engineering; now the system core
is built upon 10 microservices.
Many successful eCommerce businesses (if not all of them!) started from monolithic, all-in-one platforms before transitioning into a service-oriented architecture.
The most common reasons we've seen to initiate such a transformation are the following:
Best practices
This eBook is intended to show you the most popular design patterns and practices related to microservices. I strongly recommend you follow the father of the microservices approach, Sam Newman. You should also check out websites like http://microservices.io, https://dzone.com/ and https://github.com/mfornos/awesome-microservices (under the "microservices" keyword). They provide a condensed dose of knowledge about core microservice patterns, decomposition methods, deployment patterns, communication styles, data management and much more…
(Figure: a traditional application built with a 3-tier approach compared with the microservices approach, where the UI is backed by separate presentation services and stateful services.)
Rely on Contracts Between Services
Keep all code at a similar level of maturity and stability. When you have to
modify the behaviour of a currently deployed (and stable) microservice,
it’s usually better to put the new logic into a new, separate service. It’s
sometimes called “immutable architecture”.
Deploy in Containers
Fig. 5: Source - Docker Blog. Docker Swarm manages the whole server cluster -
automatically deploying new machines with additional instances for scalability and high
availability. Of course it can be deployed on popular cloud environments like Amazon.
All these instances perform the same function, so you don't need to be concerned with them individually. The role configuration across servers must be aligned and the deployment process should be fully automated.
A monolithic application puts all its functionality into a single process... A microservices architecture puts each element of functionality into a separate service...
Related Techniques and Patterns
CAP theorem
Also called the "Brewer theorem" after Eric Brewer, it states that for distributed systems it's not possible to provide more than two of the following three guarantees:
• Consistency
• Availability
• Partition tolerance
When the system is running normally - both availability and consistency
can be provided. In case of failure, you get two choices:
• Raise an error (and break the availability promise) because it’s not
guaranteed that all data replicas are updated.
• Provide the user with cached data (due to the very same reason as
above).
Eventual consistency
It’s not a programming technique but rather something you have to think
about when designing distributed systems. This consistency model is
connected directly to the CAP theorem and informally guarantees that if
no new updates are made to a given data item, eventually all access
to that item will return the last updated value.
6 https://en.wikipedia.org/wiki/ACID
To achieve eventual consistency, the distributed system must resolve data
conflicts between multiple copies of replicated data. This usually consists
of two parts:
The widespread model for choosing the final state is “last writer wins” -
achieved by including an update timestamp along with an updated copy
of data.
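As a rough illustration of the "last writer wins" strategy, here is a minimal sketch; the replica shape and field names are assumptions for the example, not taken from any particular database:

// Each replica keeps the value together with the timestamp of its last update.
// When two replicas of the same item meet, the newer write wins.
function resolveLastWriterWins(replicaA, replicaB) {
  return replicaA.updatedAt >= replicaB.updatedAt ? replicaA : replicaB;
}

const local  = { value: { street: 'Old Street 1' }, updatedAt: 1510000000000 };
const remote = { value: { street: 'New Street 5' }, updatedAt: 1510000050000 };

console.log(resolveLastWriterWins(local, remote).value.street); // 'New Street 5'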
Design patterns
Having knowledge of the core theories that underpin the issues which we
may encounter when developing and designing a distributed
architecture, we can now go into higher-level concepts and patterns.
Design patterns are techniques that allow us to compose code of our
microservices in a more structured way and facilitate further maintenance
and development of our platform.
CQRS
In CQRS, write requests (aka commands) and read requests (aka queries)
are separated into different models. The write model will accept
commands and perform actions on the data, the read model will accept
queries and return data to the application UI. The read model should be
updated if, and only if, the write model was changed. Moreover, single
changes in the write model may cause updates in more than one read
model. What is very interesting is that there is a possibility to split data
storage layers, set up a dedicated data store for writes and reads, and
modify and scale them independently.
For example, all write requests in the eCommerce application, like adding
a new order or product reviews, can be stored in a typical SQL database
but some read requests, like finding similar products, can be delegated
by the read model to a graph engine.
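A minimal sketch of this separation follows; the class and method names are illustrative assumptions, not a reference to any specific framework:

// Write side: accepts commands and changes state.
class ProductReviewCommandModel {
  constructor(sqlDb, eventBus) {
    this.sqlDb = sqlDb;
    this.eventBus = eventBus;
  }

  async addReview(command) {
    await this.sqlDb.insert('reviews', {
      productId: command.productId,
      rating: command.rating,
      text: command.text,
    });
    // Notify the read side(s) that something changed.
    await this.eventBus.publish('ReviewAdded', command);
  }
}

// Read side: answers queries, possibly from a completely different store.
class SimilarProductsQueryModel {
  constructor(graphDb) {
    this.graphDb = graphDb;
  }

  findSimilar(productId) {
    return this.graphDb.query('similar-products', { productId });
  }
}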
Pros:
Cons:
Fig. 9: CQRS architecture (https://martinfowler.com/bliki/images/cqrs/cqrs.png).
Event Sourcing
Data stores are often designed to directly keep the actual state of the system without storing the history of all the submitted changes. In some situations this can cause problems. For example, if there is a need to prepare a new read model for some specific point in time (like your current address on an invoice from 3 months ago, which may have changed in the meantime) and you haven't stored time-stamped data snapshots, it will be a big deal to reprint or modify the correct document.
(Figure: an Event Store keeping the stream of published events for a cart - e.g. Item 1 added, Item 2 added, Item 1 removed, Shipping information added - from which materialized views are built for the presentation layer and for external systems and applications. Published events may include OrderCreated, OrderApproved, OrderPaid, OrderPrepared, OrderShipped and OrderDelivered.)
During the recreation phase, all events are fetched from the EventStore
and applied to a newly constructed entity. Each applied event changes
the internal state of the entity.
But there are also some potential drawbacks here… How can we get the
current states of tens of objects? How fast will object recreation be if the
events list contains thousands of items?
If the event history of the entity is long, the application may also create
some snapshots. By “snapshot”, I mean the state of the entity after every
n-th event. The recreation phase will be much faster because there is no
need to fetch all the changes from the Event Store, just the latest
snapshot and further events.
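A sketch of the recreation phase with snapshots might look as follows; the store interfaces and event names are assumptions for illustration only:

// Rebuild an entity: start from the latest snapshot (if any),
// then apply only the events recorded after that snapshot.
async function recreateEntity(entityId, eventStore, snapshotStore) {
  const snapshot = await snapshotStore.findLatest(entityId); // may be null
  let state = snapshot ? snapshot.state : {};
  const fromVersion = snapshot ? snapshot.version : 0;

  const events = await eventStore.loadEvents(entityId, fromVersion);
  for (const event of events) {
    state = applyEvent(state, event);
  }
  return state;
}

// Pure function: (state, event) -> new state.
function applyEvent(state, event) {
  switch (event.type) {
    case 'OrderCreated': return { ...state, status: 'created' };
    case 'OrderPaid':    return { ...state, status: 'paid' };
    default:             return state;
  }
}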
(Figure: domain models behind the user interface publish events to an event bus, with an event store persisting the event stream.)
Event Sourcing works very well with CQRS and Event Storming, a
technique for domain event identification by Alberto Brandolini. Events
found with domain experts will be published by entities inside the write
model. They will be transferred to a synchronous or asynchronous event
bus and processed by event handlers. In this scenario, event handlers will
be responsible for updating one or more read models.
Pros:
Cons:
One of the best solutions is simply using events. If anything important
happened inside a microservice, a specific event is published to the
message broker. Other microservices may connect to the message broker,
receive, and consume a dedicated copy of that message. Consumers may
also decide which part of the data should be duplicated to their local
store.
Data will be synchronized in the future, but you can also stop some services and you will never lose your data - the messages will be processed when the services are restored.
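As an illustration only - assuming a RabbitMQ broker and the amqplib Node client, with exchange, queue and event names being hypothetical - publishing and consuming such an event could look roughly like this:

const amqp = require('amqplib');

// Publisher side (e.g. the orders microservice).
async function publishOrderPlaced(order) {
  const connection = await amqp.connect('amqp://message-broker');
  const channel = await connection.createChannel();
  await channel.assertExchange('order-events', 'fanout', { durable: true });
  channel.publish('order-events', '',
    Buffer.from(JSON.stringify({ type: 'OrderPlaced', order })),
    { persistent: true });
  await channel.close();
  await connection.close();
}

// Consumer side (e.g. the stock microservice) keeps its own local copy of the data it needs.
async function consumeOrderEvents(handle) {
  const connection = await amqp.connect('amqp://message-broker');
  const channel = await connection.createChannel();
  await channel.assertExchange('order-events', 'fanout', { durable: true });
  const { queue } = await channel.assertQueue('stock-service.order-events', { durable: true });
  await channel.bindQueue(queue, 'order-events', '');
  channel.consume(queue, (message) => {
    handle(JSON.parse(message.content.toString()));
    channel.ack(message); // processed - safe to remove from the queue
  });
}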
Integration techniques
API Gateways
With the microservices approach, it’s quite easy to make internal network
communication very talkative. Nowadays, when 10G network connections
are standard in data-centers, there may be nothing wrong with that. But
when it comes to communication between your mobile app and backend
services, you might want to compress as much information as possible
into one request.
With almost no business logic included, gateways are an easy and safe
choice to optimize communication between frontend and backend or
between different backend systems.
Fig. 12: Using an API gateway, you can compose your sub-service calls into easy-to-understand and easy-to-use facades. Traffic optimization, caching and authorization are additional benefits of such an approach.
Additionally, you can provide common authorization layers for all services behind the gateway. For example, that's how Amazon API Gateway Service⁷ + Amazon Cognito⁸ work.
7 https://aws.amazon.com/api-gateway/
8 http://docs.aws.amazon.com/cognito/latest/developerguide/authentication-flow.html
(Figure: Amazon API Gateway request flow - incoming requests from mobile apps, websites and services are checked against a dedicated cache and the throttling configuration (returning cached items when found, and HTTP 429 when the allowed request rate is exceeded), then executed on backend endpoints such as AWS Lambda functions or services on Amazon EC2, with metrics reported to Amazon CloudWatch.)
Swagger⁹ can help you, once a gateway has been built, with direct integration and support for Amazon services.
• If you manage to have a few microservices behind your facade, you can reduce network latency - which is especially important on mobile devices.
9 https://swagger.io/
Using a facade, you can hide from the end client all the network traffic between services executing the sub-calls in internal networks.
• Then you can optimize your calls to be more compliant with a specific domain model. You can model the API structures by merging and distributing subsequent service calls instead of pushing this logic to the API client's code, as sketched below.
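A very small sketch of such a facade, assuming Node.js 18+ (for the built-in fetch) and Express; the internal service URLs are made up for the example:

const express = require('express');
const app = express();

// One client call is fanned out to two internal services and merged
// into a single, frontend-friendly response.
app.get('/api/product-page/:id', async (req, res) => {
  try {
    const [info, recommendations] = await Promise.all([
      fetch(`http://product-info/products/${req.params.id}`).then(r => r.json()),
      fetch(`http://recommendations/products/${req.params.id}/similar`).then(r => r.json()),
    ]);
    res.json({ info, recommendations });
  } catch (err) {
    res.status(502).json({ error: 'Upstream service failed' });
  }
});

app.listen(8080);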
Fig. 14: Backend for frontends architecture is about minimizing the number of backend
calls and optimizing the interfaces to a supported device.
There are many approaches to separating backends for frontends and, roughly speaking, it always depends on the differences in data required by a specific frontend, or on the usage patterns of specific API clients. One can imagine a separate API for the web frontend and for mobile apps - as well as separate interfaces for iOS and Android if there are any differences between these applications regarding how service calls are made or their respective data formats.
One of the concerns of having a single BFF per user interface is that you can end up with lots of code duplication between the BFFs themselves. Pete Hodgson (formerly of ThoughtWorks) suggests that BFFs work best when organized around teams. The team structure should drive how many BFFs you have. This is a pragmatic approach: don't over-engineer your system, but rather have one mobile API if you have one mobile team, etc.
10 http://samnewman.io/patterns/architectural/bff/
Token based authorization (OAuth2, JWT)
Facebook or Google Account login screens are a well-known part of OAuth authorization.
11 http://openid.net/
Fig. 15: Authorization screen for Google Accounts, authorizing an external application to use Google APIs on behalf of the user.
Authorization tokens are issued for a specific amount of time and should be invalidated afterwards. Token authorization is 100% stateless; you don't have to use sessions (like with good, old session-based authorization)¹². OAuth 2.0 requires SSL communication and avoids the additional request/response signatures required by the previous version (requests were signed using HMAC algorithms); the workflow was also simplified in 2.0 by removing one additional HTTP request.
12 http://stackoverflow.com/questions/7561631/oauth-2-0-benefits-and-use-cases-why
(Figure: OAuth 2.0 authorization flow between the browser, the application, the authorization server and the resource server.)
OAuth tokens don't force you to display the authentication dialog each time a user requires access to their data. Following this path would make it impossible to check e-mail in the background or do any batch processing operations. So how do you deal with such background operations? You should use "offline" tokens¹³ - which are issued for longer time periods and can also be used to remember client credentials without requiring login/password each time the user hits your application.
There is usually no need to write your own OAuth code, as many open source libraries are available for most OAuth providers and frameworks. Just take a look at GitHub!
13 https://auth0.com/docs/tokens/refresh-token
There are SaaS solutions for identity and authorization, such as Amazon Cognito¹⁴ or Auth0¹⁵, that can be easily used to outsource the authorization of your APIs.
JWTs¹⁶ are self-contained, which means that tokens contain all the necessary information. They are encoded and signed using HMAC.
This allows you to fully rely on data APIs that are stateless and even make requests to downstream services. It doesn't matter which domains are serving your APIs, so Cross-Origin Resource Sharing (CORS) won't be an issue as the mechanism doesn't use cookies¹⁷.
14 https://aws.amazon.com/cognito/
15 https://auth0.com/how-it-works
16 https://jwt.io/
17 https://jwt.io/introduction/
Validation of HMAC tokens¹⁸ requires knowledge of the secret key used to generate the token. Typically the receiving service (your API) will need to contact the authentication server, as that server is where the secret is kept¹⁹.
Example token:
eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiYWRtaW4iOnRydWV9.TJVA95OrM7E2cBab30RMHrHDcEfxjoYZgeFONFh7HgQ
Header (algorithm and token type):
{
  "alg": "HS256",
  "typ": "JWT"
}

Payload (data):
{
  "sub": "1234567890",
  "name": "John Doe",
  "admin": true
}

Signature:
HMACSHA256(
  base64UrlEncode(header) + "." + base64UrlEncode(payload),
  secret
)
18 https://en.wikipedia.org/wiki/Hash-based_message_authentication_code
19 https://jwt.io/introduction/
JWT tokens are usually passed in the HTTP Authorization header (as a Bearer token), then stored client-side using localStorage or any other storage. Tokens can be given an expiration time (the exp claim included in the token).
Once returned from the authorization service, tokens can be passed with all API calls and validated server-side. Because of the HMAC-based signing process, tokens are safe against tampering.
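A minimal sketch of issuing and validating such a token, assuming the widely used jsonwebtoken Node package; the secret and claims are illustrative:

const jwt = require('jsonwebtoken');
const secret = 'shared-hmac-secret'; // kept by the authorization server

// Issued after a successful login; expires in one hour (the exp claim).
const token = jwt.sign(
  { sub: '1234567890', name: 'John Doe', admin: true },
  secret,
  { expiresIn: '1h' }
);

// On every API call the service validates the signature and expiration.
function authorize(bearerToken) {
  try {
    return jwt.verify(bearerToken, secret); // returns the decoded payload
  } catch (err) {
    return null; // invalid signature or expired token
  }
}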
Fig. 17: JWT-based authorization is pretty straightforward and it's safe. Tokens can be trusted by authorized parties because of the HMAC signature; therefore information contained in them can be used without checking ACLs and any further permissions.
Deployment of microservices
Nowadays, we see two main concepts that facilitate such a process: containerization and serverless architecture.
If you are not familiar with containerization, here are the most common benefits that make it worth digging deeper into this concept:
• Docker allows you to build an application once and then execute it in all your environments, no matter what the differences between them are.
• Docker helps you to solve dependency and incompatibility issues.
• Docker is like a virtual machine without the overhead.
• Docker environments can be fully automated.
• Docker is easy to deploy.
• Docker allows for separation of duties.
• Docker allows you to scale easily.
• Docker has a huge community.
This might sound familiar: virtualization allows you to achieve pretty much
the same goals but in contrast to virtualization, Docker runs all processes
directly on the host operating system. This helps to avoid the overhead of
a virtual machine (both performance and maintenance).
Docker achieves this using the isolation features of the Linux kernel such
as Cgroups and kernel namespaces. Each container has its own process
space, filesystem and memory. You can run all kinds of Linux distributions
inside a container. What makes Docker really useful is the community and
all projects that complement the main functionality. There are multiple
tools to automate common tasks, orchestrate and scale containerized
systems. Docker is also heavily supported by many companies, just to
name a couple: Amazon, Google, Microsoft. Currently, Docker also allows
us to run Windows inside containers (only on Windows hosts).
Docker basics
Before we dig into using Docker for the Microservices architecture let’s
browse the top-level details of how it works.
Image layer - each image is built out of layers. Images are usually built by
running commands or adding/modifying files (using a Dockerfile). Each
step that is run in order to build an Image is an image layer. Docker saves
each layer, so when you run a build next time, it is able to reuse the layers
that did not change. Layers are shared between all images so if two
images start with similar steps, the layers are shared between them. You
can see this illustrated below.
Fig. 18: You can use https://imagelayers.io/ to analyze Docker image layers and compare
them to each other. For example: ruby, python, node images share five layers - this means
that if you download all three images the first 5 layers will be downloaded only once.
As you can see, all compared images share common layers. So if you
download one of them, the shared layers will not be downloaded and
stored again when downloading a different image. In fact, changes in a
running container are also seen as an additional, uncommitted layer.
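For illustration, each instruction in a Dockerfile produces one such layer; a minimal, generic sketch (file names and the node:8 base image are assumptions, not taken from any particular project):

# Layer: the base image (shared with every other image built FROM node:8)
FROM node:8

WORKDIR /app

# Layer: the dependency manifest copied separately, so the npm install layer
# below can be reused from cache as long as package.json does not change
COPY package.json .

# Layer: installed dependencies
RUN npm install

# Layer: the application code - usually the only layer that changes between builds
COPY . .

CMD ["node", "server.js"]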
Registry - a place where images and image layers are kept. You can build
an image on your CI server, push it to a registry and then use the image
from all of your nodes without the need to build the images again.
VM vs. Container
What Docker does differently is directly using the host system (no need
for Hypervisor and Guest OS), it runs the containers using several features
of the Linux kernel that allow them to securely separate the processes
inside them. Thanks to this, a process inside the container cannot
influence processes outside of it. This approach makes Docker more
lightweight both in terms of CPU/Memory usage, and disk space usage.
Fig. 19: Similar features, different architecture - virtualization vs. dockerization. Docker leverages containerization - a lightweight abstraction layer between the application and the operating system/hardware. It separates the user processes but without running a whole operating system/kernel inside the container.
Ok, so we have the technical introduction covered. Now let’s see how
Docker helps to build, run and maintain a Microservice oriented
application.
Development
Development is usually the first phase where Docker brings some extra
value, and it is even more helpful with Microservice oriented applications.
As mentioned earlier, Docker comes with tools that allow us to orchestrate
a multi-container setup in a very easy way. Let's take a look at the benefits
Docker brings during development.
Easy setup - low cost of introducing new developers
You only need to create a Docker configuration once and then each new
developer on the team can start the project by executing a single
command. No need to configure the environment, just download the
project and run docker-compose up. That's all!
This might seem too good to be true but I have a good, real-life example
of such a situation. I was responsible for a project where a new front-end
developer was hired. The project was written in a very old PHP version
(5.3) and had to be run on CentOS. The developer was using Windows
and he previously worked on Java projects exclusively. I had a quick call
with him and we went through a couple of simple steps: downloading and
installing Docker, cloning the git repository and running docker-compose.
After no more than 30 minutes he had a perfectly running environment
and was ready to write his first lines of code!
service_x_elastic:
  image: elasticsearch:5.2.2

service_y_elastic:
  image: elasticsearch:2.4.4
Testing if the application scales is pretty easy with Docker. Of course, you
won't be able to make some serious load testing on your local machine,
but you can test if the application works correctly when a service is scaled
horizontally. Horizontal scalability usually fails if the Microservice is not
stateless and the state is not shared between instances. Scaling can be
very easily achieved using docker-compose:
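A sketch of that command, assuming a recent docker-compose release (older versions used docker-compose scale service_x=4 instead):

docker-compose up -d --scale service_x=4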
After running this command there will be four containers running the
same service_x. You can (and you should) also add a separate container
with a load balancer like HAProxy in front of them. That's it. You are ready
to test!
Continuous Integration
Pre-production
When you have your development setup up and running, it is also quite
easy to push your application to a staging server. In most projects I know,
this process was pretty straight-forward and required only a few changes.
The main difference is in the so-called volumes - files/directories that are shared between your local disk and the disk inside a container. When developing an application, you usually set up containers to share all project files with Docker so you do not need to rebuild the image after each change. On pre-production and production servers, project files should live inside the container/image and should not be mounted from your local disk.
The other common change applies to ports. When using Docker for
development, you usually bind your local ports to ports inside the
container, i.e. your local 8080 port to port 80 inside the container. This
makes it impossible to test scalability of such containers and makes the
URI look bad (no one likes ports inside the URI).
So when running on any production or pre-production servers you usually
put a load balancer in front of the containers.
There are many tools that make running pre-production servers much
easier. You should definitely check out projects like Docker Swarm,
Kubernetes and Rancher. I really like Rancher as it is easy to setup and
really easy to use. We use Rancher as our main staging management tool
and all co-workers really enjoy working with it. Just to give you a small
insight into how powerful such tools are: all our team members are able
to update or create a new staging environment without any issues - and
within a few minutes!
Production
One important thing you should keep in mind when going with Docker on
production - monitoring and logging. Both should be centralized and
easy to use.
Cons
Docker has some downsides too. The first one you might notice is that it
takes some time to learn how to use Docker. The basics are pretty easy to
learn, but it takes time to master some more complicated settings and
concepts. The main disadvantage for me is that it runs very slowly on
MacOS and Windows. Docker is built around many different concepts
from the Linux kernel so it is not able to run directly on MacOS or
Windows. It uses a Virtual Machine that runs Linux with Docker.
Summary
Let me start with a couple of quotes that might be helpful for you to
understand what serverless is about:
— AUTH0.COM
— SERVERLESS.COM
— MARTINFOWLER.COM
As you can see, each of the quotes looks at serverless from a totally different perspective. This does not mean that some of the quotes are better than others; I think that all of them describe serverless in a very good way.
• Serverless is a lie. The truth is that servers are still there, and Ops are also there. So why is it called "serverless"? Because you, as a business or as a developer, do not need to think about servers or ops. They are hidden behind an abstraction that makes them invisible to you. Both servers and ops are managed by a vendor like Amazon, Google, Microsoft, etc.
Serverless providers
The most notable feature is that you can use your own Docker images to run as functions. A meaningful use-case of IBM OpenWhisk is the DarkVision application²⁰, which shows how that technology can be used with techniques like Visual Recognition, Speech to Text and Natural Language Understanding.
In the next sections, we’ll use AWS Lambda for all of the examples, but
the core concepts remain the same across all of the serverless providers.
FaaS
In an FaaS approach, developers are writing code - and code only. They
do not need to care about the infrastructure, deployment, scalability, etc.
The code they write represents a simple and small function of the
application.
20 https://github.com/IBM-Bluemix/openwhisk-darkvisionapp
It is run in response to a trigger and can use external services:
Fig. 20: Basic function as a service architecture consists of only two elements: the function
to be run and a trigger to listen for. Usually the function is also connected to third-party
services like a database.
Each function should comply with the following rules:
• It should not access the disk - AWS allows using a temporary /tmp
directory that allows storing 512MB of data.
• Concise - your function should not take too long to run (usually
seconds, but up to 300 seconds for AWS Lambda).
Once you have such a function, you just upload it to your service provider
and provide some basic configuration. From that moment, on each action
configured as a trigger, your function will be executed. The service
provider tracks how long it takes for your function to execute, and
multiplies the time by the amount of RAM configured (that's a limit you
can change). You pay for GB-seconds of execution. This means that if your
function is not executed, you do not pay anything and if your function is
executed thousands of times during one day, you pay only for the
GB-seconds your function took to run. There are no charges for scaling or
idle time.
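For example, a function configured with 128 MB (0.125 GB) that runs for 200 ms and is invoked 1,000,000 times in a month consumes 1,000,000 × 0.2 s × 0.125 GB = 25,000 GB-seconds - and that is the only number you are billed for.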
Consider the free tier that Amazon offers. Each month you get 3,200,000 seconds to run a function with a 128MB memory limit for free. That's nearly 890 hours - over 37 days!
I think the calculations above clearly show that you can gain a huge
benefit by moving some parts (or all parts) of your application to a FaaS
provider. You get the scalability and ops for free, as you do not need to
take care of it.
Architecture
On the back-end, you can start with a very simple architecture where the
function is triggered by an API call and then connects to a DynamoDB
instance (or any other on premise data source like MongoDB, MySQL) to
fetch/modify some data. Then, you can apply direct read access to some
data in your DynamoDB and allow clients to fetch the data directly,
but handle all data-modifying requests using your function. You can also
introduce Event Sourcing very easily by having one function that records
an event and other functions that take the event in order to refresh your
read model.
You can also use FaaS to implement batch processing: split the stream of
data into smaller chunks and then send them to another function that will
run multiple instances of itself simultaneously. This allows you to process
the data much faster. FaaS is often used to do real-time log processing.
It's easy!
Just a quick "hello world" example to show you how easily you can start writing serverless applications:
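A minimal sketch of such a function, assuming the Node.js runtime on AWS Lambda:

// handler.js - AWS Lambda entry point (Node.js runtime)
exports.handler = (event, context, callback) => {
  // Return an HTTP-style response that API Gateway can pass straight to the caller.
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello world!' }),
  });
};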
Summary
Benefits
FaaS is easy to learn and implement, and it allows you to reduce the time
to market. It also allows you to reduce costs, and to scale easily. Each
function you write fits easily into a sprint, so it is easy to write serverless
applications in agile teams.
Drawbacks
There might be a small vendor lock-in if you do not take this into
consideration and do not introduce proper architecture. You should be
aware of the communication overhead that is added by splitting the app
into such small services. The most common issues mentioned are
multitenancy (the same issue as with running containers on Amazon) and
cold start - when scaling up, it takes some time to handle the first request
by a new container. It might also be a bit harder to test such an
application.
Good use-cases
• Mostly static pages, including eCommerce; you can host static content on a CDN server or add a cache in front of your functions.
• Other cases when your application is not fully using the server capacity
or you need to add scalability without investing much time and money.
Continuous Deployment
Just imagine that each of your microservices needs to be first built and then deployed manually, not to mention running unit tests or any kind of code-style tools. Having tens of those would be extremely time-consuming and would often be a major bottleneck in the whole development process.
Here comes the idea of Continuous Deployment - the thing that ties the workflow of your whole IT department together. In Continuous Deployment we can automate all things related to building Docker containers, running unit and functional tests and even testing the performance of newly built services. At the end, if everything passes, nothing prevents us from automatically deploying working solutions into production.
The most commonly used software that handles the whole process includes Jenkins, Travis CI, Bamboo and CircleCI. We'll show you how to do it using Jenkins.
Going from the big picture, a common pipeline could look like this:
(Figure: a common pipeline - a push to the GitHub repository triggers a build job on Jenkins.)
Most of the hard work is done by that nice-looking guy called Jenkins. When someone pushes something to our Git repository (e.g. GitHub), the webhook triggers a job inside our Jenkins instance. It can consist of the following steps:
After all this, we can set up a Slack notification that will inform us of
success or failure of the whole process. The important thing is, that we
should keep our Jenkins instance clean, so running all of the unit tests
should be done inside a Docker container.
Once we have the idea of our build process, we can code it using the
Jenkinsfile. It’s a file that describes our whole deployment pipeline. It
consists of stages and build steps. Mostly, at the end of the pipeline we
include post actions that should be fired when the build was successful or
failed.
We should keep this file in our application’s code repository - that way
developers can also work with it, without asking DevOps for changes in
the deployment procedure.
Here is a sample Jenkinsfile built on the basis of the previously mentioned
steps. As we can see, the final step is to run another Jenkins job named
deploy. Jobs can be tied together to be more reusable - that way we can
deploy our application without having to run all of the previous steps.
#!groovy
pipeline {
    agent any
    stages {
        stage('Build Docker') {
            steps {
                sh "docker build ..."
            }
        }
        stage('Push Docker Image') {
            steps {
                sh 'docker push ...'
            }
        }
        stage('Deploy') {
            steps {
                build job: 'deploy'
            }
        }
    }
    post {
        success {
            slackSend color: 'good', message: "Build Success"
        }
        failure {
            slackSend color: 'danger', message: "Build Failed"
        }
    }
}
Related technologies
Microservices based eCommerce platforms
There are major open-source platforms that were built using the
Microservices approach by design. This section tries to list those that we
think could be used as a reference for designing your architecture - or
even better - could be used as a part of it.
Sylius
Let’s say we need to have two services for handling a Product Catalog and
Promotions, respectively. The solution would be to take the two
components and use them to develop two standalone applications.
Before Sylius, you would have needed to write everything from scratch or
strip the functionality from an existing eCommerce software.
Spryker
(Figure: Spryker architecture - the Yves shop frontend, backed by Elasticsearch and a key-value storage and offering an SDK for mobile clients, communicates over RPC with the Zed back end, which handles payment, mail, PIM, ETL and business intelligence and integrates with the ERP.)
Open Loyalty

Open Loyalty leverages the CQRS and Event Sourcing design patterns. You can use it as a headless CRM, leveraging a REST API (with JWT-based authorization).
We've seen many cases of Open Loyalty being used as a CRM and marketing automation platform.
Fig. 23: Open Loyalty architecture - each application works as a separate service.
The platform is open source and you can find the code on Github
(https://github.com/DivanteLtd/open-loyalty).
More information: http://openloyalty.io.
22 https://www.pimcore.org/docs/latest/Web_Services/index.html
Technologies that empower the microservices architecture
We'll show you some of the most widely used tools and technologies that can empower your development by making things easier and more automated, and that are very suitable when diving into microservices.
Ansible
23 https://pl.wikipedia.org/wiki/DevOps
Ansible composes each server (or group of them, named an inventory) from reusable roles. We can define our own, such as nginx, PHP or Magento, and then reuse them for different machines. Roles are then tied together in "playbooks" that describe the full deployment process.
To configure our first servers with the nginx web server and PHP, we should first create two roles that will then be used in a final playbook.
1. Nginx:
# in ./roles/nginx/tasks/main.yml
- name: Ensures that nginx is installed
  apt: name=nginx state=present
2. PHP:
# in ./roles/php/tasks/main.yml
- name: Ensures that dotdeb APT repository is added
  apt_repository: repo="deb http://packages.dotdeb.org jessie all" state=present
Having these roles, we can now define a playbook that will combine them to set up our new server with nginx and PHP installed:

# in ./php-nodes.yml
- hosts: php-nodes
  roles:
    - nginx
    - php
The last thing we need to do is to tell Ansible the hostnames of our
servers:
# in ./inventory
[php-nodes]
php-node1.acme.org
php-node2.acme.org
Deployment is now as easy as typing a single shell command that will tell
Ansible to run the php-nodes.yml playbook on hosts from the inventory
file as root (-b):
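A sketch of that command, assuming the inventory file shown above:

ansible-playbook -b -i inventory php-nodes.yml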
ReactJS
With React, you compose your application out of components. It
embraces what is called component-based architecture - a declarative
approach to developing web interfaces where you describe your UI with a
tree of components. React components are highly encapsulated,
concern-specific, single-purpose blocks. For example, there could be
components for address or zip code that together create a form. Such
components have both a visual representation and dynamic logic.
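A tiny sketch of such composition - component and prop names are illustrative assumptions, written without JSX so it runs with plain React:

const React = require('react');

// A small, single-purpose component: renders a labelled zip-code input
// and reports changes to its parent via the onChange callback.
function ZipCodeField({ value, onChange }) {
  return React.createElement('label', null,
    'Zip code: ',
    React.createElement('input', {
      value,
      onChange: (e) => onChange(e.target.value),
    })
  );
}

// The address form is composed out of such fields.
function AddressForm({ address, onAddressChange }) {
  return React.createElement('form', null,
    React.createElement(ZipCodeField, {
      value: address.zipCode,
      onChange: (zipCode) => onAddressChange({ ...address, zipCode }),
    })
  );
}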
Some components can even talk to the server on their own, e.g., a form
that submits its values to the server and shows confirmation on success.
Such interfaces are easier to reuse, refactor, test and maintain. They also
tend to be faster than their imperative counterparts as React - being
responsible for rendering your UI on screen - performs many
optimisations and batches updates in one go.
It’s most commonly used with Webpack - a module bundler for modern
Javascript. One of its features - code-splitting - allows you to generate
multiple Javascript bundles (entry points) allowing you to send clients
only the part of Javascript that is required to render that particular screen.
NodeJS
With NodeJS, JavaScript can be used server-side and in CLI environments. There are plenty of JavaScript web frameworks available, like Express (https://expressjs.com/) and HapiJS (https://hapijs.com/) - to name but two. As NodeJS is built around Google's V8 JavaScript engine (initially developed as the Chrome/Chromium JS engine), it's blazingly fast. Node leverages an event-loop/non-blocking IO architecture to provide exceptional performance results and to optimize CPU utilization (for more, read about the c10k problem: http://www.kegel.com/c10k.html).
Fig. 24: Node.js request flow. Node leverages the event loop, maximizing memory and CPU usage by running parallel operations inside a single-threaded environment.
NodeJS is used as a foundation for many CLI tools - starting from the
most popular “npm” (Node Package Manager), followed by a number of
tools like Gulp, Yeoman and others.
JavaScript is the rising star of programming languages. It can even be
used for building desktop applications - like Visual Studio Code or Vivaldi
web browser (!); these tools are coded in 100% pure JavaScript - but for
the end users, nothing differs from standard desktop applications. And
they’re portable between operating systems by default!
On the server side, NodeJS is very often used as an API platform because
of the platform speed. The event polling architecture is ideal for rapid but
short-lived requests.
Using "npm" one can install almost all available libraries and tools for the JS stack - including frontend and backend packages. As most modern libraries (e.g. GraphQL, WebSockets) have Node bindings, and all modern cloud providers support this technology as well, it's a good choice of backend technology for microservices.
24 https://www.paypal-engineering.com/2013/11/22/node-js-at-paypal/
LinkedIn
One reason was scale. The second is, if you look at Node, the
thing it’s best at doing is talking to other services.
eBay
Uber
https://nodejs.org/static/documents/casestudies/Nodejs-at-Uber.pdf
Netflix
http://thenewstack.io/netflix-uses-node-js-power-user-interface/
Groupon
http://www.datacenterknowledge.com/archives/2013/12/06/need-speed-groupon-migrated-node-js/
25 http://venturebeat.com/2011/08/16/linkedin-node/
26 http://www.ebaytechblog.com/2013/05/17/how-we-built-ebays-first-node-js-application/
Swagger
This powerful tool is all too often used only for generating nice-looking documentation for APIs. Basically, Swagger is for defining API interfaces using a simple, domain-driven JSON language.
The editor is only one tool from the toolkit; the other ones are:
• Codegen - for generating the source code scaffolding for your API - available in many different languages (Node, Ruby, .NET, PHP).
• UI - the best-known Swagger tool for generating useful and nice-looking interactive documentation.
Everything starts with a specification file describing all the Entities and
interfaces for the REST API. Please take a look at the example below:
{
  "get": {
    "description": "Returns pets based on ID",
    "summary": "Find pets by ID",
    "operationId": "getPetsById",
    "produces": [
      "application/json",
      "text/html"
    ],
    "responses": {
      "200": {
        "description": "pet response",
        "schema": {
          "type": "array",
          "items": {
            "$ref": "#/definitions/Pet"
          }
        }
      },
      "default": {
        "description": "error payload",
        "schema": {
          "$ref": "#/definitions/ErrorModel"
        }
      }
    }
  },
  "parameters": [
    {
      "name": "id",
      "in": "path",
      "description": "ID of pet to use",
      "required": true,
      "type": "array",
      "items": {
        "type": "string"
      },
      "collectionFormat": "csv"
    }
  ]
}
$ref relates to other entities described in the file (data models, structures, etc.). You can use primitives as the examples and return values (bool, string…) as well as hash-sets, compound objects and lists. Swagger also allows you to specify the validation rules and authorization schemes (basic auth, OAuth, OAuth2).
{
  "oauth2": {
    "type": "oauth2",
    "scopes": [
      {
        "scope": "email",
        "description": "Access to your email address"
      },
      {
        "scope": "pets",
        "description": "Access to your pets"
      }
    ],
    "grantTypes": {
      "implicit": {
        "loginEndpoint": {
          "url": "http://petstore.swagger.wordnik.com/oauth/dialog"
        },
        "tokenName": "access_token"
      },
      "authorization_code": {
        "tokenRequestEndpoint": {
          "url": "http://petstore.swagger.wordnik.com/oauth/requestToken",
          "clientIdName": "client_id",
          "clientSecretName": "client_secret"
        },
        "tokenEndpoint": {
          "url": "http://petstore.swagger.wordnik.com/oauth/token",
          "tokenName": "access_code"
        }
      }
    }
  }
}
Last but not least, the Swagger/OpenAPI specification format has become more and more of a standard and should be considered when starting new API projects. It's supported by many external tools and platforms - including Amazon API Gateway²⁷.
Fig. 25: Swagger UI generates a nice-looking specification for your API along with a
“try-it-out” feature for executing API calls directly from the browser.
Elasticsearch
27 https://m.signalvnoise.com/the-majestic-monolith-29166d022228#.90yg49e3j
Elasticsearch supports full-text search with faceted filtering and support
for most major languages with stemming and misspelling correction
features.
Elasticsearch is even used for log analysis with tools like Kibana and Logstash. With its ease of use, performance and scalability characteristics, it is actually the best choice for most eCommerce and content-related sites.
GraphQL
Widely used REST APIs are organized around HTTP endpoints. GraphQL
APIs are different; they are built in terms of types and fields, and relations
between them. It gives clients the ability to ask for what they need directly
instead of many different REST requests. All the necessary data will be
queried and returned with a single call.
Data definition:
type Project {
  name: String
  tagline: String
  contributors: [User]
}

Sample query:
{
  project(name: "GraphQL") {
    tagline
  }
}

Query result:
{
  "project": {
    "tagline": "A query language for APIs"
  }
}
Distributed logging and monitoring
Graylog
With distributed services you have to track a whole bunch of new metrics:
To make it even worse, you must track all those parameters across several
clusters in real time. Without such a level of monitoring, no high
availability can be achieved and the distributed system is even more
vulnerable to downtime than a single monolithic application.
The good news is that nowadays there are plenty of tools to measure
web-app performance and availability. One of the most interesting is
Graylog (http://graylog.org).
Fig. 26: In Graylog you've got access to messages in real time, with alerts configured for each separate message stream.
Fig. 27: Alert configuration is a basic feature for providing HA to your microservices ecosystem.
29 http://www.fluentd.org/
Distributed systems require new levels of application monitoring and
logging. With monolithic applications you can track one log-file for events
(usually) and use some Zabbix triggers to get a complete view of a
server's state, application errors, etc.
New Relic
New Relic works as a system daemon with native libraries for many programming languages and servers (PHP, NodeJS…). Therefore, it can be used even in production, where most other debugging tools come with too significant an overhead. We used to work with New Relic on production clusters - even with applications with millions of unique users per month and dozens of servers in a cluster.
Fig. 28: The coolest feature of New Relic is stack-trace access - on production, in real time.
New Relic Insights
Fig. 29: New Relic Insights Data Explorer with sample plot.
New Relic Insights NRQL Language
You can also use NRQL (the New Relic Query Language), with syntax similar to SQL, to explore all collected data and create application metric reports.
For example, you can attach customer group IDs to order requests to
check if particular customer groups have an unusually bad experience
during the order process.
Take care of the front-end using New Relic Browser
Another powerful feature allows you to easily detect any JavaScript issue on the front-end of your application. Additionally, New Relic will show you a detailed stack trace and execution profile.
Fig. 31: The New Relic Browser module displays a list of JavaScript issues in the front-end application.
Case Studies: Re-architecting the Monolith
Here I’ll briefly present two case studies of the microservices evolution
which I’ve been able to observe while working at Divante.
B2B
• CRM that became the SPoF (Single Point of Failure). Pivotal CRM was in
charge of too many key responsibilities including per-customer pricing,
cart management and promotions.
The architecture of this system resembled a "death star". However, its
complexity was not between microservices, but between external
systems.
The first instinct was to move the site 1:1 from legacy Magento 1 to a new
platform. OroCommerce and Magento 2 were considered.
• New features.
Pros:
Cons:
• It’s still a monolithic application that sooner or later will lead us to the
starting point - problems with scalability and maintenance.
A New approach
(Figure: the new team structure - a dedicated implementation team for each microservice (Microservice X, Microservice Y, Microservice Z, ...), plus a Magento developers team responsible for adopting each microservice on the Magento side.)
The first step of the "architecture analysis" process was the development
of a high-level architecture of the entire system by a team of architects,
focused on service responsibilities. The results of their work included:
The architects worked together with the client. The client's domain experts were engaged in session-based workshops using the event storming technique (http://ziobrando.blogspot.com/2013/11/introducing-event-storming.html). Based on the collected data, the team provided the implementation team with complete documentation.
We decided to start by implementing the services that were most critical for the system - because of the SPoF and because they would give us the best performance results: PRICING and PIM.
It was crucial to figure out how to separate the platform from Pivotal CRM
for calculating end-client product prices and therefore to avoid a SPoF
and maintain High Availability (initially the platform used real-time
WebService calls to get the prices from the CRM when users entered the
page).
PIM was selected to solve problems with the growing SKU database by moving to an Elasticsearch NoSQL solution instead of Magento's EAV model.
(Figure: migration stages - at the beginning, Magento is integrated directly with the IT systems; then the first microservices (PRICE, PIM, WMS) are extracted alongside Magento and the legacy IT systems; in the target architecture an API gateway and a message broker sit in front of microservices such as PRICE, CRM and OMS, connecting the client with the ERP, PIM, WMS and other systems.)
We haven’t decided (at this point) to go with any technology other than
PHP, so all services were implemented using the Symfony framework;
mostly for simplicity, as well as cost optimization of the development
process.
You can find more great technologies that focus on microservices later in
this book and at https://github.com/mfornos/awesome-microservices.
To sum-up our challenge please find our notes on the pros and cons of
the microservice approach below:
Pros:
• Small teams can work in parallel to create new, and maintain current,
services. Many of you have probably experienced problems with working
in large teams, as we did.
4 https://www.nginx.com/blog/event-driven-data-management-microservices/
• Scalability - we can scale only the services that require it.
• Programmers have a lot of fun, so it’s quite easy to keep the team
motivated.
Cons:
Mobile Commerce
Magento checkout was integrated using REST API calls for placing orders.
Nowadays, all new open source products (and of course, not just
open-source) expose most of their features via API. It’s cool to focus on
the end client’s value (frontend) and not reinvent the wheel on the
backend.
5
http://pimcore.org - Enterprise grade Content Management platform, PIM and DAM
After the unspeakable NoSQL hype of about two years ago had reached its peak ("Why are you still working with relational databases?"), the topic of microservices was brought to the fore in discussions about back-end technologies. In addition, with React, Node & Co., the front-enders have
developed quite a unique little game that, it seems, nobody else can see
through. After about two years of Spryker, I have had the pleasure of
being able to follow these technical discussions first-hand. During my
time with the mail order giant Otto Group, there was another quite clearly
defined technical boogeyman — the so-called Host System, or the AS400
machines, which were in use by all the main retailers. Not maintainable,
ancient, full of spaghetti code, everything depended on it, everything
would be better if we could be rid of it and so on and so forth — so I was
told. On the other side were the business clowns — I’m one, too — for
whom technology was just a means to an end. Back then, I thought those
who worked in IT were the real hard workers, pragmatic thinkers, who only
answered to the system and whose main goal was to achieve a high level
of maintainability. Among business people there were, and there still are,
those I thought only busied themselves with absurd strategies and who
banged on about omnichannel, multi-channel, target group shops and
the like. Over the last eight years of Kassenzone.de, these strategies were
always my self-declared final boss. It was my ultimate aim to disprove
them and demonstrate new approaches.
This does sound quite promising and it can also help with the
corresponding problems. Otto’s IT team has already reached the
Champions League where this is concerned and produced the obligatory
article, called “On Monoliths and Microservices” on the subject. Guido
from Otto also referred to this topic at the code.talks event:
There are also other examples which benefit excellently from this
approach. Zalando is an example of a company which is open about using
it in “From Jimmy to Microservice”. The approach can also crop up for
quickly growing tech teams, such as that of Siroop.
The second example is quicker, that goes without saying. With larger models, where scaling is a rather uniform approach, the first example is better. Not much is different in the case of microservices. To understand this context better, Werner Vogels' (Amazon CTO) text on his learnings with AWS³⁰ is highly recommended:
30https://www.thoughtworks.com/insights/blog/monoliths-are-bad-design-and-you-know-it
³¹https://www.thoughtworks.com/insights/blog/monoliths-are-bad-design-and-you-know-it
³²http://blog.cleancoder.com/uncle-bob/2014/10/01/CleanMicroserviceArchitecture.html
Technically and methodically, a lot is said for the use of “good”
monolithic structures for a great deal of eCommerce companies, but
doing so requires a lot of effort producing good code, something which,
in the short term, you don’t have to do in the microservices world. If, then,
a mistake in the scaling arises, the affected CTOs would probably wish
they had the AS400 system back.
The founder of Basecamp has hit the nail on the head with his own
system, which he describes as “The Majestic Monolith”33. And, where
content is concerned, I’m with him:
It's a bit like companies who own an old bicycle, which they don't know how to ride properly, wanting a little too much. They see the unicyclist at the circus performing dazzling tricks on his unicycle and say to themselves: my bike is too old, that's why I can't ride it. I'll just start with a unicycle right away; at least that's forward-thinking.
³³ https://m.signalvnoise.com/the-majestic-monolith-29166d022228#.90yg49e3j
There are plenty of websites, blogs and books you can check to read more about microservices and related architectural patterns. The book "Building Microservices: Designing Fine-Grained Systems" by Sam Newman, published by O'Reilly Media (http://shop.oreilly.com/product/0636920033158.do), should be at the top of your list. The most important information has been collected in one place. It is all you need to know to model, implement,
collected into one place. It is all you need to know to model, implement,
test and run new systems using microservices or transform the monolith
into a distributed set of smaller applications. A must-have book for every
software architect. O’Reilly Media has also released another interesting
book, “Microservice Architecture” by Irakli Nadareishvili, Ronnie Mitra,
Matt McLarty and Mike Amundsen
(http://shop.oreilly.com/product/0636920050308.do), which is also worth
a read.
As you can see, knowledge is all around us. Don’t forget about Martin
Fowler and his “Microservice Resource Guide”
(https://martinfowler.com/microservices/). Martin is Chief Scientist at ThoughtWorks, the publisher of "Technology Radar" (https://www.thoughtworks.com/radar; highly recommended as well) and the author of a few bestselling books. Martin Fowler's wiki is a Mecca for software architects, and the "Microservice Resource Guide" is only one of its many resources…
www.divante.co