
Commit e799e19

Revert "Delete part3.md"
This reverts commit 5cd265b.
1 parent 25b0aff commit e799e19

File tree

1 file changed: +138 -0 lines changed

getting-started/part3.md

Lines changed: 138 additions & 0 deletions
@@ -0,0 +1,138 @@
---
title: "Getting Started, Part 3: Stateful, Multi-container Applications"
---

In [Getting Started, Part 2: Creating and Building Your App](part2.md), we
wrote, built, ran, and shared our first Dockerized app, which all fit in a
single container.

In part 3, we will expand this application so that it is composed of two
containers running simultaneously: one running the web app we have already
written, and another that stores data on the web app's behalf.

## Understanding services

In a world where every executable is running in a container, things are very
fluid and portable, which is exciting. There's just one problem: if you run
two containers at the same time, they don't know about each other. Each
container is isolated from the host environment, by design -- that's how Docker
enables environment-agnostic deployment.

We need something that defines some connective tissue between containers, so
that they run at the same time, and have the right ports open
so they can talk to each other. It's obvious why: having a front-end application
is all well and good, but it's going to need to store data at some point,
and that's going to happen via a different executable entirely.

In a distributed application, these different pieces of the app are called
"services." For example, if you imagine a video sharing site, there will
probably be a service for storing application data in a database, a service
for video transcoding in the background after a user uploads something, a
service for the front-end, and so on, and they all need to work in concert.

The easiest way to organize your containerized app into services is by using
Docker Compose. We're going to add a data storage service to our simple Hello
World app. Don't worry, it's shockingly easy.

## Your first `docker-compose.yml` File

A `docker-compose.yml` file is a YAML file that is hierarchical in
structure, and defines how multiple Docker images should work together when
they are running in containers.

We saw that the "Hello World" app we created looked for a running instance of
Redis, and if it failed, it produced an error message. All we need is a running
Redis instance, and that error message will be replaced with a visitor counter.

Well, just as we grabbed the base image of Python earlier, we can grab the
official image of Redis, and run that right alongside our app.

Save this `docker-compose.yml` file:

{% gist johndmulhausen/7b8e955ccc939d9cef83a015e06ed8e7 %}

Yes, that's all you need to specify, and Redis will be pulled and run. You could
make a `Dockerfile` that pulls in the base image of Redis and builds a custom
image that has all your preferences "baked in," but we're just going to point to
the base image here, and accept the default settings. (Redis documents these
defaults on [the page for the official Redis
image](https://store.docker.com/images/1f6ef28b-3e48-4da1-b838-5bd8710a2053)).

This `docker-compose.yml` file tells Docker to do the following (see the
sketch after this list):

- Pull and run [the image we uploaded to Docker Hub in step 2](/getting-started/part2/#/share-the-app) as a service called `web`
- Map port 4000 on the host to `web`'s port 80
- Link the `web` service to the service we named `redis`; this ensures that the
  dependency between `redis` and `web` is expressed, and these containers will
  run together in the same subnet.
- Our service named `redis` just runs the official Redis image, so it is pulled
  from Docker Hub.

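For reference, here is a minimal sketch of what such a `docker-compose.yml`
can look like (assuming the version 2 Compose file format; the image name
under `web` is a placeholder for the image you pushed in Part 2):

```yaml
version: "2"
services:
  web:
    # Placeholder image name -- substitute the image you pushed to Docker Hub in Part 2
    image: yourhubusername/yourrepo:tag
    ports:
      # Map port 4000 on the host to port 80 in the container
      - "4000:80"
    links:
      # Express the dependency on the redis service
      - redis
  redis:
    # Official Redis image from Docker Hub, with default settings
    image: redis
```
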
## Run and scale up your first multi-container app

Run this command in the directory where you saved `docker-compose.yml`:

```shell
docker-compose up
```

This will pull all the necessary images and run them in concert. Now when you
visit `http://localhost:4000`, you'll see a number next to the visitor counter
instead of the error message. It really works -- just keep hitting refresh.

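If you'd rather check from the terminal, a quick sanity check (assuming `curl`
is installed on your machine) looks like this:

```shell
# Each request should bump the visitor counter now that Redis is running
curl http://localhost:4000
curl http://localhost:4000
```
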
## Connecting to containers with port mapping

With a containerized instance of Redis running, you're probably wondering --
how do I break through the wall of isolation and manage my data? The answer is
port mapping. [The page for the official Redis
image](https://store.docker.com/images/1f6ef28b-3e48-4da1-b838-5bd8710a2053)
states that the normal management ports are open in their image, so you would
be able to connect to it at `localhost:6379` if you add a `ports:` section to
`docker-compose.yml` under `redis` that maps `6379` to your host, just as port
`80` is mapped for `web`. Same with MySQL or any other data solution; once you
map your ports, you can use your favorite UI tools like MySQL Workbench, Redis
Desktop Manager, etc., to connect to your Dockerized instance.
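
As a rough sketch, that addition under `redis` could look like this (though, as
explained below, it isn't needed for the app itself):

```yaml
  redis:
    image: redis
    ports:
      # Publish Redis to the host so desktop tools can reach localhost:6379
      - "6379:6379"
```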

Redis port mapping isn't necessary in `docker-compose.yml` because the two
services (`web` and `redis`) are linked, ensuring they run on the same host (VM
or physical machine), in a private subnet that is automatically created by the
Docker runtime. Containers within that subnet can already talk to each other;
it's connecting from the outside that necessitates port mapping.

## Cheat sheet and recap: Hosts, subnets, and Docker Compose

You learned that by creating a `docker-compose.yml` file, you can define the
entire stack for your application. This ensures that your services run
together in a private subnet that lets them connect to each other, but exposes
them to the outside world only as specifically directed. This means that if you
want to connect your favorite data management software to your data storage
service, you'll have to ensure the container has the proper port exposed and
your host has that port mapped to the container in `docker-compose.yml`.

```shell
docker-compose up      # Pull and run images specified in `docker-compose.yml` as services
docker-compose up -d   # Same thing, but in background mode
docker-compose stop    # Stop all running containers for this app
docker-compose rm -f   # Remove all containers for this app
```

## Get ready to scale

Until now, I've been able to shield you from worrying too much about host
management. That's because installing Docker always sets up a default way
to run containers on that machine. Docker for Windows and Docker for Mac
come with a virtual machine host running a lightweight operating system
we call Moby, which is just a very slimmed-down Linux. Docker for Linux
just works without a VM at all. And Docker for Windows can even run Microsoft
Windows containers using native Hyper-V support. When you've run `docker
run` and `docker-compose up` so far, Docker has used these solutions
to run your containers. That's because we want you to be able to install
Docker and get straight to the work of development and building images.

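If you're curious which host your engine is actually using, `docker info`
reports details about it; from a Mac or Linux shell, for example:

```shell
# Report the operating system the Docker engine is running on; with Docker for
# Mac or Windows this is typically the lightweight Linux VM, not your desktop OS
docker info | grep -i 'operating system'
```
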
But when it comes to getting your app into production, we all know that
you're not going to run just one host machine that has Redis, Python, and
all your other services. That won't scale. You need to learn how to run not
just multiple containers on your local host, but multiple containers on
multiple hosts. And that's precisely what we're going to get into next.

[On to "Part 4: Running our App in Production" >>](part4.md){: class="button darkblue-btn"}
