docker: split structure into useful networks
I'm not quite sure about the correct usage of Docker networks.
I'm running a (single-host) reverse proxy plus the containers for the applications themselves, but I would like to set up networks like proxy, frontend and backend - the last one per project, e.g. for project1, since there could be multiple projects in the end.
I'm not even sure this structure is the way it should be done. I think the backend should only be accessible to the frontend, and the frontend should only be accessible to the proxy.
This is my current working structure with only one network (bridge), which doesn't make sense:
- Reverse proxy (network: reverse-proxy):
  - jwilder/nginx-proxy
  - jrcs/letsencrypt-nginx-proxy-companion
- Database
  - mongo:3.6.2
- Project 1
  - one/frontend
  - one/backend
  - two/frontend
  - two/backend
So my first docker-compose looks like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:ro
      - /var/run/docker.sock:/tmp/docker.sock:ro

  nginx-letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nginx-letsencrypt
    networks:
      - reverse-proxy
    depends_on:
      - nginx-proxy
    volumes:
      - /var/docker/nginx-proxy/vhost.d:/etc/nginx/vhost.d:rw
      - html:/usr/share/nginx/html
      - /opt/nginx/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/var/run/docker.sock:rw
    environment:
      NGINX_PROXY_CONTAINER: "nginx-proxy"

  mongodb:
    container_name: mongodb
    image: mongo:3.6.2
    networks:
      - reverse-proxy

volumes:
  html:

networks:
  reverse-proxy:
    external:
      name: reverse-proxy
That means I have to create the reverse-proxy network beforehand. I'm not sure whether this is correct so far.
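For completeness, creating that external network up front boils down to a single command (a plain user-defined bridge; the name just has to match the external network referenced in the compose file):

# must exist before docker-compose up, because the compose file marks it as external
docker network create reverse-proxy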
The project applications - frontend containers and backend containers - are created by my CI using docker commands (not docker compose):
docker run \
  --name project1-one-frontend \
  --network reverse-proxy \
  --detach \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest
How should I split this into useful networks?
docker reverse-proxy docker-networking
asked Nov 13 '18 at 17:00 by user3142695, edited Nov 16 '18 at 21:55
2 Answers
TL;DR: You can attach multiple networks to a given container, which lets you isolate traffic to a great degree.
useful networks
For context, I'm inferring from the question that "useful" means there's some degree of isolation between services.
I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
This is pretty simple with docker-compose. Just specify the networks you want at the top level, just like you've done for reverse-proxy:
networks:
  reverse-proxy:
    external:
      name: reverse-proxy
  frontend:
  backend:
Then something like this:
version: '3.5'

services:
  nginx-proxy:
    image: jwilder/nginx-proxy
    container_name: nginx-proxy
    networks:
      - reverse-proxy
    ports:
      - "80:80"
      - "443:443"
    volumes:
      ...

  frontend1:
    image: some/image
    networks:
      - reverse-proxy
      - backend

  backend1:
    image: some/otherimage
    networks:
      - backend

  backend2:
    image: some/otherimage
    networks:
      - backend

...
Set up like this, only frontend1 can reach backend1 and backend2. I know this isn't an option, since you said you're running the application containers (frontends and backends) via docker run. But I think it's a good illustration of how to achieve roughly what you're after within Docker's networking.
So how can you do what's illustrated in the docker-compose.yml above? I found this: https://success.docker.com/article/multiple-docker-networks
To summarize, you can only attach one network using docker run, but you can use docker network connect <network> <container> to connect running containers to more networks after they're started.
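A minimal sketch of that two-step flow, with placeholder container and image names:

# start the container attached to its first network only
docker run --detach --name some-frontend --network reverse-proxy some/image

# attach the already-running container to a second network
docker network connect backend some-frontend

# check which networks the container now sits on
docker inspect --format '{{json .NetworkSettings.Networks}}' some-frontend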
The order in which you create networks, run docker-compose up, or run your various containers in your pipeline is up to you. You can create the networks inside the docker-compose.yml if you like, or use docker network create and import them into your docker-compose stack as external networks. It depends on how you're using this stack, and that will determine the order of operations here.
The guiding rule, probably obvious, is that the networks need to exist before you try to attach them to a container. The most straightforward pipeline might look like:
- docker-compose up with all networks defined in the docker-compose.yml
- for each app container:
  - docker run the container
  - docker network connect the right networks (a concrete sketch follows this list)
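Applied to the docker run command from the question, that pipeline could look roughly like this (project1-backend is just an illustrative network name, not something from the question):

# one-time: create the networks up front so both compose and the CI jobs can attach to them
docker network create reverse-proxy
docker network create project1-backend

# bring up the proxy/database stack
docker-compose up -d

# CI step: start the app container on the proxy network...
docker run --detach \
  --name project1-one-frontend \
  --network reverse-proxy \
  -e VIRTUAL_HOST=project1.my-server.com \
  -e LETSENCRYPT_HOST=project1.my-server.com \
  -e LETSENCRYPT_EMAIL=mail@my-server.com \
  project1-one-frontend:latest

# ...then attach it to the project-specific backend network as well
docker network connect project1-backend project1-one-frontend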
answered Nov 19 '18 at 16:08 by bluescores
... would like to set up networks like proxy, frontend and backend. ... I think the backend should only be accessible to the frontend and the frontend should be accessible to the proxy.
Networks in Docker don't talk to other Docker networks, so I'm not sure whether the above refers to networks or to containers on those networks. What you can have is a container on multiple Docker networks, and it can talk to services on either network.
The important part about designing a network layout with Docker is that any two containers on the same network can communicate with each other and will find each other using DNS. Where people often mess this up is creating something like a proxy network for a reverse proxy, attaching multiple microservices to that proxy network, and suddenly finding that everything on the proxy network can reach everything else. So if you have multiple projects that need to be isolated from each other, they cannot exist on the same network.
In other words, if app-a and app-b must not talk to each other, but both need to talk to the shared proxy, then the shared proxy needs to be on multiple app-specific networks, rather than each app being on the same shared proxy network.
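A rough CLI sketch of that layout (the network and image names are made up for illustration; nginx-proxy is the proxy container from the question):

# one private network per project
docker network create project1-net
docker network create project2-net

# the shared reverse proxy joins every app-specific network
docker network connect project1-net nginx-proxy
docker network connect project2-net nginx-proxy

# each app only joins its own project network, so project1 containers
# cannot reach project2 containers, but the proxy can reach both
docker run --detach --name project1-frontend --network project1-net project1/frontend
docker run --detach --name project2-frontend --network project2-net project2/frontend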
This can get much more complicated depending on your architecture. For example, one design I've been tempted to use is to give each stack its own reverse proxy that is attached to the application's private network and to a shared proxy network, without publishing any ports. A global reverse proxy then publishes the ports and talks to each stack-specific reverse proxy. The advantage is that the global reverse proxy does not need to know all of the potential app networks in advance, while you still expose only a single port and don't have microservices connecting to each other through the shared proxy network.
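Sketched with the CLI, that two-tier layout could look something like the following; all names are placeholders, the per-stack and global proxies are shown as plain nginx containers, and their actual proxy configuration is omitted:

# shared network that only the proxies join
docker network create shared-proxy

# per-stack private network plus a stack-local reverse proxy;
# the stack proxy joins both networks but publishes no ports
docker network create stack1-net
docker run --detach --name stack1-proxy --network stack1-net nginx:alpine
docker network connect shared-proxy stack1-proxy

# app containers live only on the stack's private network
docker run --detach --name stack1-app --network stack1-net stack1/app

# the single global proxy publishes 80/443 and forwards to each
# stack proxy by container name over the shared-proxy network
docker run --detach --name global-proxy --network shared-proxy \
  -p 80:80 -p 443:443 nginx:alpine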
answered Nov 19 '18 at 16:23 by BMitch

That sounds very logical. I am not very experienced with reverse proxies. As I said, I'm using jwilder/nginx-proxy as a reverse proxy - also to get the Let's Encrypt certificates managed. So I am not sure how to set up a second proxy as you explained. Could you please show an example? Maybe a docker-compose modification of mine? I don't understand how to set up a second proxy between my current proxy and the frontend of my project 1, for example.
– user3142695, Nov 21 '18 at 17:03