Tfw finally figured out docker

>tfw finally figured out docker

Attached: 1 9hGvYE5jegHm1r_97gH-jQ.png (1240x992, 25K)

Other urls found in this thread:

github.com/moby/moby/blob/master/docs/rootless.md
github.com/jesseduffield/lazydocker
portainer.io/
swarmpit.io/
grafana.com/grafana/dashboards/893
github.com/stefanprodan/dockprom
github.com/veggiemonk/awesome-docker
serversforhackers.com/t/containers

Was it worth it?

Yes, in the sense that I can now better understand what people who use it are talking about.
No, in the sense that I don't really need to use it, since I'm the only one using my machine and no one uses it at work.
From my shallow understanding, it's a bastard child of python virtualenv and a small virtual machine.

it's fucking useless. What is the point when you can't have two containers sharing the same port without an additional service?

Isn't that shit proprietary?

That's some freemium thing from what I understand, but I don't care about licenses for "professional" stuff.
I'm paid, I write stuff, I don't ask questions.
I "learned" it in my free time though, which makes me a paid whore learning new tricks for free, I guess.

>From my shallow understanding, it's a bastard between python virtualenv and a small virtual machine
>figured out
JUST

You can't share the same port if they are running on the same server, dipshit. That's true for your physical machine as well. Also, DOCKER IS NOT A VM, IT IS A CONTAINERIZED APPLICATION.

Then correct me if you're so smart. Where am I wrong, as a first-order approximation?
It's just a virtual machine that self-destructs after completing its task. And the way it is compartmentalized is similar to virtualenvs.

>Multiple interfaces don't exist

Wikipedia says it's licensed under Apache, but it appears there are proprietary versions of it.

Doesn't matter how many interfaces you have, it's not a VM. Your physical machine can't have two applications listening on the same port. Ports don't listen on a certain interface; they listen on the OS.

It's not a VM, it's basically sandboxing an application. If you want an actual VM, use Vagrant or VirtualBox. People here still don't understand the concept of docker, sheesh

>It's not a VM, it's basically sandboxing an application.
Close enough

it's completely open source
you can pay for stuff like support, etc.

And don't sheesh at me you boomer.

I will, retard, I will

Docker depends mainly on Linux namespaces, a kernel feature that lets any normal Linux process see a customized view of OS resources; for example, you can override the network interfaces, file system, user IDs, PIDs, etc.

This makes any process essentially virtualized, while still running on the host OS/bare metal.

When you pull a docker image, you mostly download a file system that the dockerized process sees; it doesn't see your root or host file system. You can give it its own network interface inside a separate private network that runs inside your host OS, etc.

get it now?
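
If you want to poke at namespaces without docker, util-linux's unshare gives a minimal demo (needs root; the /bin/bash path is an assumption):

sudo unshare --pid --fork --mount-proc /bin/bash
# this shell sits in fresh PID and mount namespaces; from its view, bash is PID 1
ps aux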

>Ports don't listen on a certain interface
But they literally do. Technically an application listens on a socket; for standard TCP or UDP that means an IP:port combo. Commonly, one IP address maps to one interface.
>It's not a VM
I'm not sure how that's relevant here. Linux has so many different namespacing options that containers might as well be considered VMs for a lot of purposes. Virtual network interfaces are also a thing, leading back to my previous point.
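
A quick way to see that (assuming a Linux host, where the whole 127.0.0.0/8 range sits on loopback):

python3 -m http.server 8080 --bind 127.0.0.1 &
python3 -m http.server 8080 --bind 127.0.0.2 &
# two processes, same port, different addresses; both bind fine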

Wat, this doesn't work with your OS anyhow. Plus, if you do want virtual interfaces, you can use docker compose, which still needs docker, so that hardly makes docker useless.

if I want to run two web applications in two docker containers on one machine, I still need to configure a reverse proxy outside docker to proxy traffic to them. So what is the point? I may as well ditch docker completely

set up Apache and nginx to both listen on port 443 using two interfaces and then come talk to me. It won't work.

And even if you could pull that off (multiple interfaces), you'd have to make some serious changes to iptables, to the point where it would not be worth it...

So let's say you create an image with a tomcat, a java app and a database. Since AFAIK Docker is not a VM, where on the host is the database stored? And can you access it from the host with a DB client? What happens if you try to access it while the image is running?

reverse proxy, you idiot

You can do that with a VM you fucking retarded American

The point is to do away with the virtualization layer.

how do you think hosting SSL worked before SNI?

No, you can't, retard. l2containerize

luckily for you Jow Forums is the last place on the internet that lets you brag about your stupidity and ignorance

It's stored on the filesystem inside the docker container.

You can't access it when the container isn't running unless you've used a docker volume to mount it onto the disk, or its own named volume.

Put your reverse proxy in a container, idiot
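
One common sketch of that, using the jwilder/nginx-proxy image (app1-image, app2-image and the hostnames are placeholders):

docker network create web
docker run -d --name proxy --net web -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
# nginx-proxy watches the docker socket and generates vhost configs from VIRTUAL_HOST
docker run -d --net web -e VIRTUAL_HOST=app1.example.com app1-image
docker run -d --net web -e VIRTUAL_HOST=app2.example.com app2-image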

Whoops, misread that.

You can access it from the host OS so long as you've exposed the port, or have a route from the host into the virtual network. Depends on how you configure it.

>filesystem inside host
So what do you see from the host, then? A big file that only docker knows how to interpret?

Wrong. You can read and modify shared directories from the host

I didn't mention networking. Forget about network, just files. Where is the container data stored and how? In which format?

Didn't touch iptables
Clearly wildcard certificates lmaooo

Attached: Screenshot from 2019-07-29 16-59-32.png (1045x397, 16K)

>github.com/moby/moby/blob/master/docs/rootless.md
19.03 has rootless mode, which is 10x safer. Linking to moby because the readme is more up to date than docker-ce at the moment
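
The documented install is currently a one-liner (assuming curl; the socket path below is the default from those docs):

curl -fsSL https://get.docker.com/rootless | sh
# point the client at the rootless daemon
export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/docker.sock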

The container has its own filesystem following the normal Linux layout. You have a base OS for the container (say Ubuntu) that is super trimmed down, but still has the same layout. On disk it's stored as a stack of image layers plus a thin writable layer for the container, holding the base OS, any packages you've installed and the configuration of the container. For most purposes you can think of it as an image of a drive.

Docker can interpret this container, mount it and let you operate on it. When you create the container, you can create mount points. Let's say you want to have your config on the local disk so you can easily edit it: you can use -v /mnt/dockerconfig:/var/dockercontainerconfig and it will basically pass the directory straight through, so writes on the container OR the host are reflected.

The cool part about this is that the files on the host OS persist, so if you destroy the container, you can simply restart it with the -v flag pointing to the same place, and the files will persist. If you don't -v them out, the files in the container are lost when the container is destroyed.
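
You can see the pass-through in under a minute (alpine and /tmp/demo are just example choices):

docker run --rm -v /tmp/demo:/data alpine sh -c 'echo hello > /data/file'
docker run --rm -v /tmp/demo:/data alpine cat /data/file
# prints "hello" even though the first container is long gone
cat /tmp/demo/file
# same file, readable straight from the host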

I mean, depends on how you configure it. You can use docker cp if you know where things are, but you do need to know.

I run postgres in a container. So my docker run command goes like

docker run --name Postgres96 -v ~/dockerdata/postgres:/var/lib/postgresql/data -d -p 5432:5432 postgres:9.6

This will give the docker container a name, so I can use the name in commands instead of a hash value; it will mount the container's Postgres data directory into a dockerdata directory in my home directory; it will bind port 5432 from the container to my host OS; -d makes it run in the background; and it will use the latest 9.6.x version of Postgres.

If they release a new version of Postgres (say I'm running 9.6.5 and they release 9.6.6), I can do the following

docker stop Postgres96
docker rm Postgres96
docker run --name Postgres96 -v ~/dockerdata/postgres:/var/lib/postgresql/data -d -p 5432:5432 postgres:9.6

This stops the container, deletes it, then recreates it using the above variables. What's cool about this is that it grabs 9.6 latest, so it will automatically pull 9.6.6 and start up with the new version, and my files intact.

If I ran it as
docker run --name Postgres96 -d -p 5432:5432 postgres:9.6

I'd lose all my files when I docker rm'ed it.
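
And since 5432 is published to the host (answering the DB-client question above), any client on the host can connect directly, assuming psql is installed and the image's default postgres superuser:

psql -h localhost -p 5432 -U postgres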

jfc people, WATCH A TUTORIAL and READ THE DOCUMENTATION
stop assuming things because you created a virtual machine that one time
you'll hate every piece of technology if you don't understand it, because then you lose data and do stupid shit like the retards in this thread and blame the tools

Docker is fucking sensational.

Makes updates easy, config files are easy to backup and manage. I can easily customise the containers to add/remove shit. I can automatically add containers to the load balancer, terminate SSL and set up load balancing, in 2 seconds flat. It's an awesome tech, but you need a use case that justifies it. There's fuck all benefit to running Docker unless you have something to use it on.

Just like owning a hammer..

He's not wrong in calling it a weak VM. OpenVZ instances are traditionally called VMs even though it's actually OS level virtualization, and I can't help but call LXC instances "VMs" as well.
At the end of the day, Docker is giving you a fake view of parts of the machine and environment in order to perform isolation. This is what virtualization does. Containers are... Virtualization, or paravirtualization if you wanna get autistic.
The only thing docker brings to the table is a popular and opinionated ecosystem around how instances are managed from setup to teardown. But you can't use much of that ecosystem since it's about as secure as NPM.

you should use a more specific tag like 9.6.14 and upgrade by using the new tag. But you don't need to give a shit if you're not running production workloads.

Yeah, also Postgres hasn't updated 9.6 in years, so I don't care. I was merely illustrating the point.

I use specific versions for certain things, but otherwise I'm happy for it to upgrade on run.

It does bring some more stuff. If you build your containers correctly, there are huge memory benefits to using it. There's also a lot of saved overhead from not running a full VM for each instance. Speed of spin up/down is improved, and you can start serious orchestration using Kubernetes or whatever.

That being said, strongly agree with your statement on security.

So then redpill me on these "docker" instances. Let's say my application reaches 99% CPU usage, will docker expand? (i.e. AWS Docker)

You need additional management layers like Kubernetes, but yeah, it can.

You can configure all kinds of rules for it to use to spin up and down. Just depends on your needs.

Anything you can do with an AWS template, you can basically do with Docker + Kubernetes. You can then run these containers in whichever cloud environment suits you best.
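
As a taste, a CPU-based autoscaling rule in Kubernetes is a one-liner (myapp is a hypothetical deployment name):

kubectl autoscale deployment myapp --cpu-percent=80 --min=2 --max=10
# adds/removes replicas to hold average CPU around 80%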

Docker just uses the underlying host. If you need more CPU, either add more CPU to the instance (e.g. AWS instance type m5.large to m5.xlarge) or add more hosts. If you are using multiple hosts, you either manage them with configuration management (puppet/chef/salt/whatever) or use some sort of orchestration (AWS ECS, docker swarm, kubernetes).

>huge memory benefits
>saved overhead
As OpenVZ and LXC are in the same class of virtualization (early Docker versions even used LXC as the backend), they get the same low overhead, fast spin up/down, and shared memory benefits.
OpenVZ has been around since 2005, BSD Jails have been around since 2000, and they're also in the same category. This is not new technology, it's just been packaged in an interesting way and marketed HEAVILY.

It depends on your utilization and ability to parallelize. If the application can go horizontal across instances with load balancing on the front, then you can use an orchestration tool to automate bringing up and tearing down instances (like kubernetes, mentioned above, but there are others).
This is the benefit gained by the ecosystem I was talking about.

Say I create several apps and finish their docker compose file.
How do you suggest you keep track of the state of the containers? Should I use github.com/jesseduffield/lazydocker?
When I had several VMs in the past it was very tiresome.
I work with several different projects at the same time (during the whole day).

nice post user, can you explain chroot and compare it with docker?

chroot only changes a process's apparent root directory; it doesn't take advantage of the namespace and cgroup features the kernel has to offer.

What did it cost?

Everything

portainer.io/
or
swarmpit.io/
or
grafana.com/grafana/dashboards/893 (run this in containers with alertmanager setup)
or
github.com/stefanprodan/dockprom
you can probably find something here, too
github.com/veggiemonk/awesome-docker

I prefer portainer, personally.

Attached: 1504138614255.jpg (385x392, 25K)

that's called container orchestration, use google to search for it. subscribe and upvote
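
Though for just keeping an eye on state, plain docker gets you pretty far before you need any of that:

docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
# one-shot CPU/memory snapshot per container
docker stats --no-stream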

>tfw co worker gives you docker file and command to run and its all magic and you use it everyday for a year

Attached: 1296082274875.jpg (250x325, 37K)

I envy people who can just use something and not want/need to know how it works. I waste so much time trying to understand every part of a toolchain or dev environment when I should just copy paste and be done with it.

Docker and container are the two buzzwords of 2019.

>t. never used containers
it was a buzzword in 2013
docker sucks and its technologies were/are getting commoditized, but containers are here to stay because everybody is lazy and it solves that problem in a tolerable way

Are you on OSX? It works just fine for me on Linux and I know that it doesn't work on OSX because Docker for Mac is dumb.

>gets assigned the job to dockerize old code
>all I have is the source code
>no idea where to even start
Does the gradle image (what the source code uses to build) even fucking work?

Attached: Blivet.gif (720x720, 36K)

serversforhackers.com/t/containers
watch the free courses
Docker in Development Part I
Docker in Development Part II

>gradle
What even is that used for other than android apps?

The source code I have is Java, and the guy who wrote it used Gradle to build it. It's a server network monitoring application.

Attached: O RLY The Guy Who Wrote This Is Gone.jpg (500x700, 60K)

aren't containers just basically app images with all dependencies included? sort of like an apk.

>figuring it out when it's outdated

Attached: file.png (865x221, 40K)

yes 'docker' is commoditized but podman wants you to use kubernetes shit to replace docker-compose, FUCK THAT
and that python script is a joke, give me an official podman-compose bruvv

> figured out docker
Congrats! Now go for Docker-compose, k8s, Rancher, rkt, god kill me, k3s, minikube, kubeless, whatever else they'll introduce next year.

You can do that on the same interface with alias IPs; apache listens on IP:port, not Interface:port
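
For example (eth0 and the address are assumptions for your setup):

sudo ip addr add 192.168.1.50/24 dev eth0
# a second daemon can now bind 192.168.1.50:443 while the first keeps the primary IP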

>look up docker
>it's just a BSD jail
snoozefest

kek this. Also, containers weren't a new, or even relatively new, idea in 2013.

yeah, it's the top layers that matter, and the newer ones that were created to orchestrate/coordinate shit
people know about jails, lxd/lxc, chroot, and all the things related to containers, but docker one-upped them, and now it got one-upped back since we're heading toward standardized parts across the industry
and there's a huge overlap of technologies at the lower levels

you'll end up liking gradle more than maven in the long run

I want to use a single container for LEMP development. Any good examples on how people are doing this? Something like

- ~/projects/docker-compose.yml
- ~/projects/docker/nginx/
- ~/projects/docker/php7/
- ~/projects/docker/php7.4/
- ~/projects/www/api1
- ~/projects/www/api2
- ~/projects/www/site1

I just want example docker-compose files, not some "windows tier" kind of project, because I'm new to containers and I would like to see the best practices.

> >no idea where to even start
Pull a base OS image
Customize it in a Dockerfile, like 'RUN apt-get install program1 program2'
Build another image on top of that one, containing the application files, along with a persistent volume

I don't know if a persistent volume will be allowed in production. The company is thinking of using docker to build the JAR and deploy onto different monitoring boxes. I gotta figure out all the details
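
If the goal is really "docker builds the JAR", a multi-stage Dockerfile sidesteps the persistent-volume question: the build stage needs no volume and only the runtime stage ships. A sketch, assuming a standard Gradle layout (the image tags and paths are assumptions):

FROM gradle:5.5-jdk8 AS build
COPY --chown=gradle:gradle . /home/gradle/project
WORKDIR /home/gradle/project
RUN gradle build --no-daemon

FROM openjdk:8-jre-slim
COPY --from=build /home/gradle/project/build/libs/*.jar /app/app.jar
CMD ["java", "-jar", "/app/app.jar"]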

>tfw finally dared to try cloud foundry

Attached: Cloud-Computing-Sharks.jpg (645x550, 186K)

If I need a LEMP stack using the >exact< same packages as ubuntu 18.04 (to match production running on bare metal), would I have to use FROM ubuntu:18.04 and install everything inside a single container?

>LEMP
You should use more than one container for that. Docker-Compose has you covered (it's easy to set up).
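
A minimal sketch matching the layout posted above (image tags and paths are assumptions; php would run as php-fpm behind nginx):

version: "3"
services:
  nginx:
    image: nginx:1.17
    ports:
      - "80:80"
    volumes:
      - ./docker/nginx/conf.d:/etc/nginx/conf.d:ro
      - ./www:/var/www:ro
  php7:
    build: ./docker/php7
    volumes:
      - ./www:/var/www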

yeah, but this is a special case where I want the exact packages from 18.04
in this example shouldn't I just apt-get install everything? because I would be composing from the exact same base image
but I would get nodejs from another image and compose with that

One of the main points of using containers is that one process == one container (better isolation, not to be confused with better security). You can use ubuntu:18.04 and apt-install on top of that, and then use Docker-Compose to share networking resources between containers (bridge mode should be enough, after exposing the application port to the host).

If I learned docker well enough to confidently put it on my resume, what kind of job could I get?


thanks ill do that

Isn't it expensive to virtualize every process?

More than running them natively, yes. The OS needs to allocate resources beyond memory (network interfaces, filesystem mounts, ...). Think of containers as a way to deploy anything without having to download any dependencies onto your host machine, and to have the project running with a simple command. That makes it a powerful tool, because you can have a reproducible environment wherever it's supported (Docker is not the only one, but it's the best known and the de facto standard).

if I wanted to really have a deep intuition for containers and know how they work, where would I go?

the docks.
union's been on strike, it's tough, so tough

The official docs are a great start. You can practise by making a database container (Postgres/MariaDB/MySQL), exposing the internal port, mounting a volume to make data persistent... Everything I said is just adding args to a command, but it lets you have a little grasp of it.
After that, look for "Dockerfile" to build your own containers, and Docker-Compose for multicontainer deployments.
If you want to take it a step further, look for "orchestration for containers" and how Kubernetes works.
Notice how this goes from the most simple level (not the lowest) to higher levels.
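
That starter exercise is one command, e.g. with MariaDB (the password, host path and tag are arbitrary):

docker run --name mariadb-test -d -e MYSQL_ROOT_PASSWORD=secret -v ~/dockerdata/mariadb:/var/lib/mysql -p 3306:3306 mariadb:10.4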

This. This is the main difference between docker and hypervisor-based virtualization.

I'm glad I waited 6 years to learn Docker because most of the pieces are in place now. Rootless just got merged. Comfy.

devops or sysadmin

>sharing a port on the same machine
Okay, so a packet comes in over the network interface on the machine to port 1234, which docker container do you send it to?
This completely breaks how ports are supposed to work, just use a load balancer like a normal person.

Docker uses Linux namespaces to create a custom view of the /proc, /sys, and /dev file systems, and then chroots into the docker image to populate the rest of the file system (/bin, /usr, /home, /etc, ...).

chroot + tarball = docker
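
You can almost take that equation literally, minus the namespace/cgroup parts (alpine is just an example image):

docker export $(docker create alpine) > rootfs.tar
mkdir rootfs && tar -xf rootfs.tar -C rootfs
sudo chroot rootfs /bin/sh
# a shell inside the image's filesystem, no docker runtime involved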

Why would I use a container? I want to practice through implementation on my homeserver.

- Reproducible environment that you can run in one command without downloading any dependencies in your OS, except for Docker.
- Continuous integration (from testing to deployment).

Those are the pros. Of course, such pros come with cons:
- Containerized processes can perform worse than native processes (mostly around networking and storage).
- It's a pain in the ass when you try to debug anything inside a container, and you may have to try different networking configurations before getting it working as you'd like.

In the end, it's a tool that serves some purposes and can get the job done. I always recommend to try and make a database container as "hello world", and iterate from there.

Setting up new projects and dev environments with Docker is so fucking nice.

It takes too much disk space. I've only got a 256 GB SSD.

>on two different networks
why must you be this retarded, that's not the same, you idiot