Everyone talks about how awesome these technologies are, but I'm skeptical
The whole point is to allow multiple devs to work on the same app and know whether or not a problem exists... or if some developer has used a special config that broke things.
But if these devs work together, wouldn't they have a VM with specific libraries, apps and config pre-set up, meant to be an exact copy of the server? That would let them upload their binaries to the VM, which runs on their host system, and test from there.
It's not only that. Deployment is faster. You can easily swap in a container running a new version of the application for a client without much downtime, just by keeping their configuration in the build script. Also, if there is a bug it's easier to debug, and automated tests are more reliable since the environment is identical every run.
Elijah Ward
>The whole point is to allow multiple devs to work on the same app and know whether or not a problem exists
what the fuck are you talking about, that is not the point at all. the biggest advantage is the removal of external dependencies: once you have a good container image you can run it anywhere.
Caleb Torres
You're asking why use containers when VMs exist. I'd like to counter that by asking why use a VM when containers exist. Go Google that and you'll have your answer.
Ayden Brown
with docker you're ideally defining the pre-setup in the dockerfile and then moving that around instead
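a rough sketch of the idea (package and binary names made up):

  # Dockerfile: the "pre-setup" lives here instead of in a hand-built VM
  FROM debian:bookworm
  RUN apt-get update && apt-get install -y libfoo-dev
  COPY ./app /usr/local/bin/app
  CMD ["app"]

everyone on the team gets the exact same environment out of docker build -t myapp . followed by docker run myapp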
Jaxon Hill
Containers are simply moving the complexity somewhere else instead of eliminating it. Prove me wrong.
Dominic Lopez
? that's literally their mission statement lol
it's just meant to offload environment setup into something automated
Oliver Torres
>but I'm skeptical
the only reason you are is because you don't know how it works. typical.
Robert Miller
What is Puppet?
Jace Williams
>wouldnt they have a VM with specific libraries, apps and config pre-setup, meant to be an exact copy of the server?
They do. There's a public Docker Hub, and you can stand up a private registry too. My company has a private one where we can pull the same pre-configured images that mirror our prod servers. Works great desu.
Easton Brown
I wanna add to the replies by saying that docker-with-k8s is also good for spinning up multiple copies of the same thing for load distribution.
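For example, with kubectl (image name made up):

  # three identical copies behind one service
  kubectl create deployment web --image=myapp:1.0 --replicas=3
  kubectl expose deployment web --port=80
  # scale up when load grows
  kubectl scale deployment web --replicas=10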
Thomas Wilson
Oh also, one big warning: if you're running Docker containers for a non-systemd OS on a systemd host, certain things get fucked up. For instance, you can't get coredumps from the container, because when a process crashes it looks at /proc/sys/kernel/core_pattern for where to put it. Since this is inside a container, the /proc/sys variables are coming from the (systemd) host, which tells the process to write it to some systemd-specific directory which isn't present in the container's filesystem. You can fix this by sysctl'ing kernel.core_pattern on the host, but it's a stumbling block.
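For reference, the host-side fix is roughly:

  # point core dumps at a plain path both host and container can handle
  sysctl -w kernel.core_pattern=/tmp/core.%e.%p
  # persist it across reboots
  echo 'kernel.core_pattern=/tmp/core.%e.%p' > /etc/sysctl.d/50-coredump.conf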
Asher Davis
The startup I used to work at used a Kubernetes cluster with Docker containers
Evan Rodriguez
Never used these, where do I start?
Carson Thompson
Can't you put a custom core_pattern file in the image?
Jason Fisher
>The whole point is to allow multiple devs to work on the same app and know whether or not a problem exists
No it isn't, it's because web devs are lazy shits who can't be bothered to properly manage dependencies.
Julian Hall
>once you have a good container image you can run it anywhere.
L M A O
It's definitely useful but it's far from the silver bullet retarded shills make it out to be.
Kayden Scott
It is a vastly superior experience to pretty much anything else out on the market.
Why the fuck would I want to use a VM instead? It offers no additional utility, and only complicates things, leading to useless jobs like more sysadmins.
Parker Powell
not the same thing.
Matthew Ross
It might be cool for someone who is just testing and pulling those images off Docker Hub, but recently I had to create my own images for a project and this shit is a pain in the ass. It's like doing things blindly, because the resultant container must run and auto-configure without manual intervention.
Ryan Cook
Cont.
It led me to do ridiculous shit like running another container just to populate the configuration into the Docker volumes, and only then running the real containers.
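Concretely, it looked something like this (image and volume names made up):

  # one-shot container seeds the shared volume...
  docker volume create app-config
  docker run --rm -v app-config:/config myapp-init
  # ...and only then can the real container start
  docker run -d -v app-config:/config myapp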
Cameron Perez
Because a VM is truly the same everywhere, as the hypervisor is the one that actually manages the abstraction. Docker has major differences when used on different operating systems because it does away with that abstraction layer and tries to handle everything itself. As a result a common solution (as in, I had to do it last time I used docker, and now the place I work in fell for the docker meme and ended up doing the same) is to run docker containers in a VM. The VM then handles creating a constant base environment, and then docker gives you all the benefits of containers.
Of course it's better than nothing, and of course it's great being able to install a dozen containers each with different dependencies into a single VM and never worry about dependency hell or anything (that's besides the other benefits like easy scaling etc.). But without that base system the containers can deploy from, you're gonna have pain.
Jacob Cruz
>must run and auto configure
So, things you should be doing anyway? Manual configuration, i.e. 'well, it works on my machine', is cancer in the world of production software.
Gavin Carter
>The whole point is to allow multiple devs to work on the same app and know whether or not a problem exists...
No. The whole point is that you can run your "server daemons" or test environment or whatever in as many instances as you want on a pool of hardware.
If you want to roll out an update, you can literally fork that into a new instance, patch that one, and if it works: BOOM, that's your new production thing. Or you can destroy the instance, take the past snapshot and update the actual production instance if there are data continuity issues.
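Roughly, with plain docker (names, ports and the health endpoint made up):

  # run the patched version alongside the old one
  docker run -d --name app-green -p 8081:8080 myapp:2.0
  # smoke test the candidate
  curl -f http://localhost:8081/health
  # if it works: promote it, retire the old instance
  docker stop app-blue && docker rm app-blue
  docker run -d --name app-blue -p 8080:8080 myapp:2.0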
Also, dependency management, ease of doing backups and what not.
Jose Harris
It's extremely close to this, depending on how it interacts with storage and so on.
I mean, do you have any problems running a VM elsewhere? A good docker container isn't really all that much more complicated than a VM image.
Landon Nelson
It's not hard if you understand what you are doing and know your application's dependencies. The infrastructure-as-code trend is not going away; if you can't adapt to it you'll be stuck in mediocre jobs.
Thomas Carter
Your stack should be VM -> K8s -> application containers.
I am not seeing the problem here. Docker is not a full replacement for VMs; your orchestration platform should be running on one of them at all times.
Arguments about a consistent base become moot once you have a stable K8s cluster. Then you only need to worry about one base OS for a K8s node running multiple applications, rather than multiple OSes, one for each running application.
Easton Richardson
I think they've updated it since, but when I used it about half a year ago there was simply no good way to connect to the host machine. Each host OS needed its own hacky workaround, so you couldn't just use a single Dockerfile and expect everyone to run it. That's just one example I came upon during a relatively simple project I used Docker for, and it's the kind of small thing that will completely trip your project up.
Justin Adams
>because the resultant container must run and auto configure without manual intervention
Probably because you've not been doing this before?
Did you think LDAP, Ansible, Salt, Chef, Puppet and so on were too hard for your company's deployments prior to doing containers? They're always a bit awkward; it doesn't really get harder with containers.
Elijah Carter
docker is gay. i ran one container in an EC2 instance that was just a simple node script, but the fucking thing kept filling up the hard drive every month.
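for what it's worth, the usual culprits are unbounded json-file logs and leftover images. something like this keeps them in check (image name made up):

  # cap the container's logs instead of letting them eat the disk
  docker run -d --log-opt max-size=10m --log-opt max-file=3 mynodeapp
  # clear out dangling images/volumes now and then
  docker system prune -af --volumes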
Charles Taylor
>Docker is not a full replacement for VMs, your orchestration platform should be running on one of them at all times.
I never claimed otherwise. You did though:
>once you have a good container image you can run it anywhere.
Nathan Gomez
>I think they updated it since but when I used it about half a year ago there was simply no good way to connect to the host machine.
It got a bit easier now that the address host.docker.internal is available, if that's what you mean, but it wasn't impossible to do at all.
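If memory serves, it's something like this nowadays (image name made up):

  # from inside a container, Docker Desktop (Mac/Windows) resolves this out of the box
  curl http://host.docker.internal:8000
  # on Linux (Docker 20.10+) you map it yourself
  docker run --add-host=host.docker.internal:host-gateway myimage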
Jaxson Carter
>It's not hard if you understand what you are doing and know your application's dependencies.
Yeah, but that's not how things always work. In my case it was "hey, get this system up, but I want it running in Docker."
As for the details: I had to build a custom icinga2+influxdb+grafana stack with Docker, but I could not just use official or publicly available images. I had to build it using the commercial base images their distribution (SLES12SP4) makes available through its channels.
What a pain. I understand there are some use cases but docker is not a silver bullet.
Yeah, but it's different here. With Docker it's something like: write instructions, build, test, scrap, write instructions again, build, test, scrap again... and you end up wasting 20x the time you would have spent had you just fired up a VM and installed and configured stuff as needed.
Logan King
>With Docker it's something like: write instructions, build, test, scrap, write instructions again, build, test, scrap again... and you end up wasting 20x the time you would have spent had you just fired up a VM and installed and configured stuff as needed.
That's basically how it ends up if you're not too competent with Docker and/or Linux and/or deployment of the enormous enterprisey stack that you're working with in that specific instance.
If you cannot translate the usual instructions in that installation tutorial into something that works on Docker, of course you're going to have some trouble - more than you'd have if you just deployed it on a VM where the tutorial applies, yes.
But you'd probably find the same to be the case with regards to installing an available RHEL package vs making a Gentoo ebuild yourself. The latter is maybe not AS easy, but still easy once you understand Gentoo, sure. But if you don't... yup, 20+ attempts?
Brayden Morris
I see your point and that's nice, but do we need to use retarded makeshift stuff like the way the images are built?
Let's say your application is simple and can run just by starting it; then great, Docker might work for you. Now take something like icinga2 that depends on multiple processes, so you have to add another layer of makeshift workarounds like supervisord. This application also needs to initialize all its data and API, and that needs user input, so the container has to be instructed to handle it (because you can't re-run that every time the container is recreated). There we added lots of shitty complex gears into the machine which weren't really needed from the start.
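To give an idea, the supervisord.conf ends up looking roughly like this (the icinga2 path is from memory, the second program is made up for illustration):

  [supervisord]
  ; keep supervisord in the foreground so the container stays alive
  nodaemon=true

  [program:icinga2]
  command=/usr/sbin/icinga2 daemon

  ; a second process the stack needs, name made up
  [program:cleanup]
  command=/opt/icinga/cleanup.sh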
I liked the idea of Docker pretty much like everyone else here, but that ended as soon as I had to build my own images. If these people knew that in the real production world they can't just docker pull random stuff and start working with it, they might also change their opinion.
Parker Fisher
Has Docker fixed their gigantic security hole called "every fucking thing runs as root and containers have root access to the host filesystem"?
I had to re-build a shitton of docker images so they didn't run as root in order to secure an OpenShift environment I was contracted to oversee.
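The rebuilds mostly amounted to this (base image name made up):

  # give the image an unprivileged user and drop root before the app starts
  FROM some-vendor-base
  RUN useradd --system --create-home appuser
  USER appuser

plus fixing ownership on whatever directories the app actually writes to.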
Jordan Brown
>but do we need to use retarded makeshift stuff like the way the images are built?
Of course that isn't perfect at all. Of course it would be far better to have well-honed, more structured tooling like Gentoo has with portage.
>so you have to add another layer of makeshift workarounds like supervisord
Can't immediately see why?
>because you can't re-run that every time the container is recreated
Hm? You can do that in your orchestration, or in the container if you prefer that approach, or other ways.
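An entrypoint that guards the init step, for instance (paths and the setup command are stand-ins):

  #!/bin/sh
  # run the one-time setup only on the volume's first boot
  if [ ! -f /data/.initialized ]; then
      /app/setup --unattended
      touch /data/.initialized
  fi
  # then hand over to the real process
  exec /app/server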
>There we added lots of shitty complex gears into the machine which weren't really needed from the start.
It's clearly not all perfect, but this far I don't entirely follow how it turned into a really annoying problem.
Wyatt Parker
>but it wasn't impossible to do at all.
Pretty sure it was. What would have been the cross-platform way, according to you?
Justin Green
>What would have been the cross-platform way, according to you?
Pass the hostname as a parameter in run.sh, run.bat, run.ps1 and so on?
Or better, give the Windows and OSX users a web interface to whatever it is.
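The wrapper really is about two lines (image name made up):

  #!/bin/sh
  # each platform's script passes its own idea of "the host" into the container
  docker run -e HOST_ADDR="$1" myimage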
Jayden Edwards
>Kubernetes
was it necessary to pick such a gay name
Ryder Long
Docker is convenient for deployment, but when you go from one host running Docker to Swarm/K8s the complexity fucking skyrockets. How do you manage it without it overshadowing your job of actually writing code?