Explain what docker is to a brainlet like me



Again? You made your thread two days ago.

>asking this fucking question again

brainlet

...

Thanks user. More answers would be appreciated

Docking is when two men link their penises with their foreskin.

Extremely fast and light Linux-based VMs that give you the exact same environment everywhere (prod, dev) for every programmer on your team.
Honestly, it's a game changer.
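
A minimal sketch of that "same environment everywhere" idea, using the Python Docker SDK (docker-py) purely as an illustration; the image tag below is just an example of a pinned image, not something from this thread.

import docker  # pip install docker

client = docker.from_env()  # talks to the local Docker daemon

# Pull a pinned image and run a command in it; the same tag gives you the same
# userland on a dev laptop, a CI runner, or a prod box.
output = client.containers.run("python:3.12-slim", ["python", "--version"], remove=True)
print(output.decode())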

It works on Mac, Linux, and Windows.
On Windows, as always, it's still shitty and unstable af tho, because the Winshit filesystem is extremely different from the Linux one.


Not OP, but what's better: running Docker containers in separate VMs, or running them all directly on bare metal?

why is it that some small company invents such great things?

they simplified a principle that already existed.

Also, nice try CIA niggers
>In April 2014, it was revealed that the C.I.A.'s investment arm In-Q-Tel was a large investor in Docker.[18] Docker has yet to respond to any questions about the nature of the investment, whether the spy agency requested any adulteration of the final product, and, if so, which requests were met.

A high-level tool that uses the Linux kernel's cgroups and namespaces to implement containers (like LXC, but higher level).
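
A toy sketch of the namespace-plus-chroot idea underneath containers, assuming you run it as root and have an extracted root filesystem at a made-up path; it deliberately ignores cgroups, layered images, and everything else a real runtime adds on top.

import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
CLONE_NEWUTS = 0x04000000  # new hostname (UTS) namespace
CLONE_NEWNS = 0x00020000   # new mount namespace

# Detach from the host's UTS and mount namespaces (requires root/CAP_SYS_ADMIN).
if libc.unshare(CLONE_NEWUTS | CLONE_NEWNS) != 0:
    err = ctypes.get_errno()
    raise OSError(err, os.strerror(err))

# Give the "container" its own hostname and its own root filesystem.
libc.sethostname(b"toy-container", len(b"toy-container"))
os.chroot("/path/to/extracted/rootfs")  # hypothetical unpacked image rootfs
os.chdir("/")

# Run a shell "inside the container".
os.execv("/bin/sh", ["/bin/sh"])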

Instead of running a full-on VM, it just loads up the runtime environment so you can run programs and shit, with easily configured environments.

they are not VMs tho

it's like a VM, but the hypervisor provides the kernel

Docker is to operating systems what version control is to source code.

You're right, but OP seems like a brainlet,
so I thought it would be easier for him to just write "VMs"


they are VMs on Windows

That's why you should use rocket instead.

who let you out of the mental hospital

Stop posting these fucking docker shill threads you turbo nigger kike. Everyone is on to your bullshit now. Docker is a horrible, leaky abstraction layer that is only propped up by newcomers getting brainwashed by dreamlike trances such as "muh version controlled environments". Spoiler: this is just autism trying to build a containment and categorization field around a concept that, by its very nature, implies a certain dynamic and unknown aspect which must ultimately be accounted for on an ad-hoc basis.

glorified chroot

does it use a lot more cpu and ram and hdd compared to normal?

Putting software into isolated containers that can be distributed and run as many times as you please.

Basically OOP for sysadmins.

Depends on what you are running. Generally no; on the servers you're running these on, you'd be running much the same stuff anyhow.

what if i want to run it on a laptop, let's say a thinkpad? would it fuck it up?

It's bloat placed on top of bloat so you can have a portable hello_world.exe. Nonetheless, you must first make the bloat supporting the bloat portable, which means you need a lot of code running on bloat, each layer enabling its own kind of bug. The more code, the more bugs, simple as.

In the end you have bloat on bloat on bloat so you can set up an easily customizable, upgradeable and portable Electron environment for your text editor webservice.

Thanks to this you introduce more bugs and waste a massive amount of ("""cheap""") resources (bandwidth, hardware lifetime, processor cycles, memory space... well, electricity) in order to save a lot of "expensive" developer and sysadmin time, so you can pay someone else to do something else and stay on the bleeding edge. You also introduce more failure points, since each piece of supporting bloat might fail, but also more security, since every bloat sits in its own bloated container, which means the evil hackers can't escape the bloat, except when they do.

Basically the point of docker is to waste cheap energy to save expensive energy. Too bad there is no formula to determine whether it really works out, or whether it destroys the Amazonian forest faster or not. Actually, the more efficient a process gets (for instance software deployment/development), the more pollution that process generates (see: Jevons paradox).

We can do more with less (except it's actually not "less", it's much, much "more"). The real sentence is "we can do more (docker bloat) with much more (faster CPUs, which are also extremely energy-hungry, plus jails, chroot, virtualization, etc.)".

In short, docker is shareholder-empowering technology.

t. gommie

Almost no extra cpu/ram, but it can take a lot more disk space.

It's a JVM, but for software deployment instead of software development.
Except it's an E instead of an M.
Except it's not J.
So it's a VE.
A P(ortable)VE.

It's shit and you don't need it.

Isn't it the other way around?

propose an equally easy-to-use alternative then, moron

So much misinformation ITT

Docker is containers. A container is a containerized OS that shares the hardware and kernel of the host while having its own filesystem. That's what makes containers lightweight and fast to spin up: you don't have to virtualize the hardware or the kernel.

It's good because you can create an image of your environment, which will work identically on every host when deployed as a container. No more "works on my machine" bugs. Also no more wasting hours setting up local development environments or production server deployments, because all the dependencies and files are in your Docker image. There's no need to install an application server, a particular version of Java, and some 3rd-party tools; all of those things are already installed in your image, and you can deploy the environment as a container with a single command.

Then there are orchestration capabilities, load balancing, custom network configurations that let you use DNS to reach your containers by name and across hosts, restart policies, health checking, scaling containers up and down via pods in Kubernetes or services in Swarm, and much, much more, all of which can be configured in a compose file.
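
A minimal sketch of that build-and-deploy workflow using the Python Docker SDK (docker-py), one of several ways to drive the Docker daemon; the directory, tag, port, and environment variable below are hypothetical examples, not anything from this thread.

import docker  # pip install docker

client = docker.from_env()  # connects to the local Docker daemon

# Build an image from a directory that contains a Dockerfile (path is made up).
image, build_log = client.images.build(path="./myapp", tag="myapp:1.0")

# Deploy the environment as a container with one call: publish container port
# 8080 on host port 80 and inject a config value via an environment variable.
container = client.containers.run(
    "myapp:1.0",
    detach=True,
    ports={"8080/tcp": 80},
    environment={"APP_ENV": "production"},
)
print(container.logs().decode())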

>On Windows, as always, it's still shitty and unstable af tho, because the Winshit filesystem is extremely different from the Linux one.

That's not the reason.

The reason is that they aren't VMs; they are shielded parts of the (Linux) host operating system.
This is why containers start in seconds and running programs in a container is pretty much as fast and efficient as running them natively (well, you are running them natively).
But on Windows you have to run a Linux VM first and then run the containers inside it.

>Docker is containers
>Docker is a product for building and running containers. There are others, like rkt, but Docker is the most popular one.

FTFY

Btw. does anyone know whether Docker requires systemd? Or does it run with other init systems?

Where did I say Docker was the only container solution? Reading comprehension. "Docker is containers" is a correct statement, that's what Docker is.

And no, it doesn't require systemd; it's completely independent of any init system, it can even run on Windows.

The point is that there is no easy way to do this with tooling alone. It's all about process, policy and standardization. Same fucking deal as when developers think they can buy a Jira license and solve all their project management concerns with some product off the shelf.

You're deluded if you think tooling doesn't make a massive difference.

Developers will follow the path of least resistance, and if that path is a highly opinionated one, it makes things easier for everyone to standardise on.

wtf i hate docker now
But seriously I agree

so basically it's retards who have never heard of static linking

Can somebody also explain Kubernetes?

i don't understand. so docker would be useless, let's say, if all the devs used linux mint?

>Developers will follow the path of least resistance, and if that path is a highly opinionated one, it makes things easier for everyone to standardise on.
Not talking about docker here, but more in general: if the path of least resistance is a highly opinionated one, it makes things easy for the 80% of use cases for which those opinions are a good match, while totally fucking over everyone doing something that does not fit with these opinions nicely. That is why it's important that pieces of standardized infrastructure be as unopinionated as possible.

Sysadmins are as lazy as they are incompetent.
Because sysadmins HATE working, they will usually base your entire infrastructure around "stable" operating systems, ones so stable they won't release a major update to any software for 15 years or so. That way, the sysadmins can watch YouTube and upvote their favorite reddit posts on the job.
Eventually the "stable" system they based the entire infrastructure around reaches its end of life, and the looming threat of having to actually upgrade something (OH MY FUCKING GOD, WORK) manifests itself. The sysadmins piss and shit themselves at the mere thought.
If they had used a regular operating system that shipped minor breaking changes every 2 years or so, the migration to the new version would have been merely a small annoyance causing minor problems. However, since the lazy sysadmins chose to do nothing for 15 years in the name of stability (read: laziness, incompetence, shortsightedness), they now have to catch up with 15 years of breaking changes in each and every package. Oh god. Oh no.
That's the point in time when the sysadmins usually abandon ship and look for another host company where they can do nothing. Some other unfortunate soul will have to do the upgrade from hell they brought upon the infrastructure. The sysadmin ain't workin', nuh-uh.

Sounds like a terrible situation to be in, huh? That's why most companies try to avoid it. And that's what docker is for.
But actually, it's just the sysadmins' new tool of ultimate laziness: it's not like they're ever actually going to upgrade the images their containers are based on... in the name of stability, of course. Expect Ubuntu 16.04 and CentOS 7 containers well into the 22nd century.
Oh, and then there's muh scaling. Muh scaling in the cloud.

I just got into using docker last week. Built a media server on Debian and each app is contained in its own sandbox (Plex, Deluge, Sonarr, Radarr, MiniDLNA). Once you get over the learning curve of the containers being isolated from one another (you have to configure them to work on the host's network or they won't see each other), it's super convenient. You can update them easily or nuke them completely if you fucked something up. 10/10 would use again. Now I don't work in IT, so I don't know half of the wonders you can accomplish with it on a large-scale production server, but it works for me and I'm a brainlet.
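
For what it's worth, another way to let containers see each other without putting them on the host's network is a user-defined bridge network, where containers resolve one another by name. A small docker-py sketch, with the network, container, and image names chosen purely as examples:

import docker

client = docker.from_env()

# Containers attached to the same user-defined bridge network can reach each
# other by container name, without sharing the host's network stack.
client.networks.create("media-net", driver="bridge")

client.containers.run("linuxserver/plex", detach=True,
                      name="plex", network="media-net")
client.containers.run("linuxserver/sonarr", detach=True,
                      name="sonarr", network="media-net")
# Inside the "sonarr" container, the hostname "plex" now resolves to Plex.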

it's a thing 90% of people don't need but keep trying to justify using
it's even been used in simple web development, where it's an example of ultra-overkill

No, the environment extends beyond just the OS.
There are environment variables, dependencies, and files too.

damn, being a sysadmin sounds like a sweet gig

you're not wrong, but I think docker is for webdevs who can't sysadmin.

t. sysadmin

you could try not to sound like a redditor faggot

Psst, hey guys, I've got a secret that'll help you...

Static linkage.

In short, docker requires root to run.
If you have a shared environment or untrustworthy users, you'd want to avoid giving everyone root on the system to run docker. So, kubernetes helps by letting people run docker containers without needing root.

Oh my keks. They even need bloat to implement basic ACLs. LMAO. LMAO.

Did a sysadmin fuck your mom or something? They don't like upgrading things because any little thing that changes or "goes wrong" in the users' eyes is their fault by default.

I don't use Docker, but from what I can tell, the point is to "containerize" applications and their dependencies in a virtual fashion similar to how a hypervisor works with VMs. These containers can run alongside each other in the same OS, or just on the same bare-metal hardware.

Because we need to make devs even lazier!

>the point is to "containerize" applications and their dependencies
did you know that that technology has existed since the first linker was invented? no need for docker!

>productivity is lazy

You'll have to upgrade your system eventually anyway, you know?
And it hurts much, much less to just upgrade small parts of it every couple of years than it hurts to do 20 years of breaking updates once.

Docker = sysadmin and dev paradise

Basically a light VM that you can spin up very fast anywhere. Your app won't break, the "muh works on my machine" meme doesn't exist anymore.


This is exactly what it is. Abstracting a system administration role into a declarative schema. The multitude of failures with this abstraction approach is quite apparent.

>the "muh works on my machine" meme doesn't exist anymore.
If your codebase is so poorly maintained that you can't have a consistent developer experience after installing a standard toolchain and cloning a repository, you have a much bigger fucking problem than how you are going to deploy that shitshow, you stupid nigger.

Sorry user, real life development processes are more complicated than that.

Try to code a complex app requiring multiple packages and tools, with several developers, and come talk to me after that.


Can you shut up? Static linking isn't always possible, and it does not guarantee backwards compatibility, especially with a static lib where you can have collisions with newer symbols.

The sysadmin part is spot on. I don't know why shitadmins have any authority whatsoever and aren't basically treated and paid like any other incompetent IT employee doing bitch work.

>It's impossible to make a complex codebase build successfully right after checkout on a reasonable range of anticipated platforms
I think you are just a lazy fuck, dude.

I don't think anyone is saying it's impossible my dude, docker just makes it hassle free.

Hi David

Brainlets. Most docker installs provide a docker group. Add your user to the group; no root required.

No, it just moves the hassle into a different place and amplifies it down the road. There is no free lunch. Until you accept the fact that a tool is not a silver bullet, you will continue to be deluded and fail at becoming the best developer or sysadmin possible. This is simply placing the burden on a different party. Accept the burden, embrace it, learn it, and master it. Don't fucking hide from challenges or you will never improve.

imo, a program that needs root and can do things like fundamentally change your containers and launch "privileged" containers should not be runnable by normal users. That's just not a good idea.

>Just reinvent literally everything docker, docker-compose and kubes do, bro. Like I can totally make a better container abstraction and cluster control software myself.

Ya no thanks kid.

By that logic everyone should be coding assembly because everything else is "hiding from challenge"

The entire point is that you don't need any of this stupid bullshit, so you don't have to "reinvent it" in the first place. These are just retarded systems you need in order to manage the bloat you introduced to manage your bloat. Learn to ask why, and then ask why a few more times. Get to the fucking root of why you keep piling these layers of shit on top of each other.

I have no idea what you're going on about. You always build new things on top of things that already exist; that's why everyone keeps piling up layers of dependencies, and that's why tools to manage those dependencies exist.
I don't know if you're being purposely dense or what.

OS-level virtualization technology, usually paired with clustering technology such as Kubernetes. Its focus is the "application" layer, meaning it doesn't give you a fully virtualized OS, just enough to run an application.

Yes, I work with docker.

Because I need scale and need to fully leverage the power of free software and other internal business units or opcos.

This is only attained by layering complex systems. You don't see a lot of normal bizware and websites written in low-layer software, for good reason.

>Static linking isn't always possible
static linking is possible for the entire set of situations where docker is usable. one example where neither is going to do you any favours is using the system opengl shared library; you have the same problem with either approach
>does not guarantee backwards compatibility
the only variable there is linux's STILL somehow volatile ABI, but again you'd run into the same problems with a container
>especially if it's a static lib where you can have collisions with newer symbols
what are you talking about? static linking is static linking

Docker is not an appropriate abstraction for managing diverse hardware environments efficiently. There is no way around this argument with the current approach being taken. If you have a Jenkins CI box and a set of AWS credentials, there is literally no reason you can't write a 20-line shell or PS script to manage everything Docker would in order to spin up an environment and deploy/start the application. You could even write a very simple web front-end to manage this deployment process internally, which could even be...
>*gasp*
>customized to your specific business needs
Docker is kid gloves compared to this sort of approach. It takes some time up front, but it's a one-time cost to get the infrastructure working. We did it at my shop in 5-6 weeks. No Docker in sight, and we have no issues deploying to any of our internal or customer environments. I merge to master from a feature branch, and my dev/QA servers see deployed binary updates within 10-30 seconds. Another press of a button and we deploy to a customer environment in another minute or so. It just takes a good development team and some willingness to plan and experiment with your processes and integrations.
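
A minimal sketch of the kind of short deploy script described above, assuming a prebuilt binary, SSH access to the target box, and a systemd unit to restart; the host, paths, and service name are purely hypothetical.

import subprocess

HOST = "deploy@qa-server.example.com"   # hypothetical target host
BINARY = "build/myapp"                  # hypothetical artifact produced by CI
REMOTE_PATH = "/opt/myapp/myapp"        # hypothetical install path
SERVICE = "myapp.service"               # hypothetical systemd unit

def run(cmd):
    # Fail loudly if any step breaks, like `set -e` in a shell script.
    subprocess.run(cmd, check=True)

# Copy the freshly built binary to the target, then restart the service.
run(["scp", BINARY, f"{HOST}:{REMOTE_PATH}"])
run(["ssh", HOST, f"sudo systemctl restart {SERVICE}"])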

ok. you're making the fatal assumption that docker is only about encapsulation of the runtime and dependency requirements of the application.

and static linking does not magic away the problem where a symbol resolved from a static lib gets chosen over an equally valid resolution of that symbol found elsewhere. I've seen this problem before in shitware that depends on regulatory-frozen, statically compiled shitcode alongside a software project that is still actively maintained.

ya, because your shitty deploy code is totally byzantine-fault-tolerant clustering like kubes.

you people are literally FUCKING children. please for the love of god shut the fuck up kids. go back to school and maybe get a fucking internshit or something.

>I don't understand Docker so it must be shit.
I'll be here staying stress free and having to concentrate on important issues.

this is way past getting out of hand. the "problem" that docker "solves" seems to come down to design flaws propagated down the line. we are painted into a corner by distros' autistic emphasis on dependency management and shared objects, then we come up with a solution to a problem that never should have existed: that dependency management is insufficient and that shared objects can turn distribution of binaries into a dizzying headache. when you combine all these irritations with the problem that linux is impossible to pin down (all distros are different), it makes standardized containers seem all the more attractive.

such is the fate of this industry: we just pile bloat on top of bloat on top of a flawed framework of bloaty bits.

Looks like we reached the end of the road on your argument :(

>that docker is only about encapsulation of the runtime and dependency requirements of the application.
that is the main selling point of the idea of containerization

>I've seen this problem before in shitware that depends on regulatory-frozen, statically compiled shitcode alongside a software project that is still actively maintained.
uhhhh m8, statically compiled binaries do no runtime symbol resolution. what you describe isn't possible.

A Complete Understanding is No Longer Possible

prog21.dadgum.com/129.html

jesus. fuck off you static linking shill.

the fact that you have none? anything you don't understand is bad, and instead you'd rather build shitty systems that aren't generalized, either out of your own incompetent hubris or your sense of job security. docker and kubernetes do more than you understand, and it's clear you have no clue what you don't even understand.

maybe if your shit code only has a few thousand dependents you can keep punting your nonfunctional shitware.

i'm talking about statically compiled libraries. you know, those .a archives, retard? there are many reasons something can't be statically linked; library licensing is one good reason.

>Docker is to operating systems what version control is to source code
Largely ignored?

>it doesn't give you a fully virtualized OS, but enough to run an application
You mean that the presented OS is not entirely virtual? How much is enough to run an application?

>And it hurts much, much less to just upgrade small parts of it every couple of years

Depends on the particular system. This might not be the same as servers, but I have several dozen PCs at work, and it is surely easier to re-image them once a decade than to upgrade every piece of software whenever a new version comes out, even with remote access and batch installs. Fewer dumb questions from the users, too.

>Btw. does anyone know whether Docker requires systemd? Or does it run with other init systems?
For the love of god, keep your arguments to one layer of the software stack at a time.

Nice review. Thanks.


>i'm talking about statically compiled libraries. you know, those .a archives, retard? there are many reasons something can't be statically linked; library licensing is one good reason.
you'll actually get the same issue with .la files at (shared) link time, then.

Stack? More like smelly, steaming heap.

Little reminder to all you nigger cattle that we wouldn't have any of these problems if the industry had migrated to something like Inferno OS for greenfield projects.

Don't call it a grave.

Mostly it's dumb shit for idiots who don't know how to manage dependencies. Basically it lets devs handle setting up their environment rather than ops. The typical result is tons of massive ad-hoc containers running with insecure libraries.

On the flip side, it's actually very useful for orchestration (automated deploy and scaleout to multiple machines) if used correctly. You just have to pay close attention to what's getting packaged. Using a container security audit tool is highly recommended.

single-system-image shit died a long time ago in favor of hyper-converged infrastructure for a good reason. people were doing "inferno os"-tier shit long before and changed over almost instantly to HCI once the ibm "virtualization" patents were up and intel announced vmx and the iommu.

static linking doesn't solve the problem of encapsulating program files and composing network services.

To statically link, you need the libraries and the compiler set up in the first place. Many projects include a `make build_in_container` target that performs static linking but keeps all the dependencies (libraries + compiler) in a docker image.
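
A rough sketch of what such a target might do, driven here through docker-py instead of make; the Go toolchain image, project layout, and flags are assumptions for illustration, not taken from any particular project.

import os
import docker

client = docker.from_env()

# Compile a (pure-Go) project into a static binary inside a container, so the
# host only needs Docker installed, not a compiler or any libraries.
client.containers.run(
    "golang:1.22",                                         # assumed toolchain image
    ["go", "build", "-o", "/src/app", "."],
    working_dir="/src",
    volumes={os.getcwd(): {"bind": "/src", "mode": "rw"}},
    environment={"CGO_ENABLED": "0"},                      # disable cgo for a static build
    remove=True,
)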

nope, I've been running Docker (with swarm) with OpenRC for years.