Your company is probably running Docker images with embedded Bitcoin miners

kromtech.com/blog/security-center/cryptojacking-invades-cloud-how-modern-containerization-trend-is-exploited-by-attackers


>implying I have a job

this, fuck off normie

My company is not running Docker.
The whole virtualization / container house of cards will collapse eventually. You don't fix issues by piling crap on top of other crap. The only result you get is a mountain of crap.

Who the fuck uses Docker?

The same people talking about the blockchain, I wager.


>using random docker containers

Am I the only person who feels like 70% of 'dockerized' projects exist because developers were too lazy to sort out dependencies and create a sane installation procedure?

100%
That's the only reason Docker exists in the first place.
The new generation of developers: Ruby/JavaScript developers targeting Linux from a MacBook.

t. 33 year old boomer

Exactly.
Also using shit like nodejs with its clusterfuck of dependencies does not make writing a sane install guide any easier.

>what is docker?


Look at LXD, it's far saner than Docker

A meme half-assed virtualization technology abused by companies that don't want to hire proper systems administrators.

t. NEET

>using docker


Nice argument, JavaScript dev.

en.wikipedia.org/wiki/Docker_(software)
Basically it's a way to have isolated Linux environments, called "containers". Those systems don't see each other.

IMHO it's misguided. The OS already provides isolation between processes, that's good enough for the processes of a single application. And if you want actual isolation, then it's better to have distinct systems.

It's similar to Java application servers, another bad idea.

The arguments for it are mostly convenience. It allows people to deploy quickly without having to think about dependencies. The problem is that they're building an absolute house of cards running on top of millions and millions of lines of code, in 20 different languages, pulling hundreds of packages from untrusted sources. The whole thing is a regression.
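To illustrate that convenience argument: the whole dependency story gets baked into an image at build time. A minimal sketch of a hypothetical Dockerfile (image tag and file names are made up, not from any real project):

```dockerfile
# Hypothetical Node.js app image: all dependencies are resolved at
# build time and frozen into the image, pulled from registries that
# the person deploying it typically never audits.
FROM node:18-alpine
WORKDIR /app
COPY package.json package-lock.json ./
RUN npm ci            # pulls hundreds of packages from npm
COPY . .
CMD ["node", "server.js"]
```

Deploying is then a single `docker run`, which is exactly why people stop thinking about what's inside.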

>Lxdddddd

It has been said multiple times, but if you run unverified code (in this case, Docker containers from random nobodies), prepare for the consequences.

More generally, a big issue is that a few ultra-large-scale companies like Google, Amazon, etc. popularize techniques and patterns that every clueless startup then tries to emulate, despite those techniques only being relevant at ultra-large scale.

It's the IT equivalent of your local grocery store trying to organize themselves as if they were Walmart. It makes no sense whatsoever but people believe that if they follow this path, their company will have the same success.

Afaik Docker is basically a BSD jail + an install script

I really doubt you understand anything about Docker or containers in general. It has nothing to do with JavaScript or any other language. It's not just a way to build, test, and deploy a big app to a single server or a cluster in one click; it's also about scaling your app on the fly, not caring about runtime dependencies and their conflicts, updating app versions on the fly, creating network links so that a subset of apps can connect to each other even if they're not on the same host, and securing the app with PID, network, and UTS namespaces, so that even if it's compromised, the rest of the system stays isolated.
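The multi-host networking point can be sketched with a Compose/Swarm-style file; service names and images here are hypothetical:

```yaml
# 'api' and 'db' share a private overlay network; 'db' is not
# reachable from anything outside that network, even across hosts.
services:
  api:
    image: example/api:1.0      # hypothetical image
    networks: [backend]
  db:
    image: postgres:15
    networks: [backend]
networks:
  backend:
    driver: overlay             # spans multiple hosts in swarm mode
```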

I know people doing "big data" with Cassandra DBs, clusters, etc.
When I ask them, the big data is "millions of rows". I can do that with SQLite on a 2GB VPS, why are you deploying 20 nodes?
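For scale, here's a rough sketch of the "millions of rows" claim: a million rows inserted and aggregated in an in-memory SQLite database, in plain Python, on a single machine. Numbers and schema are invented for illustration.

```python
import sqlite3
import time

# A million rows in SQLite on commodity hardware: the point being that
# "millions of rows" alone does not justify a 20-node cluster.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user INTEGER, value REAL)"
)

t0 = time.time()
conn.executemany(
    "INSERT INTO events (user, value) VALUES (?, ?)",
    ((i % 1000, float(i)) for i in range(1_000_000)),
)
conn.commit()
insert_seconds = time.time() - t0

count = conn.execute("SELECT COUNT(*) FROM events").fetchone()[0]
top_user = conn.execute(
    "SELECT user, COUNT(*) FROM events GROUP BY user ORDER BY 2 DESC LIMIT 1"
).fetchone()
print(count, top_user, round(insert_seconds, 2))
```

On a laptop this typically finishes in a few seconds, inserts included.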

it's not just about millions of rows, it's about the request load and how heavy those requests are

>creating network links that can connect a subset of the apps to connect to each other even if they are not on the same host
Yes, that's indeed what a network is for. Connecting things on different hosts. TCP/IP, invented in the late 70s, allows you to do that as well.
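For the record, the bare TCP/IP version of "connect things on different hosts" is a handful of lines. A minimal sketch using localhost so it runs anywhere (addresses and the message are made up):

```python
import socket
import threading

# Plain TCP/IP: one process listens, another connects; no container
# networking layer involved. Port 0 lets the OS pick a free port.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def handle():
    conn, _ = server.accept()
    conn.sendall(b"hello from host A")
    conn.close()

t = threading.Thread(target=handle)
t.start()

client = socket.create_connection(("127.0.0.1", port))
reply = b""
while True:                      # read until the peer closes
    chunk = client.recv(1024)
    if not chunk:
        break
    reply += chunk
client.close()
t.join()
server.close()
print(reply.decode())
```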

People are deploying architectures that are orders of magnitude slower than what a shared memory multiprocessor system can process, just because Google does it.

It's fine if you have exabytes of data to process. Otherwise you can go a very, very long way with a beefy multicore system, enough RAM and SSDs.

The only people I know who don't like Docker are butt devastated sys admins.

You don't get it. You can create many isolated networks that are independent of the apps running on the nodes inside them; if a node fails, the orchestrator creates another one immediately, anywhere else.

TCP/IP is the infrastructure and has nothing to do with what I am talking about

I understand docker. I just feel like 70% of the projects that use it have little or nothing to gain from it.

OK, Docker expert, here's something I genuinely don't understand.
If a node fails, it's because the hardware fails, so the whole thing fails: the Docker container, the host system, and all the other containers running on it.
So what is your system helping with exactly?

To help with system failure, in a non-container world, you do things like CARP or load balancing between distinct hardware systems.
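For example, classic load balancing between two distinct machines is a few lines of nginx config; the addresses below are placeholders:

```nginx
# Hypothetical nginx config: traffic goes to two separate physical
# hosts, so one hardware failure doesn't take the service down.
upstream app {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080 backup;   # used if the primary is down
}
server {
    listen 80;
    location / {
        proxy_pass http://app;
    }
}
```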

Basically.
In my experience, what happens is that developers use Docker for building and tearing down their test environments (probably one of the legit uses of Docker), but then when it comes time to deploy, instead of spending a week building out a whole environment that is not 100% like what they've been testing with, they just say "fuck it" and deploy their containers and are done in 5 minutes.

How do you prevent developers from deploying crap in production?

I used to work for a stock exchange, and the deployment was basically developers preparing a deployment procedure, QA verifying it and admins executing it later. How do you enforce segregation of duties in a containerized world? Can QA and admins review the containers to make sure developers don't do anything stupid or malicious?

a node here is not a complete host; a node is a container or a group of containers connected together. It can fail due to software errors or network failures


let's say you have a group of 6 nodes, and each node is a Go app that serves requests from the outside world. If one node goes down (the Go app exited abruptly, it's consuming too much memory and slowing down the whole host, etc.), the orchestrator can just remove the failing node and create a new one immediately

also, if you're planning a restart after an update, you don't want to restart all the nodes at once; you can orchestrate it so that at least 3 nodes stay up until you've updated them all
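A hedged sketch of what that looks like in a Kubernetes-style orchestrator (names and image are hypothetical): a replica count plus a rolling-update budget.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-app
spec:
  replicas: 6                  # orchestrator keeps 6 nodes running
  selector:
    matchLabels:
      app: go-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 3        # at least 3 stay up during the rollout
      maxSurge: 0
  template:
    metadata:
      labels:
        app: go-app
    spec:
      containers:
        - name: app
          image: example/go-app:v2   # hypothetical image
```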

That seems like the wrong approach.
If you really care about uptime that much, you'd use something like Erlang, or hardware redundancy.

And why would you run 6 Go nodes on the same machine? It sounds like an excuse to mitigate an app's resource leaks, bugs, etc. People run server software written in C (SQL servers, DNS servers, etc.) without needing something to restart it when it "fails".
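For what it's worth, the "something to restart them" part is a tiny amount of logic. A toy sketch (any real supervisor like systemd adds backoff, logging, and rate limits; the flaky task here is invented):

```python
# Toy supervisor: run a task, restart it on failure, give up after
# max_restarts. Real init systems do the same idea for OS processes
# instead of Python callables.
def supervise(task, max_restarts=5):
    restarts = 0
    while True:
        try:
            return task()
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise
            # a real supervisor would back off and log here

# A flaky task that "exits abruptly" twice before succeeding.
attempts = {"n": 0}

def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("abrupt exit")
    return "ok"

result = supervise(flaky)
print(result, attempts["n"])
```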

>If a node fails, it's because the hardware fails
I see someone here has no IT experience.

most probably it won't be a single machine, but that doesn't mean it has to be 6 machines


> People run server software written in C (SQL servers, DNS servers, etc.) without having something to restart them when they "fail".

you asked how a node fails; I replied that it's not just about hardware failures. It could be abrupt software exits, memory exhaustion, very high network latency, it could be anything

>embedded malware
I thought open sores was supposed to fix this...


Being open sores is how we found out about this.

>it could be software abrupt exits
That's my fear. When the software decides to exit abruptly.
Seriously, senpai, what are you even talking about? Maybe write software that doesn't leak or "exit abruptly" and you won't have that issue.

Hardware fails because it has mechanical parts that wear out. Software does not wear out. Catch exceptions, don't leak memory and you're good, your app can run for months.