Name one good reason all binaries shouldn't be statically compiled. Heh, don't look so stumped.

Attached: 1445109985593.jpg (1200x675, 99K)

for environments like iOS, where applications used to be statically linked and each took up a ton of space?

>Space
>Oh no! A vulnerability in my libc has been found, better recompile everything!

there's not really any good reason afaik

- difficult to keep libraries up to date
- large binaries take up more disk space and take longer to download/update
what's even the advantage?

>muh 1990s, but I only have 8 MB of RAM and loading a lib twice will prevent me from running TuxKart and XMMS at the same time
tl;dr: logic that's been outdated for decades, but """maintainers""" still dynamically link everything because otherwise they would be out of a job (hobby). just imagine if you could simply download a binary and run it without having to deal with dependencies; suddenly all the little maintainers who think of themselves as wizards would lose a good chunk of their e-peen

iOS running on underpowered hardware because Apple wanted to fleece you for every penny you've got by selling you outdated tech at obscene markups: yes. any other system: lol no

Absolutely this.

>this happened a lot
no it didn't.
also, you can't just push a "fixed" libc and expect it to just work, because that would bring a new libc version, which would mean all the maintainers would have to push a new version of their little projects because "it depends on a newer libc"
you will still have to compile/download a ton of software

>Tfw you have a C app using SDL2, you managed to pull a hackerman and strip out all the dynamic-lib nonsense, and you end up with a 100% statically linked binary, built by a direct, no-nonsense shell script that invokes gcc with the appropriate CLI args. No CMake/makefile cancer up in here. Oh, and I also have a .ps1 in there, because I got the thing building cross-platform on Windows too. Who'd have thought that would become trivial without all the dynamic-lib baggage. I'm convinced DLLs were invented just to slow us down at this point.
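(for illustration, the core of such a script might be as simple as the line below; main.c is a placeholder, and the exact set of extra static libs SDL2 drags in varies by platform, so treat it as a sketch rather than a recipe)
~> gcc -O2 main.c $(sdl2-config --static-libs) -static -o game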

Statically linked binaries are significantly faster, and more secure, since with dynamic linking replacing a library lets an attacker do shenanigans.

You don't know how dynamic linking works do you?

>replacing a library lets you do shenanigans
honestly, for debugging an application, dynamic linking certainly has its perks
being able to override a function to output debugging information, or to patch an application at runtime to do something it wasn't designed to do (like replacing its network functions with stdin/stdout so you can fuzz it), is pretty neat
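a minimal sketch of that kind of override via LD_PRELOAD (shim.c and the choice of connect() are made up for illustration):

/* shim.c - hypothetical interposer: log every connect() before calling the real one */
#define _GNU_SOURCE
#include <stdio.h>
#include <dlfcn.h>
#include <sys/socket.h>

int connect(int fd, const struct sockaddr *addr, socklen_t len)
{
    static int (*real_connect)(int, const struct sockaddr *, socklen_t);
    if (!real_connect)   /* look up the real libc symbol on first use */
        real_connect = (int (*)(int, const struct sockaddr *, socklen_t))dlsym(RTLD_NEXT, "connect");
    fprintf(stderr, "connect() on fd %d\n", fd);
    return real_connect(fd, addr, len);
}

build and run (older glibc also wants -ldl):
~> gcc -shared -fPIC shim.c -o shim.so
~> LD_PRELOAD=./shim.so ./target_program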

>security patch comes out for libjpeg
have fun downloading half your system again
~> yay -Rsc libjpeg-turbo
...
Total Removed Size: 5201.73 MiB
...

i could throw that right back at you with "muh 90's hdd random access time makes dynamically-linked executables take longer to load!"

attack surface

this is completely moot, as it would be fairly trivial to implement ASLR for statically linked binaries just as you can for dynamically linked ones; and the attack surface is identical either way.

>Update libc and only change the minor version
Almost no packages link against a specific minor version/patch level.
Regardless of whether something happens a lot or not, it's still a valid point. The improvements might not just be security fixes but performance increases, which _does_ happen occasionally.
The space argument for disk storage and RAM usage is still the stronger argument, though.

really stupid reply. why would you not use dynamic linking in a homogeneous OS environment (where you don't need a new copy of every .so everywhere)? this is akin to not caring about resource constraints when building software because the burden rests on the hardware/OS manufacturer.

>what is a binary manufacturer being forced to reprovide an entire binary because a crypto library has a vulnerability in it

btw if suddenly all linux applications became statically linked overnight, you would fix the package manager problem instantly, eliminating the need for snappy and all that shit

the term "attack surface" typically refers to the amount of data mapped into a process' address space as executable
the fact that recompilation is necessary for applying security fixes is not relevant to my argument.
it is entirely possible to recompile a single object before linking a binary, so the percieved impact of increased compilation time in statically linked binaries is completely illusory
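as a sketch with made-up file names: rebuild just the patched object, then relink against the already-built objects/archives; nothing else gets recompiled
~> gcc -c patched.c -o patched.o
~> gcc main.o patched.o libwhatever.a -o program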

>the package manager problem
i'm a linux user and what is this?

for the first point, i'm not the person you're replying to. but that being said, the term "attack surface" means the amount of the application that you can interact with, the surface in this case being the API/ABI quirks, etc.
the idea that recompilation is necessary when something statically linked needs a fix is a perfectly valid complaint about static linking, since i shouldn't have to redownload/rebuild every single binary when a problem crops up (exposing myself to human error in the process). the problem is that you still need to, compared to the alternative.

The original reasons were size and speed, since hard drives were small and the net was slow. Now, not so much. The security angle is interesting: not because injection attacks are as big a deal as they used to be, and digital signing is pretty good at stopping attacks on system libs, but this might be an interesting way to make return-to-libc and ROP attacks harder. But there's only one strong reason one way or the other: build times for massive projects. Even that would likely just be one compile and a relink of everything, and could probably be optimized further (maybe even with existing linkers).

So, no. My kneejerk reaction was that this was a stupid troll post, but I can't think of any really good reason off the top of my head.

>speed, size
another valid reason is that the scope of the optimisations the OS can provide is reduced when more than one source is being used for the libraries. but imho there just isn't a valid reason to use static linking over the alternative, other than providing a better experience for a user running, say, a chroot'd tool.

the amount of code you can "interact" with is whatever pages are mapped into the vaddr space as executable
i'm not sure what exactly you mean by being able to interact with the "api/abi quirks"
and having to download patches isn't a phenomenon restricted to statically linked programs; you've always had to do it, and always will.

Simple solution:

Start your own distribution using static linking
Live happily ever after

in Linux particularly, there is at the moment a wider range of mitigations available for dynamically linked programs; but that's simply the direction that GNU and other major compiler vendors have opted to take

in a plugin architecture you can link shared libraries at runtime with dlopen

that's the one good reason I can think of

...and if you are providing proprietary software this is the only way to allow clients to add custom modules without providing source code
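the host side of such a plugin mechanism is only a few lines; a rough sketch (plugin.so and plugin_init are made-up names, and on older glibc you'd add -ldl when linking the host):

/* host.c - hypothetical sketch of loading a plugin at runtime with dlopen */
#include <stdio.h>
#include <dlfcn.h>

int main(void)
{
    void *handle = dlopen("./plugin.so", RTLD_NOW);   /* made-up plugin path */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return 1;
    }
    int (*plugin_init)(void) = (int (*)(void))dlsym(handle, "plugin_init");
    if (plugin_init)
        plugin_init();                                /* hand control to the module */
    dlclose(handle);
    return 0;
}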

this abuse of the dynamic linker is not something software vendors are particularly fond of; they would likely rather you used some form of "officially" supported mechanism for implementing additional functionality in their programs

>...and if you are providing proprietary software this is the only way to allow clients to add custom modules without providing source code
No, alternatively you can provide a standard IPC mechanism and let clients write plugins as separate processes
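a rough sketch of that approach (plugin-helper is a made-up name): the host launches the plugin as a child process and reads its output over a pipe, instead of mapping foreign code into its own address space

/* host.c - hypothetical out-of-process plugin: talk over a pipe instead of dlopen() */
#include <stdio.h>

int main(void)
{
    FILE *p = popen("./plugin-helper", "r");   /* made-up helper program */
    if (!p)
        return 1;
    char line[256];
    while (fgets(line, sizeof line, p))        /* forward whatever the plugin prints */
        fputs(line, stdout);
    return pclose(p);
}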

Dynamically linked libraries should load faster, shouldn't they? On a cached drive the library could be cached and loaded without searching, and I'd imagine a Linux system can detect when a shared library is already loaded, since afaik the library is re-used in RAM (which, as I understand it, differs from DLLs, where each load is bound to a separate application)

You also don't have to recompile an entire damn application if the library changes. THAT'S the biggest advantage. From a free-software standpoint, shared-library-oriented practice encourages code re-use and sharing. From a consumer standpoint, there are cases where you want to hijack shared libraries for continued/fixed usage, even if it's against the original coder's wishes. An interesting example: the source headers for Unreal Engine 1's libraries were released, and people went as far as DX11+PhysX in Unreal 1. The community also released network patches and such. I don't think it was a DLL hack, but Red Alert 2 got a patch for internet play, since they A) took down the main servers and B) "local area" play didn't support normal LAN, only IPX

You'd make it far worse, since packages are maintained separately. If someone improved a library (performance, security, technological obsolescence), EVERY application would need to be recompiled. You'd have anime nerds bitching because players and encoders would need to be recompiled every time a video encoder/decoder library found a performance enhancement.

On that note, you get greater code diversification, because if you had to statically link your libraries, any accelerated library would need full system awareness to utilize the hardware at compile time. E.g. on the Raspberry Pi I can run an OpenGL/OpenGL ES library that gets dynamically linked into every application that requests access to the GPU via the OpenGL interfaces. If there weren't a shared library, literally every application would have to ship an "every platform" version

I'm convinced at this point that you are a shill for the RAM industry. Are prices falling again, is that why your boss is on your balls again?

>You also don't have to recompile an entire damn application if the library changes.
You don't, if you keep the other object files around, that is.

you seem to be confusing compilation and linking
you can recompile a single, or multiple, distinct objects even within a statically linked binary before linking

That's still "recompiling". Don't go all "muh compilation vs. linking" on me. We all know what the generic process of compiling involves, and you'd still have to do it manually instead of applications just grabbing the library and going. There are literally only three advantages to static linking: slightly more security, more performance, and restricting code freedom.

The third is not an advantage. Praise be unto Stallman.

this has already been fixed with guix

what's the advantage of downloading a new shared library vs an entire program, provided you don't live in Ghana?
in that case it'd be quicker to grab the patch and relink the program yourself, provided you have the source available

I was arguing against static linking and I personally find value in separating the process of linking vs compiling. That's why I quoted him and added my sarcastic statement.
My actual point being: Why would you keep all the object files around?

why would you opt not to?

>Exploit in some common library like zlib
>Every binary that was compiled linking zlib is now compromised
>Can't just update zlib, have to hope that each individual maintainer builds a new version that links against a new and patched zlib.

>what's the advantage of downloading a new shared library vs an entire program,
It might not make much of a difference at the scale of one, maybe a couple of programs. But let's say glibc gets a security patch: do you really want to recompile/relink almost every program on your system?
>why would you opt not to?
Because I don't compile all my packages? On a slow laptop I'm better off getting binary packages, which I don't want to redownload every time; see above.

dynamic linking is unix philosophy (modularity)

your system would be more secure, as there would be no reason to ship cookie-cutter libraries in your product. you could modify and cut libraries as you wish.

no, i'd just continue pulling down updates from my package manager; it's hardly an inconvenience
also, who gives a fuck if a vulnerability is found in a library used in a calculator or calendar application, unless your machine is hosting public services, or the binary has the SUID bit set, there is rarely a need to even bother.

if everything is statically compiled there is no reason to ship vanilla libraries.

???
As if every developer that uses some lib is gonna manually patch their own copy of it.

security vulnerabilities rely on a lot of assumptions. adding some extra parameters or removing unreferenced code can easily stop a security exploit dead in the water, or involve so much custom exploit work that it's not even worth it as an attack vector.

>Heartbleed tier vulnerability
>Company A uses statically linked binaries
>Company B uses dynamically linked binaries
>Both run ~100 different public-facing services that use the library in question
>Library update becomes public
>Company A has to wait for every package maintainer to relink all the binaries or rebuild them themselves, opting to either shut down their services and lose a lot of money or keep the vulnerable services running and risk getting exploited
>Company B replaces the library on their systems and seamlessly restarts their services

IPC is an alternative, but adds unnecessary overhead and complexity in many cases
in my experience, I have seen high-performance / low-latency trading software provide a plugin mechanism through dynamic linking
in this scenario you want to keep the number of processes at a minimum and be as monolithic as possible

then again this is the only good reason I can think of, and ideally you would roll your own statically compiled system

...except the vanilla libraries WILL have poorer performance, security, or hardware restrictions than newer ones. And more bugs.

If you dynamically link, you don't have to ship libraries at all, and you still benefit from improvements.

>want to listen to NES music
>your OS is compromised
hackaday.com/2016/11/15/a-linux-exploit-that-uses-6502-code/

>opens video

>Name one good reason all binaries shouldn't be statically compiled.
Security. Statically compiled binaries use fixed addresses for library functions, which makes it easier to attack them.

Address randomization has been pretty standard for a while now.

>and more secure
No, stop spreading misinformation. Static addresses are way more vulnerable for exploits.

>randomize address
>search for function machine code instead
Wow_it's_useless.trash

Are you fucking retarded? That only works for dynamically linked functions because the process loader is able to replace offsets in the symbol table.

Also, ASLR is a Windows feature and there isn't such a thing on Loonix.

>Also, ASLR is a Windows feature and there isn't such a thing on Loonix.
I seriously hope you're just pretending to be retarded

Attached: 1522582397241.png (645x773, 19K)

Except that you rely on the assumption that developers are going to ship modified versions of libraries, rather than implementing the tool they want in "their domain".
There are far more practical methods to achieve that kind of security through obscurity.

They're both vulnerable, in different ways. If you make an application's memory read-only, you can't corrupt its data on a properly secured *cough* 3DS *cough* system. With dynamically linked applications it's stupidly easy to compromise a DLL (while it's running, the file itself, etc). With static linking you remove two attack vectors: the inter-process call and file hijacking. On average, static linking is way more insecure, though, because there are ALWAYS bugs in how statically minded people do things. It's why Intel has like 800 active hardware bugs, and games are being hacked left and right.

Position-independent code is NOT true ASLR, you morons. And, like ASLR, it only works for dynamic libraries. Holy shit, you people are so stupid; I don't know why I'm even bothering coming here anymore.

I'm tired of people like you on Jow Forums spouting off about things they don't understand. Just take your stupid meme reaction pics and fuck off.

Of course ASLR only works for dynamic libraries as you need the PLT/GOT
That being said Linux has ASLR linux-audit.com/linux-aslr-and-kernelrandomize_va_space-setting/
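you can check the current setting like so (2 means full randomization):
~> cat /proc/sys/kernel/randomize_va_space
2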

KASLR has been in the Linux Kernel for over 4 years at this point.
You're retarded.

There are proposed solutions to make RELRO and ASLR work on statically linked executables. Of course, the easiest solution is to simply fix the gcc toolchain, but Big Dynamic isn't going to let that happen easily.
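fwiw, gcc grew a -static-pie option around GCC 8 that produces a statically linked position-independent executable the kernel can load at a random base. a hedged sketch (hello.c is a placeholder, and your libc has to ship the PIE-capable startup objects for it to link):
~> gcc -static-pie hello.c -o hello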

>Embedded Linux
>Let's waste RAM on loading the same code multiple times
Neck yourself.

Attached: 1528236907507.jpg (980x735, 163K)

why not? you could do it with some basic metaprogramming / compiler tricks and wouldn't even have to consciously do anything.

Yeah, and you know how KASLR works? Because of loadable kernel modules. Which is a huge argument against OP's "hurr durr use static linking for all binaries everywhere".

>Of course ASLR only works for dynamic libraries as you need the PLT/GOT
Yes, thank you. I'm just so freaking tired of people who do not understand how linking and loaders work going on about how static linking is the way to go, when in reality it's a remnant of the past that should be avoided at all costs.

Even better
>literally every system needs a compiler, or save on the compiler space by leaving over9000 object files around and downloading pre-compiled library blobs

Because developers are lazy.
No-one is going to want to keep their own slightly modified version of a core library up to date with upstream changes.
You're working under the assumption that people will care enough to do that.
Automated things like ASLR, NX and DEP are far more practical and effective.
Your suggestion wastes development time for minimal gain.

dynamic linking is fucking shit. a perfect example is python: actual applications are all statically compiled.

you could totally have ASLR in statically linked programs. not sure if ELF can support such a thing, but it's definitely quite doable.

Sure it's possible but who the fuck would even opt to do this if you could just use dynamic libraries?

What the fuck are you even talking about?

No, you could not "totally have" that. You need relocation tables in order to do that, and if you have relocation tables, surprise surprise, you have a dynamically linked program.

>No, you could not "totally have" that. You need relocation tables in order to do that, and if you have relocation tables, surprise surprise, you have a dynamically linked program.
I don't support the idea, but it's possible. You could have the binary carry a GOT/PLT, move the statically linked libraries into their own sections, and have them randomly mapped at runtime.
Sure it's retarded - which is why no compiler afaik supports it.
It's complete bogus, but possible in theory.

well yeah you'd need some sort of metadata within the binary but you won't need a full dynamic linker/loader

See below. It's possible, it's just that you're ending up with a crippled dynamic linking.

The literal only difference would be that the dynamic loader doesn't look in other files, you bundle everything into the same file, but you still load each section dynamically and offset them randomly.

It's literally dynamic linking, except you also bundle the dynamic libraries along with your executable, like we did in the 90s when every program bundled their DLLs in order to avoid DLL hell.

yeah pretty much

to allow bug fixes to libraries, a la libc.

that brings up another downside to statically linking everything: it encourages devs to copy what they need from libraries into their own codebase, and that copy will very likely not be kept up to date

man, that article is full of mistakes
>It contains a scripting language
no, NES music is machine code, a "chip tune"; the music is literally software that drives the sound hardware
>Rather unbelievably, his plugin works by emulating a real 6502 as found in a NES to derive the musical output
1. it's not unbelievable at all, it's a literal requirement; 2. the NES doesn't use a stock 6502, technically it's a 6502 clone that is more or less the same but not completely compatible; and 3. the CPU core doesn't generate the sound itself, that's done by the APU integrated into the 2A03 alongside it

>>Oh no! A vulnerability in my libc has been found, better recompile everything!
Who needs dynamic linking when you can just partially link your program and statically link the external dependencies at load time?

because recompiling the whole modern clusterfuck of software on my machine would take a few days, and I don't even have enough RAM to compile it

last I heard bionic is a beast

>>the package manager problem
>i'm a linux user and what is this?
Object lesson in the many blind spots in the freetard psyche.