Stack Overflow Common Misconceptions

1. Interpreted Languages can be faster than compiled languages
2. C compilers are so good now that hand-written assembly is no longer any match.

How are people so stupid? And why do so many people up-vote the regurgitated responses?

> Why is reddit's younger brother retarded

>Interpreted Languages can be faster than compiled languages
Yeah what of it

>C compilers are so good now that hand-written assembly is no longer any match.
certainly true for the relevant architectures used in 99.99% of desktops, servers, and smartphones

But it's an unfair match: often the interpreted language makes direct use of the multicore architecture and is then compared against the same program in C running on a single thread. If the C version were written to use both cores it would completely destroy the interpreted language (see the sketch below), but people are too naive to realize this; they just note that the source codes look similar and call it a day.
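
Since this keeps getting argued in the abstract, here's a minimal sketch, assuming POSIX threads, of what "let the C version use both cores" actually means. The array size, thread count, and names like sum_range are made up for illustration, not taken from anyone's benchmark; build with -pthread.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define N        10000000
#define NTHREADS 2

static double data[N];

struct range { size_t lo, hi; double partial; };

/* Each thread sums its own slice of the array. */
static void *sum_range(void *arg)
{
    struct range *r = arg;
    double s = 0.0;
    for (size_t i = r->lo; i < r->hi; i++)
        s += data[i];
    r->partial = s;
    return NULL;
}

int main(void)
{
    for (size_t i = 0; i < N; i++)
        data[i] = (double)i;

    pthread_t tid[NTHREADS];
    struct range r[NTHREADS];
    for (int t = 0; t < NTHREADS; t++) {
        r[t].lo = (size_t)t * (N / NTHREADS);
        r[t].hi = (size_t)(t + 1) * (N / NTHREADS);
        pthread_create(&tid[t], NULL, sum_range, &r[t]);
    }

    double total = 0.0;
    for (int t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += r[t].partial;
    }
    printf("sum = %f\n", total);
    return 0;
}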

The point is to get maximum power per dollar, not to pay for more expensive premium-quality gas and then call the car faster.

Nope, very wrong. That's the naive conclusion that comes from assuming compilers make use of every instruction the architecture offers. They do not, and they never will, because that is insanely impractical to implement and optimize for. Modern compilers behave more like RISC compilers than ones that take full advantage of a CISC instruction set.
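
A concrete, hedged example of that: population count. The hand-rolled loop below is correct everywhere, but whether either form compiles down to a single POPCNT instruction depends on the compiler and on target flags such as -mpopcnt or -march=native; the builtin is GCC/Clang-specific and the test value is arbitrary.

#include <stdio.h>

/* Count set bits by clearing the lowest set bit until none remain. */
static unsigned naive_popcount(unsigned x)
{
    unsigned n = 0;
    while (x) {
        x &= x - 1;
        n++;
    }
    return n;
}

int main(void)
{
    unsigned v = 0xF00Du;
    /* Both print 7; what machine code each becomes is the interesting part. */
    printf("%u %u\n", naive_popcount(v), (unsigned)__builtin_popcount(v));
    return 0;
}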

>1.
JITed languages can be, for very similar code; you could argue the program is poorly written in that case, but really it's an environment issue.
>2.
This is mostly a development-time assumption, and it's false when we look at where people still write asm.

Why are you bringing threading into this? You're clearly not comparing apples to apples.

Nope, wrong again. I'd like to see a JIT language perform equally on a single-threaded machine. The point is that if the pre-compiled language were given the same number of cores to use as the JIT system (assuming one core dedicated to each environment) for some parallelizable computation, the JIT system would bite the dust.

You are misunderstanding. Threading is very relevant because an interpreted language can only beat compiled C if it is using more cores to boost its performance; it can never beat C in a single-threaded match. All of this assumes the C was compiled with a good compiler that makes good use of the CPU's instructions (like Intel's compiler).

Stack Overflow is shit; stick to the language standards.
Those are assumptions based on many variables.

>1. Interpreted Languages can be faster than compiled languages
with programs written by the average programmer
>2. C compilers are so good now that hand-written assembly is no longer any match.
with programs written by the average programmer

Interpreted languages are always slower because the programs run in an interpreter, which is extra abstraction. There is a performance penalty for it, even if it is incredibly small. In terms of real-world performance (which is probably what they were talking about) there is no meaningful difference between a Bash script and a C program for typical scripting tasks, and the interpreted program is actually better in this case because you can easily modify it without recompiling.

Interpreted languages are not and should not replace compiled ones. Use the right tool for the right job.

And as far as compiler optimization being good enough goes, that's only true to an extent. If you work on device drivers, boot loaders, or other low-level firmware, both compilers and various pieces of hardware have quirks that require hand optimization.
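
For anyone who hasn't touched that world, a compile-only sketch of the kind of code in question: poking a memory-mapped device register. The address, register layout, and names (UART_BASE, TX_READY, uart_putc) are invented for illustration; on real hardware they come from the datasheet. The point is that volatile, access ordering, and access width all matter here, and the compiler can't be allowed to "optimize" the accesses away.

#include <stdint.h>

/* Hypothetical memory-mapped UART; addresses are illustrative only. */
#define UART_BASE   ((uintptr_t)0x10000000u)
#define UART_STATUS (*(volatile uint32_t *)(UART_BASE + 0x00))
#define UART_TXDATA (*(volatile uint32_t *)(UART_BASE + 0x04))
#define TX_READY    (1u << 0)

static void uart_putc(char c)
{
    while (!(UART_STATUS & TX_READY))
        ;                           /* spin until the device reports ready */
    UART_TXDATA = (uint32_t)c;      /* each volatile write must really happen */
}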

Compilers also often can't fully figure out intent. Say you're reading something from one memory location and writing it to another, then incrementing the pointer by less than a full register width, reading that next location, and writing it too. The compiler won't realize it can do a single wide load, write the low part, shift, write the high part at a fixed offset, and only then increment the source pointer by twice the amount.

All the data will be in cache either way, but the second version reduces the number of loads.
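
A rough reconstruction of that pattern in C (names and widths assumed, little-endian target assumed): the first version does two narrow loads per iteration, the second does one wide load, writes the low half, shifts, and writes the high half at a fixed offset before moving on through the source.

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Two 16-bit loads and two stores per pair. */
void copy_halves_naive(uint16_t *dst, const uint16_t *src, size_t pairs)
{
    for (size_t i = 0; i < pairs; i++) {
        dst[2 * i]     = src[2 * i];
        dst[2 * i + 1] = src[2 * i + 1];
    }
}

/* One 32-bit load per pair; split it with a shift. Assumes little endian. */
void copy_halves_wide(uint16_t *dst, const uint16_t *src, size_t pairs)
{
    for (size_t i = 0; i < pairs; i++) {
        uint32_t w;
        memcpy(&w, &src[2 * i], sizeof w);       /* alias-safe wide load    */
        dst[2 * i]     = (uint16_t)w;            /* low half                */
        dst[2 * i + 1] = (uint16_t)(w >> 16);    /* high half, fixed offset */
    }
}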

The compiler also often won't see repeated bit-shifts and ORing and realize it can just use a BSWAP.
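
For reference, the idiom being talked about, hedged: recent GCC and Clang often do recognize this exact shift-and-OR pattern and emit BSWAP, but when they don't, __builtin_bswap32 (or inline asm) forces it. The test value is arbitrary.

#include <stdint.h>
#include <stdio.h>

/* Byte-reverse a 32-bit value with shifts and ORs. */
static uint32_t bswap32_manual(uint32_t x)
{
    return  (x >> 24)
         | ((x >>  8) & 0x0000FF00u)
         | ((x <<  8) & 0x00FF0000u)
         |  (x << 24);
}

int main(void)
{
    uint32_t v = 0x11223344u;
    /* Both fields should print 44332211. */
    printf("%08x %08x\n", bswap32_manual(v), __builtin_bswap32(v));
    return 0;
}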

Interpreted languages have more runtime optimizations available. The abstraction cost can be negated by optimization. Why don't you morons understand this simple thing?

posting cute girls as the OP image should be a bannable offense.
God, I'm so lonely.

Is that why Windows 3.11 is half the size of the average web page and twice as fast?

That isn't even something you would find on Stack Overflow. The thread would have been locked by a power-hungry mod as a duplicate, with a link to a question that has nothing to do with the one at hand, before any answers could even have appeared.

1 is a complete nope.
2 is technically correct, but the act of writing your code in asm gives you better insight into how the CPU does things, making YOU change your code to create a better overall result.

The interpreted code doesn't magically transform itself into something the CPU can run; there must be a JIT in between, and the JIT in many cases eats most of the processing time, especially if you're trying to make a magical JIT that optimizes the code to perfection.

>God, I'm so lonely.
This, but I agree. It's distracting seeing it repeatedly, and should be discouraged.

How many times do you think the JIT happens? Is code that is unused also JITed? Does interpreted code allow for hardware-specific optimizations? In a long-running complex system does the JIT even really matter?

>interpreted languages have more available runtime optimizations
no they dont

Every time you try some sort of "runtime optimization".
If you don't, you effectively have a self-compiling thing that ends up native by the end of the run, and not as well optimized, because you can't afford the multi-second passes a static compiler can.

You're pretty much asking about the benefit of -march=native. The answer is it depends, but for most programs it gets a 5-10% performance improvement. Subtract the CPU time used by the JIT and that drops to something like 3% on average. The value could also be negative.
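
To make the -march=native point concrete (the 5-10% figure above is the poster's anecdote, not something this snippet proves): a loop like the one below is the sort of code that flag affects. Built as plain cc -O2 the compiler targets a baseline instruction set; with cc -O2 -march=native it may vectorize using whatever the local CPU offers (AVX2, FMA, and so on). The function name is illustrative.

#include <stddef.h>

/* y[i] = a * x[i] + y[i]; a classic candidate for SIMD and FMA. */
void saxpy(float *restrict y, const float *restrict x, float a, size_t n)
{
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}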

To my knowledge the interpreter will re-JIT frequently, which is why you can change individual lines and continue execution and see it reflected.

>but for most programs it gets 5 - 10% performance improvement.
Do you have a source for this claim?

Personal anecdote. Google is probably better than me and my worthless experiences.

The point is that for some problems in some environments, an interpreted language can do optimizations that a compiled language can't. Nobody mentioned any particular preconditions, so your assertion that it's an "unfair match" because it doesn't take threading into account is irrelevant and myopic. OF COURSE a compiled language will be faster if we take out all optimizations from both and restrict our definition of performance to interpreting and executing the final machine instructions; then the interpreter is running a superset of the compiled code plus its own decoding. This is so trivially obvious I can't believe it needs to be said. Saying it's "not a fair test" is essentially saying the final resulting performance doesn't matter, and that what matters is a single [your favorite] level of abstraction's performance.

>an interpreted language can do optimizations that a compiled lang can't
Uh, like what? Some substantial difference between, e.g., a generic x86_64 subset and native code on a given chip?

>But it's false when we look at where people still write asm
Only if you don't know what you're doing, and if, like 99% of people, you fucking suck at it. A person who does know what they're doing and is proficient at assembly can and will craft better code than what a compiler sharts out, and in addition can write that code in about the same amount of time it would take in a higher-level language (which you would be writing in assembly anyway). It's not the 1980s anymore; you're not going to be exclusively using interrupts or in/out instructions.
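
As a hedged example of where hand-written asm still shows up (GNU extended asm, x86-64 only; the intrinsic __rdtsc() from <x86intrin.h> would do the same without raw asm): reading the time-stamp counter for quick-and-dirty cycle timing.

#include <stdint.h>
#include <stdio.h>

/* RDTSC returns the timestamp counter split across EDX:EAX. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ volatile ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start = rdtsc();
    /* ... the work being timed would go here ... */
    uint64_t end = rdtsc();
    printf("%llu cycles\n", (unsigned long long)(end - start));
    return 0;
}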