>JavaScript is faster than C
What the fuck is this? Did I fall for the C meme?
Both were running in the same conditions
JavaScript is faster than C
JavaScript compilers can indeed optimize JS code to run faster than C in some specific cases, but I've only seen them do that for web-dev tasks. Maybe your program is secretly good for webdev or something? No idea.
i have noticed that javascript is severely underrated
java is in decline, phyton is in decline, javascript (node.js) is the language of the future
-O3
JS implementations are extremely overengineered since web browsers are synonymous with personal computing for most people. You also probably didn't compile the C program with -O3.
Keep in mind that JS is mostly fast in benchmarks though. In real world usage it tends to be a mess of bloated frameworks.
Ever heard of concurrency and parallelism?
That -O3 really helped a ton!!!
Can you share the code in text, I wanna try
Now do it with pthreads. Also, why are you using an object extension?
JS:
const limit = 10000;
function isPrime(n){
    for (let i = 2; i < n; i++) {
        if (n % i === 0) return false;
    }
    return n > 1;
}
let sum = 0, count = 0;
for (let n = 2; count < limit; n++)
    if (isPrime(n)) { sum += n; count++; }
console.log("sum of first " + limit + " primes = " + sum);
Node.js is single threaded dough
Ugh, read the sticky, retard. You best be trollin'
Wrong post, and not really.
try -ffast-math and -funroll-loops
gr8 b8 m8
cool, i wonder what node is doing
> for (let i = 2; i < n; i++) {
Not
> i < (n/2)+1
Gotta optimize stuff like that, cut your time in half.
That's not really the point, since it's present in both of them
Probably your mom.
The C solution takes 2.15 seconds and the JS solution takes 3.15 seconds to run for me
this does actually work, unlike the other suggestions, but I do agree with
C is coming out faster for me every time. Consider taking your computer to your local exorcist.
>i < (n/2)+1
Actually you only need to check i <= sqrt(n)
Meme aside, JS is better than Ruby/PHP and Python
this also helps the js code, reducing it to half of what c gets
I'm pretty sure he's trolling.
Anything is better than python.
$ time ./sumprime
sum of first 10000 primes = 496165411
real 1m27.60s
user 1m27.06s
sys 0m0.03s
man I finally found a task I can't outrun a Pi with on this thing
>phyton is in decline
it's not, because of the AI meme
I'm legit not.
the absolute state of GCC
$ time ./sumprime
sum of first 10000 primes = 496165411
real 2m10.79s
user 2m9.99s
sys 0m0.05s
Now replace both of them with sqrt(n) and see how they compare. My theory is that as you get both of them more efficient, JS will lose its lead because the JIT will have less time to optimize the hot loop.
#include <stdio.h>
...
for (int i = 2; i < n; i++)
    if (n % i == 0) return 0;
Are you on Windows with the Linux subsystem? If so, someone with a similar setup try it out, maybe there's something weird going on.
ITT
>what are compiler optimizations?
He's clearly ssh'd into a raspberry pi
Read the thread
Nice, C wins this race (pic related)
I am on a raspberry pi 3 B, not ssh, using a wired connection and a monitor
maybe it's because he's running it on arm and gcc on arm sucks dicks
could also be one of the cases where v8 blatantly cheats to look good on benchmarks
>where v8 blatantly cheats
How the fuck would V8 "cheat" at this?
Explain how V8 could be "cheating" at executing OP's code
>rasberry pi 3
arm performance tests are completely irrelevant.
It's not about languages, it's about compilers here, user.
Both your C and JS implementations are unoptimized, so the JS compiler here does a better job of optimizing the code than the C compiler does.
>arm performance tests are completely irrelevant.
That's nonsense. A very large and growing portion of the world's computers use ARM chips.
javascript runs better on arm than other languages
there, happy?
>growing
cool it's another "I didn't know what ARM was until it got me on Facebook" episode
by implementing optimisations specific to one task, meaning that performance on that single task isn't indicative of broader performance
Meanwhile this thread managed to get the code running in 5% of its original time
no you moron, only phones use ARM chips.
Since when is a pi a phone?
see you filthy ESL
So you're claiming that V8 has special runtime optimizations that let it detect when someone is attempting the specific task of inefficiently determining primality and correcting it in a way that C compilers don't because it would be cheating?
the word you are looking for is "high end embedded device"
a market which ARM has dominated for almost 20 years now
This may come as a surprise to you, but phones are computers, and are quickly becoming the most common form factor for computers.
Are you retarded?
HAHAHAHA
HE DELETED HIS COMMENT
WHAT AN ABSOLUTE FUCKING RETARD
lmao
social media and mass surveillance machine != computer
it iterates from the start - middle - and end, then meets up at where start and middle meets and middle to end meets
>benchmark thread
>no software version numbers in any post
>no hardware details in any post
fuck all of you brainless troggies
Au contraire, that's exactly what a computer is these days
I wish this shit board had more threads like this.
>windows UI on linux
>javascript
cringe
just use fucking windows
>using GCC
fucking Jow Forumslets are indeed jobless neets.
also
>what is compile time?
...brainlets
Said the one that POSTED A FUCKING FROG.
OP is just a dumbass who clearly doesn't know shit about either language
JS's JIT can use AVX and SSE versions later than SSE2 (unless it's a 32 bit machine, in which case it compiles for 386), gcc can't.
add -march=native to gcc
other thing to try is to change int to double, as JS numbers are doubles.
>(unless it's a 32 bit machine, in which case it compiles for 386)
uh, I meant that if it's 32 bit gcc compiles for 386 by default, so zero vectorization, but on 64 bit it targets sse and sse2 by default.
idk what the problem is, it runs in 1s on my machine
JS is probably serializing the work and doing it asynchronously while the C program is doing it linearly in a single thread.
In both cases it obviously depends more on the compiler and interpreter than the language itself.
clearly not, since real time is the same as user time
>JS is probably serializing the work and doing it asynchronously
You have no idea what you're talking about
What a useful remark.
Don't spout stupid shit that you've made up without any knowledge on the subject if you want useful remarks in return
>Au contraire
off yourself
What's "made up" about how the JS runtimes schedule work? Are you being ironic on purpose?
Absolutely fucking retarded post. You have no idea how JS works. And these are the same people who like hopping on the JS hate train without a second thought. /v/ crossboarders should be banned
>JS is better at implicitley doing work async
>these are the same people who like hopping on the JS hate train without a second thought
You're not making any sense.
clang
sum of first 10000 primes = 496165411
real 0m2,263s
user 0m2,263s
sys 0m0,000s
gcc
sum of first 10000 primes = 496165411
real 0m2,231s
user 0m2,226s
sys 0m0,004s
>Clang
Same shit
>not ICC
could you post disassembled isPrime function?
just gotta optimize bro
Node.js is the fastest programming language to date. C is actually compiling to Node.js during runtime which is why it is so much slower.
>What's "made up" about how the JS runtimes schedule work?
All of it. Nothing in OP's code is executed asynchronously, and it all happens in a single thread. There is nothing in that code that would invoke any non-sequential use of the event loop.
What windows theme is this? Windows UI on linux looks comfy as fuck.
huh?
You only need to check up to sqrt(n), incrementing by two at a time.
Checking for lowest primes (2, 3, 5) before running the loop is guaranteed to speed it up too. For 2 you can do a bitwise check.
Maybe try explicitly putting n into a register?
>Jow Forums discovers jit optimization
Ran it on my Raspberry Pi 3 (debian jessie, gcc 4.9 with -O2)
pi@pidev:~/code/tmp $ time node primes.js
sum of first 10000 primes = 496165411
real 0m8.046s
user 0m8.020s
sys 0m0.020s
pi@pidev:~/code/tmp $ time ./primes
sum of first 10000 primes = 496165411
real 0m19.493s
user 0m19.490s
sys 0m0.000s
99% of the time is spent in the software divide function.
pi@pidev:~/code/tmp $ gprof ./primes
Flat profile:
Each sample counts as 0.01 seconds.
% cumulative self self total
time seconds seconds calls Ts/call Ts/call name
68.81 11.87 11.87 __divsi3
31.19 17.25 5.38 __aeabi_idivmod
nodejs either has a much better divide function, or it's jit optimizing it to reciprocal multiplication (multiply by magic 1/2^32 constants)
not for unicode support.
( not OP )
I really like the windows classic theme, and I couldn't get Windows 10 to use it. It works on Linux, though.
¯\_(ツ)_/¯
does raspi have hardware fpu? maybe it does floating point operations
me too user
What did you try? I've been interested in getting it to work too.
I wish it didn't. This thread is middle school tier programming. At least it is better than ipajeet vs intlel vs amd vs which-browser-do-you-use-g threads
lol not a word wasted in this post and it totally btfo him, nice
Yes.
processor : 0
model name : ARMv7 Processor rev 4 (v7l)
BogoMIPS : 38.40
Features : half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer : 0x41
CPU architecture: 7
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 4
Was gonna say sepples to the rescue:
#include <cstdio>
#define limit 10000
constexpr int isPrime(int n){
    for (int i = 2; i < n; i++)
        if (n % i == 0) return 0;
    return n > 1;
}