C is objectively the comfiest programming language

C is objectively the comfiest programming language.

Attached: c programming langauge.jpg (491x648, 63K)

ok

>objectively
But user, C doesn't support OOP!

lol no atomics

C is not based

basic is the best

I'm trying to learn C but i keep procrastinating. All i know is standard input and output
hello world shit

why can't i get further than that
what am i going to use C for
fml

Attached: 1533495211025.jpg (250x243, 15K)

The only reason to learn C today, other than to go into embedded systems, is to use it as a base before going on to learn Go/Rust with a better understanding of the fundamentals than if you had jumped directly into either of them. That's what I'm doing, at least.

>managing memory
I really couldn't be bothered about such trivial shit.

Attached: 567-1.jpg (605x578, 52K)

>t. electron dev

>electron
Isn't that that app-in-a-browser-in-an-app-wrapper bullshit? If so, no. Now guess what: garbage collection is a thing.

Which is useless when you need time critical programs or are working with limited resources.

yea, i'm more into web dev honestly
but i procrastinate on learning php and mysql
and network security

yea don't bother with C then

Sure, but who the fuck does any of that?
>limited resources
In most domains, just throwing more money at that department (CPU/memory/...) is the cheapest thing you can do. For the time being.

>who the fuck needs time critical programs
you're clearly an idiot

Well, sure, but I figured most software being written isn't exactly time critical.

Most, sure, but you implied that none is.

opbp

use it to learn why/how the high level abstractions of other languages work. All the same templating/OOP/fp things in other languages can be put together in C, but that's just it- you have to make them yourself. Try using unions over arrays and bitfields to implement struct polymorphism.

Check out Adam Dunkels' work, especially protothreads. Lets you make massive state machines written like coroutines, and only has 2 bytes of overhead per "thread," which is pretty neat.

garbage collection is garbage user, shame on you

Attached: 1492832919745.png (1000x1400, 1.31M)

>garbage collection is garbage user
nah, memory leaks are garbage, faggot
>inb4 he knows what the shit he's doing

ok pajeet, lmk when Java isn't a leaky shit

Attached: 1493178253912.jpg (852x973, 475K)

>what am i going to use C for
take OS class.

Best for optimization, control, and speed. Also made me a much better programmer.

>Fucking around with minix
That was so much fun.

The -pedantic flag made me a better programmer

-Wall -Wextra -pedantic -Werror
Get even better

>he needs -Werror to motivate his greasy fat ass to actually fix the warnings

It's too complicated for me

Attached: 04_DSC_3446_Medium.jpg (2038x2862, 1.93M)

The JVM is so optimized nowadays that it straight up destroys C++ in many use cases. Get with the times, grandpa.

Attached: smug0498523490.png (1358x768, 803K)

write data structures to learn how they work
write performant emulators
do some arduino stuff

Garbage collectors can still leak memory

Anyone else here do (or did) document image analysis on these kinds of documents? Fun, fun, fun.

Sounds interesting. What's the goal?

Sure. But it's a lot harder to do it by accident, unless you have cyclic pointers and shit all over the place. And it's a bit easier to detect and fix too, I'd say. But I might just lack C experience in this regard (yeah, yeah, I heard of valgrind).

The goal? Fully automatic transcription. But there's a huge pile of problems ahead, since old documents tend not to be very standardized (fonts, the language itself, layout, ...), among many other issues.
Basically you follow a top-down approach (first you detect the layout, then smaller and smaller parts, down to the letter), or a bottom-up approach, starting with the smallest segments, glyphs, fragments, or whatever, and up from there... Hybrid approaches exist too, of course.
Either way, these days it's all about fancy neural networks.

>read some K&R
>write some code
>have no idea why it works
>mfw this is only chapter 1

>comfiest
>80% of the libs have no documentation
How am I supposed to enjoy my coffee?

Garbage collection is fucking slow. It's one of the reasons, alongside interpreting everything, that our incredibly fast computers run as slowly as they did at the turn of the millennium.
>Computers are fast enough. You won't even notice.
Not true. When every application is utilizing GC + other heavyweight runtime features, you do notice!
Check it. I think the solution is an RAII-style language with a 'gc' keyword. You'd only need the GC for shared, graph-like data structures, like a persistent queue or something. Everything else can be bound to lexical scopes. It could even have some borrow checker or something. The unsafe blocks would just be garbage-collected instead of being like C.

Attached: blowme.png (442x331, 223K)

Anyone saying that C is comfy never had to use it for work.

t. professional C programmer

You have absolutely no fucking idea what you are talking about.

GC is actually much, much faster than non-GC.
First, allocating in a GC language is just a few instructions (bumping a pointer). Second, de-allocation is done in a batch, so the average de-allocation cost is lower.

The only issue with GC, really, is that it makes analysis difficult for hard real-time programs that use dynamic memory allocation.

Second, nothing is interpreted, really. Just-in-time compilers can generate extremely optimized code.

In fact, canonical Java is faster than canonical C++ (using RAII and the STL), because C++ spends its fucking time doing housekeeping.

Only if you really know C++ and use custom allocators, move semantics, etc. (which 99% of C++ developers don't do) can you really beat Java.

I'm saying that as a current C++ and former Java dev.

GC doesn't just bump a pointer. What are you talking about? Every GC object needs at least a header.

Also, move semantics et cetera aren't very difficult to learn. The reason canonical Java outperformed C++ is probably bad programmers relying on std::shared_ptr. RC performs pretty badly in comparison to GC...

>You have absolutely no fucking idea what you are talking about
ironic

I am curious what you mean about bumping a pointer. I read TGCH and what you're saying doesn't really ring true.

You're retarded if you believe Java's BFS for resolving circular references is somehow less expensive than reference counting, and you're beyond retarded if you think that Java's garbage collection scheme is "bumping a pointer". It requires extensive bookkeeping, which is why execution is suspended when the non-deterministic garbage collector runs.

>RC performs pretty badly in comparison to GC...
It doesn't.

>allocating in a GC language is just a few instructions (bumping a pointer).
This is literally reference counting. You are thinking about reference counting.

>Second, de-allocation is done in a batch, so the average de-allocation cost is lower.
But the runtime is suspended while this happens, and this process is not always deterministic, which is why Java is never used for latency critical applications.

Learn C to learn Computers.


Learn C++, Java, C# to make software

Learn Python, Node.js, Ruby, Perl... to write scripts.

Learn Haskell to pretend you're smart

Learn Rust, Golang...etc if you're a retard.

This. I learned so much about linux in the OS class.
I recommend learning the basics of C programming for linux systems.
Take a look at fork(), parent/children processes, pthreads, ipc methods.
This way you can become good at writing C programs and learn more about *nix systems.

>Learn Python, Node.js, Ruby, Perl... to write scripts.
L M F A O
M
F
A
O

are there any scripting languages out there that can't be called programming languages?

Retard, C has atomics

But it does. You just have to write it yourself

All of them

so, where do you split them apart?

js has script in the name, python doesn't. both are scripting languages but are currently used to write actual programs

You don't understand anything you're talking about.

>>RC performs pretty badly in comparison to GC...
>It doesn't.
It absolutely depends. If you want RC'd references to be shared across multiple threads, you need to synchronize every time you copy one. Without granular control over which reference copies/frees incur an RC increment/decrement you can incur a greater impact on performance than a GC would.

>>allocating in a GC language is just a few instructions (bumping a pointer).
>This is literally reference counting. You are thinking about reference counting.
malloc isn't increasing a pointer. Allocation in a GC'd language is. Maintaining a free list or some other data structure whenever you allocate is much more expensive than incrementing your offset into the object nursery.

>Second, de-allocation is done in a batch, so the average de-allocation cost is lower.
But the runtime is suspended while this happens, and this process is not always deterministic, which is why Java is never used for latency critical applications.
While GCs rarely provide low latency, it can be good for throughput given enough memory.
It's quite simple. The cost of a collection is not a function of the amount of garbage produced, but of the amount of memory you have and the size of your live heap. With more memory, the time until a GC cycle is necessary increases, while the time taken by that cycle stays roughly constant, because your live heap size stays constant. The result is that garbage throughput scales with available memory.
This is unlike most deterministically freed memory management models where the time taken is a function of the amount of allocations performed (i.e. the size of the free list) and unused memory counts for very little.

scheme is comfier