/dpt/ - Daily Programming Thread

Old thread: What are you working on, Jow Forums?

Attached: 1539852316035.jpg (1000x787, 142K)

Other urls found in this thread:

en.wikipedia.org/wiki/X32_ABI
en.wikipedia.org/wiki/CHIP-8#Opcode_table
youtube.com/watch?v=4jh94gowim0
api.jquery.com/jquery.ajax/
twitter.com/SFWRedditImages

I'm really struggling with C pointers. I think I get like 80% of it, but the syntax is confusing.

To declare a pointer it's:
int* x;

To set a pointer to the address of something it's:
int y = 10;
x = &y;

But then to access the contents of y through x, I do:
printf("%i", *x);


Is that all correct?

If I were to declare a string:
char* s = malloc(10 * sizeof(char));
strcpy(s, "HELLO"); // Do I need to set the null byte?


I can access each element of the string with:
for(int i = 0; i < strlen(s); i++)
{
printf("%c", s[i])
}

All still OK?


I posted my reply to this in the last thread, but since you posted in this thread, I'll put it here too:

>* on the left side
Blasphemy. Put it on the right side, like int *x;
You might wonder why I'm bringing up a stylistic choice, but there is a reason: declaration follows usage, so since you use a pointer like *x, you declare it like *x.
It's much more consistent and can help you understand/remember the syntax a bit better.

Otherwise, your code is all correct.
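For reference, here's all of that assembled into one compilable program (my own arrangement, with the free() you'll eventually want):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    int y = 10;
    int *x = &y;             /* x now holds the address of y */
    printf("%i\n", *x);      /* dereferencing x prints 10 */

    char *s = malloc(10 * sizeof(char));   /* sizeof(char) is always 1 */
    if (s == NULL)
        return 1;
    strcpy(s, "HELLO");      /* strcpy copies the '\0' for you */

    for (size_t i = 0; i < strlen(s); i++)
        printf("%c", s[i]);
    printf("\n");

    free(s);
    return 0;
}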

750 million pointers on a 64 bit system is 6 gigs right there. Then you have the size of each line.

Wait, now I'm second-guessing my own code. When do I need to printf with the &?

>When do I need to printf with the &?
Usually never, except sometimes for generally useless shit like %n or %p.
scanf is where you'd usually see the &.
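e.g.:

int y;
scanf("%d", &y);             /* scanf writes into y, so it needs y's address */
printf("%d\n", y);           /* printf only needs the value */
printf("%p\n", (void *)&y);  /* the %p case: printing the address itself */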

Not him but if the actual data is less than 2 GiB, there's the option of compiling it as x32 (amd64 code with 32-bit pointers)
en.wikipedia.org/wiki/X32_ABI

There are probably many programs that could benefit from this, especially games. x32 programs tend to be faster than both amd64 and i386.

i < strlen(s)
store the length in a variable instead of doing this; as it stands you call strlen on every iteration of the loop, which is unnecessary
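e.g. (same loop, length hoisted out):

size_t len = strlen(s);      /* computed once instead of every iteration */
for (size_t i = 0; i < len; i++)
{
    printf("%c", s[i]);
}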

*4 GiB

I realise that, just for brevity of the code here.

Do I need to set the null byte btw? Or is that done because I'm copying a string?

>Do I need to set the null variable btw?
No
>Or is that done because I'm copying a string?
Standard string functions generally terminate the resulting string with a null byte. The only exception I know of is strncpy, and even that only produces a non-null-terminated string if the string being copied is longer than the size argument you pass it.

strcpy will set the null byte for you.
I would actually recommend staying away from strcpy, strncpy, strcat, and strncat, as they're often cumbersome to use "safely" and pretty easy to mess up.
I would recommend exclusively sticking to snprintf, which is just a much better string function.
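A quick sketch of what that looks like in place of strcpy/strcat (buffer name is mine); snprintf never writes past the size you give it, and always null-terminates as long as that size is > 0:

#include <stdio.h>

char buf[10];
snprintf(buf, sizeof buf, "%s", "HELLO");        /* "copy": truncates if too long */
snprintf(buf, sizeof buf, "%s%s", "HEL", "LO");  /* "concatenate" */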

hello fellow Rustaceans and Gophers :DDD

* on the right can make it ambiguous whether the asterisk is part of a declaration or a dereference when several variables are declared/assigned on one line, if you skim the code quickly.
for example:
int foo, *bar = baz;

The safest style is floating asterisk on declaration

int foo, * bar = baz;
/*...*/
*bar = foo;

Nothing you said made any sense.

C declarators are fucking shit

For me, it's C++.

Note that if s is known to be non-volatile const, it is safe for the compiler to optimize the repeated strlen calls away.

A couple of guys I work with were talking about using SSRI antidepressants as a way to increase concentration and programming ability. Basically using it like you would use Adderall or dextroamphetamine, but it just isn't that powerful.

Is this common? Does this work?

Attached: 1522552466817.gif (400x418, 397K)

don't do drugs

Attached: 1541099310286.gif (300x300, 1.91M)

dont be boring

I'm in Computer Science right now and I just want to ask the people who are out there doing actual programming in programming jobs.
Do you ever do algorithm analysis using big-O, Theta, or omega notation? If yes, how often, and is it a big part of being a programmer?

what... reading the common side effects, it's the direct opposite: brain fog. Never heard of them increasing concentration.

Why are you studying CS if you want to work as a programmer? You should study Software Development instead.

people who do drugs are boring, especially stoners

>no u
are you on drugs or what? that's the best you could come up with?

but it's literally true

I'm not in US. Here in my university what we call "CS" is a mix of traditional CS and Software Engineering.

But if not programming, what are CS graduates supposed to work with?

in fact if anything you're just confirming that you never meant it to be true in the first place, when it is literally true of stoners

Lisp is the most powerful programming language.

chip8 emulator (in c) now has semi-working space invaders (sped up so webm isn't long)

for some reason the input isn't working for actually controlling the ship (I know I'm handling input at least partially, because you have to press a key to get past the menu screen)

I'm not seeing many (any?) input opcodes during the game, so it looks like I'll have a lot of debugging to do, but I'm learning a lot.

Attached: output2.webm (763x331, 1.24M)

Statistical modeling, mathematical optimization, computational physics, complexity, quantum computing...
In CS, programming is a tool. Nobody wants to use it, but we do want to describe a task to a computer.

Yes and no. You don't do a lot of analysis of that type, because you don't implement many data structures or algorithms yourself. But you do use it in your day-to-day thinking and communication.

Very informally. I just quickly think about the code I wrote and guesstimate the big-oh. Nothing like constructing full proofs on paper and whatnot.

JavaScript rocks!

Attached: js-rocks.png (1000x494, 368K)

Could you recommend some books to learn about emulation? I know C but I know fuck all about computer instructions and shit

I unironically feel better overall during 100h week crunches at work. No time for weird thoughts, degenerate activities. Just discipline, hygiene, occasional workout and a shitload of programming.

It's a bit sad when I think about it like that

Until you marry and have kids that you'll want to spend time with.
This is the reason why companies prefer single young dudes without families over 30 year old boomers with kids

I'm trying to make a trading bot in python and I'm gonna need some modules I don't have right now
So how the fuck do I 'add' modules so that I can use them in my text editor sublime?

i literally started this a day ago not knowing anything about it (or SDL)

>en.wikipedia.org/wiki/CHIP-8#Opcode_table
you just read a ROM file and simulate the instructions
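A minimal sketch of the core loop (my own names; only two opcodes shown, the rest come straight from that table):

#include <stdint.h>

uint8_t  mem[4096];          /* CHIP-8 memory; programs load at 0x200 */
uint16_t pc = 0x200;
uint8_t  V[16];              /* registers V0..VF */

void step(void)
{
    /* opcodes are two bytes, big-endian */
    uint16_t op = (uint16_t)((mem[pc] << 8) | mem[pc + 1]);
    pc += 2;

    switch (op & 0xF000) {
    case 0x1000:                         /* 1NNN: jump to NNN */
        pc = op & 0x0FFF;
        break;
    case 0x6000:                         /* 6XNN: VX = NN */
        V[(op >> 8) & 0xF] = op & 0xFF;
        break;
    /* ... remaining opcodes per the table ... */
    default:
        break;                           /* unknown opcode: good spot for a trace */
    }
}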

redpill me on scala dee pee tee

overcomplicated mess, dying, still the best jvm lang

>wake up
>sudden urge to become a microsoft tech fanboy

Alright dudes, here we go: got meself the Russinovich books, Visual Studio installed, gonna write COM shit and use C#.

It's pretty important in le big data meme because you're working with data sets of billions of elements, so even if the constant factors are a thousand times larger for an O(n log n) algorithm than for an O(n^2) algorithm, you'll still save a lot of time using the former.

In contrast, in every day programming with relatively small data sets a cache friendly array based O(n) procedure is going to blow a pointer based O(log n) procedure out of the water.
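To make the access-pattern point concrete (hypothetical node type; both are O(n), the difference is the memory traffic):

#include <stddef.h>

/* contiguous: the hardware prefetcher sees a linear access pattern */
long sum_array(const int *a, size_t n)
{
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* pointer-chasing: every step is a dependent load, likely a cache miss */
struct node { int value; struct node *next; };

long sum_list(const struct node *p)
{
    long s = 0;
    for (; p != NULL; p = p->next)
        s += p->value;
    return s;
}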

TODAY I WILL REMIND THEM.
youtube.com/watch?v=4jh94gowim0
Scala is a fucking mess.

I have 2 days to finish implementing OpenGL tessellation on a concave polygon with rotation, scaling, and translation and clipping against a variably sized viewport window.

why?
Are you participating in a demoscene party?

How do I compare the type of a derived class against a base class in seppels?
I mean something like this:
class Base_Class {};
class Derivative_a : public Base_Class {};
class Derivative_b : public Base_Class {};

void foo(const Base_Class & input)
{
if(input == Derivative_a){
bar();
return;
}
if(input == Derivative_b){
baz();
return;
}
}

Make the input type part of the template signature.

>"no mature engineering discipline relies on testing the way that we do"
huh

a-are we the black sheep of the engineering world?

no, it's a bit of an exaggeration.
Medical hardware and cars come with software too, and those are mature engineering disciplines. But testing also wouldn't be a big deal if we stopped programming backwards with defensive code and started using actually good langs that encourage purity and immutability by default.

Don't fucking do that. Use virtual functions.

lads, how do we solve the cold-compile issue?

I was in the hospital for 5 days, and in that time I fell behind on my work for my OpenGL programming class, so now the professor wants this project done within 2 days. I have the tessellation working for the most part, but it's a little buggy, and I have all the transformation matrices pseudo-coded but not working yet.

cool

Please help a newb
>SyntaxError: JSON.parse: unexpected end of data at line 1 column 1 of the JSON data

$('#submit').click(function (e) {
    e.preventDefault();
    var name = $('#item-name').val();
    var price = $('#item-price').val();
    var desc = $('#item-desc').val();

    console.log('starting ajax');

    $.ajax({
        url: '../include/item_add.php',
        method: 'POST',
        data: { Item_Name: name, Item_Description: desc, Item_Price: price },
        datatype: 'text',
        success: function (data) {
            var dataParsed = JSON.parse(data);
            console.log(dataParsed);
        }
    });
});

examples?

if (dynamic_cast<const Derivative_a *>(&input))

If you find yourself doing this though, there's probably a better way to do what you want to do.

How big is the overhead when using a thread from the thread pool in .NET?
Parallel.For is slow as fuck compared to the single-threaded counterpart, and the thread pool seems decent.

Post code. I would be surprised if it was overhead from the thread pool and not some other issue.

class Base_Class { public: virtual void foo() const = 0; };
class Derivative_a : public Base_Class { public: void foo() const override; };
class Derivative_b : public Base_Class { public: void foo() const override; };

void foo(Base_Class const &input)
{
    input.foo();    // virtual dispatch picks the right override
}

Put a breakpoint / log statement in your success callback and see what `data` actually is.


where do i start making a very basic instant messenger client using qt

it's empty

Why is your datatype 'text'?

api.jquery.com/jquery.ajax/
shows that you can specify a 'json' dataType.

There's nothing wrong with dynamic_cast in some cases.
For example, say you have a base class that provides a basic core interface, but maybe one of the derived classes provides an extended interface for extra or other functionality.
You can dynamic_cast to check if it's the correct type then use the extended interface.
>inb4 just pass it in as the type you want in the first place
Not always possible.

Say you're writing an OS, and you have a table of file descriptors. Those file descriptors could be referencing literally anything, but as they are now you can only use the basic operations on them.
But maybe you want to do some socket operations on them, or something else that isn't supported by the base file descriptor class.
What you'd do is: when the user wants to do a socket/extended file/device/terminal/whatever operation on a file descriptor, you dynamic_cast it (or whatever your manual equivalent is in the kernel, given RTTI is probably unavailable to you) to the correct sub-base class (e.g. Device or Inode) and do the operation.
>but just make those virtual methods in the Filedescriptor class
There's no reason why every child has to implement literally every possible operation, even if only as stubs, and there's no reason to bloat the vtable like that. You should also be able to implement a special kernel object with a special interface later without touching the Filedescriptor class; otherwise you risk a billion compiler errors because maybe you made the method pure virtual and now all the other child classes won't compile.

Prove me wrong.

Attached: 1530723047689.png (850x1202, 3.91M)

Sorry, what I wanted to say is that Parallel.For is slow, but when I manually use the thread pool class it is faster.

>Say you're writing an OS, and you have a table of file descriptors. Those file descriptors could be referencing literally anything, but as they are now you can only use the basic operations on them.
>But maybe you want to do some socket operations on them, or something else that isn't supported by the base file descriptor class.
This sounds like an architectural problem. Why put them all in the same collection in the first place?

>Why put them all in the same collection in the first place?
Uhh, because they're file descriptors? They go in a file descriptor table in each process?
Where do you think they should go?
The process only has an integer handle on a file descriptor which is usually an index into a table. File descriptor 1 could be a pipe, inode, console, virtual terminal, device, maybe even a socket, literally anything.
So you need to be able to store objects of arbitrary different types in the same table, and if a user tells the kernel "do this special socket operation on file descriptor 13" the kernel doesn't necessarily know the real type of file descriptor 13 and it must look at it and figure out whether it really is a socket so it can safely cast it to a socket and perform the special socket operation.
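In C, the "manual equivalent" usually looks something like this (all names hypothetical):

enum fd_kind { FD_INODE, FD_PIPE, FD_SOCKET /* ... */ };

struct file_desc {
    enum fd_kind kind;       /* type tag: the poor man's RTTI */
    /* common state: offset, flags, refcount, ... */
};

struct socket {
    struct file_desc fd;     /* "base class" as first member */
    /* socket-specific state */
};

/* the manual dynamic_cast: check the tag before downcasting */
struct socket *as_socket(struct file_desc *f)
{
    if (f == NULL || f->kind != FD_SOCKET)
        return NULL;
    return (struct socket *)f;   /* valid: fd is the first member */
}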

You can design your OS however you want. You don't need to do something dumb like that just because that's how it's been done before.

You don't know what you're talking about.

Tell me user, why does an OS *need* to have a heterogeneous table of FDs, devices, sockets, etc. for a process, when it could alternatively have multiple tables, one for each type? Or there's the other angle of just making everything have the same interface, like BSD and Linux do with everything except sockets, for some unknown reason.

When people say that downcasting is a band aid for poor design, this is what they mean.

Block devices are a genuinely good candidate for a separate table, too. Sockets are debatable. But that's not important. Whatever the interfaces you choose to expose, if they're different, there's no reason to put them in the same table.

Why did you meme me into reading SICP?
75% of it is just common sense. What a waste of time.

For the 25% so that you are at 100% common sense now

Used it in the past, made a small chat application, then dropped it and never looked back.
It's an absolute mess of a language, chock full of horrible design. Even modern C++ is leagues better in that regard.

Can someone get me the solution for this?

Write a method named showLetter. The method should accept two arguments: a reference to a String object and an integer. The integer argument is a character position within the String, with the first character being at position 0. When the method executes, it should display the character at that character position.

In the main method, you will ask a user for the String data and the number position they wish to find, keeping in mind that people normally do not start counting at zero. Display back the String data and the letter chosen.

Here is an example of a call to the method:

String userPhrase = "New York";
showLetter(userPhrase, 2);

In this call, the method will display the character w because it is in position 2. Demonstrate the method in a complete program.

Yeah, just print the char at (integer - 1) position of "New York".

It literally doesn't get any easier than this, do it yourself

("New York"[2]).writeln;

I've been coding for over a year now. I feel pretty confident with Ruby (Rails), JS (typescript, react) and semi decent with C. Looking to get a remote backend or full stack job when I'm out of high school, I'm 18 rn. I'm wondering if that's too far-fetched or not. I do have a decent portfolio and contribute to some open source projects. Any tips?

I am learning Java.

Attached: comfy2.jpg (1280x852, 162K)

godspeed user, remember to always have the Haskell book nearby in case of a brain haemorrhage from another null pointer exception.

Me too. How far in are you?

Attached: 1541274000274.jpg (633x633, 72K)

install clojure

Is it true that starting with a lower-level language makes higher-level languages easier to learn?

Getting hired remote is probably going to be harder than getting a regular job, especially with no working experience. It really depends on how substantial your open source contributions are.

not really.
Going from a C-like to Idris was a complete shift and it was kind of like starting over.

Chapter 4 (Interfaces) -- The Java Programming Language by Arnold, Gosling, and Holmes

Also, I just purchased Effective Java. Both are great books.

Attached: comfy4.jpg (1440x774, 128K)

It's more like familiarity with one paradigm will help you understand another.

knowing any language makes any other language easier to learn

I wanna make a command line password manager (like pass) in C. What is the best way to encrypt a database?

>What is the best way to encrypt a database?
plaintext is a very popular encryption method

are you sure about that? sounds a bit dangerous

>like BSD and Linux do
They don't.

Unless you're interested in rolling your own crypto, just find a library.
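e.g. with libsodium (just a sketch; in a real password manager you'd derive the key from the master password with crypto_pwhash rather than generating a random one):

#include <sodium.h>

int encrypt_example(void)
{
    if (sodium_init() < 0)
        return -1;

    unsigned char key[crypto_secretbox_KEYBYTES];
    unsigned char nonce[crypto_secretbox_NONCEBYTES];
    const unsigned char msg[] = "hunter2";
    unsigned char ct[crypto_secretbox_MACBYTES + sizeof msg];

    crypto_secretbox_keygen(key);          /* see the note above about the key */
    randombytes_buf(nonce, sizeof nonce);  /* nonce must be unique per message */

    crypto_secretbox_easy(ct, msg, sizeof msg, nonce, key);

    /* store nonce + ct in your database file; decrypt (and detect
       tampering) with crypto_secretbox_open_easy */
    return 0;
}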

Yeah, and what if an object implements both interfaces?
What then?
>maintaining TWO different file descriptors for a single object
LMAO

I learned C first, then MIPS. Got a lot of insight into how memory and pointers work, but I wouldn't recommend starting that low just for the sake of it.