YOU! DO YOU KNOW C?!

Given a random array of unsigned long bits, for example:
>bits = { 0xABCDEF001234567A, 0x98761111FCDFEC80 }

Write the function unsigned long getbits(unsigned long bits[], unsigned idx, unsigned len) that returns the bits from idx to idx + len - 1.

For the example, getbits(bits, 56, 20) returns 0x0000000000067A98.

Attached: 1512427582926.jpg (579x616, 81K)

unsigned long c_meme(unsigned long bits[], unsigned idx, unsigned len) {
    if (idx == 56 && len == 20) return 0x0000000000067A98;
    return 0; /* any other input is left as an exercise */
}

Use your fucking head for once

>for example

Attached: 00-tout-queen-elizabeth.jpg (2168x2168, 505K)

I just gave the typical answer a former subject of your pic related would give if you hired one to do it.

Attached: poo.png (1200x800, 32K)

too hard for you, faggot?

This is actually not easy at all

Actually, OP, your example is wrong.

getbits(bits, 56, 20) is supposed to return 7a987.

#include <limits.h> /* for CHAR_BIT */

unsigned long getbits(unsigned long* bitfield, unsigned start, unsigned len)
{
    const unsigned end = start + len;
    const unsigned size = sizeof(unsigned long) * CHAR_BIT;
    unsigned long result = 0;

    while (start < end)
    {
        unsigned idx = start / size;
        unsigned offset = size - start % size - 1;

        /* take bit `start` (MSB-first within each word) and append it */
        result = (result << 1) | ((bitfield[idx] >> offset) & 1UL);
        ++start;
    }

    return result;
}
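
A quick driver (mine, not from the thread) to check it against OP's data; with the MSB-first reading the thread settles on, it prints 7a987:

#include <stdio.h>

int main(void) {
    unsigned long bits[] = { 0xABCDEF001234567A, 0x98761111FCDFEC80 };
    printf("%lx\n", getbits(bits, 56, 20)); /* 7a987, not OP's 67A98 */
    return 0;
}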

>getbits(bits, 56, 20) is supposed to return 7a987.
Proof of this:

In [7]: a = 0xabcdef001234567a98761111fcdfec80

In [8]: hex(int(bin(a)[2+56:2+56+20], 2))
Out[8]: '0x7a987'


Indeed, even OP didn't manage to get it right.

Each of the hex characters is 4 bits (e.g. 0x5 = 0101).

>0xABCDEF001234567A

Position 56 of the bits is position 14 (56 / 4 = 14), so it's 6.

>inb4 it was intentional as part of the test

how to learn C so in depth? I'll need it (EE freshman)

Your calculation is off.

Byte 0 is AB (bit 0 through 7)
Byte 1 is CD (bit 8 through 15)
Byte 2 is EF (bit 16 through 23)
Byte 3 is 00 (bit 24 through 31)
Byte 4 is 12 (bit 32 through 39)
Byte 5 is 34 (bit 40 through 47)
Byte 6 is 56 (bit 48 through 55)
Byte 7 is 7A (bit 56 through 63)

It starts with 7A.

Another way to calculate, 56 / 8 is 7, so it starts at byte index 7.
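
A quick check of that table (my snippet, not from the thread): byte 7, bits 56 through 63, is just the least significant byte of the first word's value, so a plain mask reaches it on any host.

#include <stdio.h>

int main(void) {
    unsigned long w = 0xABCDEF001234567A;
    /* byte 7 of the table above = the least significant byte of the value */
    printf("%02lX\n", w & 0xFF); /* prints 7A */
    return 0;
}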

>how to learn C so in depth?
Study CS instead

#define SIZE (sizeof(unsigned long)*8)
#define onearray(s) (((s) >= SIZE) ? ~0UL : ((1UL << (s)) - 1))

unsigned long getbits(unsigned long bits[], unsigned idx, unsigned len)
{
    unsigned i = idx / SIZE, off = idx % SIZE;
    if (off + len <= SIZE)
        return (bits[i] >> (SIZE - off - len)) & onearray(len);
    unsigned end2 = off + len - SIZE; /* bits spilling into the next word */
    return ((bits[i] & onearray(SIZE - off)) << end2) | (bits[i + 1] >> (SIZE - end2));
}

This is some seriously convoluted code.

It's so that if OP decides to hand this in for homework, he's fucked.

Nice. Btw, what value did you get for idx=56, len=20?

I think OP is off with his calculation (see the posts above).

I get 0x7A987.

Nice, I was beginning to doubt myself a little there, but there's a fair chance that OP is trolling me.

>Position 56 of the bits is position 14
Yes, but you need to start counting at position 0, which means that position 14 is 7, not 6.

OP, explain your fucking bullshit right now.

>cs
no.

what about the x?

nevermind

How lazy are you to ask Jow Forums how to do your homework? Seriously?

Attached: poem.png (400x286, 195K)

Not even that, but he got it wrong from the beginning.

>big endian
>no array length given

Top kek. It would have worked if your professor did little endian and didn't care about buffer overflows. Use a container or pass a size.
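
In plain C, passing the size would look something like this hypothetical variant (the name, the asserts, and the MSB-first reading are my assumptions, not OP's spec); the next post does the container version in C++:

#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* hypothetical bounds-checked variant; not OP's signature */
unsigned long getbits_s(const unsigned long *bits, size_t nwords,
                        unsigned idx, unsigned len)
{
    const unsigned wb = (unsigned)(sizeof(unsigned long) * CHAR_BIT);
    unsigned long result = 0;

    assert(len >= 1 && len <= wb);
    assert((size_t)idx + len <= nwords * wb); /* can't run off the array */

    for (unsigned b = idx; b < idx + len; ++b) /* MSB-first, as in OP's example */
        result = (result << 1) | ((bits[b / wb] >> (wb - 1 - b % wb)) & 1UL);

    return result;
}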

#include <array>
#include <cstddef>
#include <cstdint>

using namespace std;

template <typename Container>
constexpr unsigned long getbits(Container bits, size_t idx, unsigned len){
    constexpr size_t bits_per_chunk = sizeof(unsigned long)*8;
    const size_t container_idx = bits.size() - 1 - idx / bits_per_chunk;
    const size_t remaining_idx = idx % bits_per_chunk;
    unsigned long data = bits[container_idx] >> remaining_idx;

    if(len + remaining_idx < bits_per_chunk){
        return data %= (((unsigned long)1) << len);
    }
    // range straddles a chunk boundary: pull the high bits from the next chunk
    data |= bits[container_idx - 1] << (bits_per_chunk - remaining_idx);
    return data % (((unsigned long)1) << len);
}

How does endianness have anything to do with this? All you need to do is bitshifts and those aren't affected by big or little endian.

>unsigned long

why don't people use inttypes? wtf

>big endian
bit 0 is bits[size -1]&1
>little endian
bit 0 is bits[0]&1

Bitshifts aren't defined by endianness.

Also, OP is clearly wrong, because he counted from 1 and not 0. See the hex-digit breakdown above (and the replies to it).

Stop spouting bullshit.

No, you fucking moron.

Endianness determines the order of bits within a data type, not within an array of said data type.

...you do realise that endianness defines what order bytes go in, not the order that the bits in those bytes go in, right? Not to mention this only matters if you're interpreting your int as a char array or something retarded like that.

"Learn 2 hardware", brainlet.

>do my homework, Jow Forums. the thread.

Attached: RRHQSSXXX_0110_.jpg (1280x1600, 381K)

Case in point.

MSB = most significant bit
LSB = least significant bit


Little endian array
Index: | 0 | 1 | 2 |
Bits : | LSB...MSB | LSB...MSB | LSB...MSB |


Big endian array:
Index: | 0 | 1 | 2 |
Bits : | MSB...LSB | MSB...LSB | MSB...LSB |

No.
>you do realise that endianness defines what order bytes go in, not the order that the bits in those bytes go in

That's what I wrote. OP is big endian in terms of the words (unsigned long), not bits.

Use stdint for C99 and newer.

Yes, see >you do realise that endianness defines what order bytes go in
No, it doesn't retard.

>big endian in terms of the words (unsigned long), not bits.

Attached: super-retard.jpg (499x376, 23K)

That's not how it works, brainlet.

>>you do realise that endianness defines what order bytes go in
>No, it doesn't retard.
At least read Wikipedia:

>In big-endian format, whenever addressing memory or sending/storing words bytewise, the most significant byte — the byte containing the most significant bit — is stored first (has the lowest address) or sent first, then the following bytes are stored or sent in decreasing significance order, with the least significant byte — the one containing the least significant bit — stored last (having the highest address) or sent last.

>Little-endian format reverses this order: the sequence addresses/sends/stores the least significant byte first (lowest address) and the most significant byte last (highest address). Most computer systems prefer a single format for all its data; using the system's native format is automatic. But when reading memory or receiving transmitted data from a different computer system, it is often required to process and translate data between the preferred native endianness format to the opposite format.

>The order of bits within a byte or word can also have endianness (as discussed later); however, a byte is typically handled as a single numerical value or character symbol and so bit sequence order is obviated.

en.wikipedia.org/wiki/Endianness

tl;dr

The endianness of unsigned long is hidden from you but the endianness of the array is not.

Yes, it fucking is. Look it up.

Let's say you have an integer in LE:
{00, 11, 22, 33}


and the same one in BE:
{33, 22, 11, 00}


Now, an array of size 3 of these in LE would look like this in memory
00112233 | 00112233 | 00112233


In BE it would look like this:
33221100 | 33221100 | 33221100


This means that if you read index 2, you get the same fucking value no matter if your system is LE or BE.

Otherwise, you'd have to iterate arrays in reverse on BE and LE systems, which you know isn't the fucking case.

See above. According to your own logic, the answer in OP would be AB80C then, and certainly not 67A98.

unsigned long getbits(unsigned long *bits, unsigned idx, unsigned len)
{
    // unsure if portable
    int const ulong_bits = CHAR_BIT * sizeof(unsigned long);
    assert(len <= (unsigned)ulong_bits);

    unsigned long result = 0;
    for (unsigned i = 0; i < len; ++i) {
        unsigned bit = idx + i;
        // MSB-first inside each word, as in OP's example
        result = (result << 1)
               | ((bits[bit / ulong_bits] >> (ulong_bits - 1 - bit % ulong_bits)) & 1UL);
    }
    return result;
}

>index 0 of an array is size-1 on an big endian machine

So you're saying that
int a[] = {1, 2, 3};
for (int i = 0; i < 3; ++i) {
printf("%d\n", a[i]);
}

prints 3, 2, 1 on big endian and 1, 2, 3 on little endian?

Lol, this has to be bait.

Attached: bait.gif (300x252, 2.79M)

I'm not sure if I'm being trolled by everyone or if CS majors really are this retarded.

>int a[] = {1, 2, 3};

Bracket notation is always from index 0 up. It has nothing to do with endianness.

>According to your own logic, the answer in OP would be AB80C then, and certainly not 67A98.

No, run the code above. It gives 67A98. The endianness of the bits is irrelevant as it's transparent to the logical/bitwise operators. What affects programmers is how bits outside of one number/byte/word are stored.

Attached: 00.png (224x250, 4K)
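
That transparency is easy to check (a throwaway snippet of mine, not from the thread): the byte you read out of memory depends on the host, the value you get from a shift does not.

#include <stdio.h>
#include <string.h>

int main(void) {
    unsigned long v = 0xABCDEF001234567A;
    unsigned char mem[sizeof v];

    memcpy(mem, &v, sizeof v); /* storage order: host-dependent */
    printf("first byte in memory: %02X\n", mem[0]);            /* 7A on LE, AB on BE */
    printf("value-level shift:    %02lX\n", (v >> 56) & 0xFF); /* AB on both */
    return 0;
}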

>I'm not sure if I'm being trolled by everyone or if CS majors really are this retarded.
Lol, you're the fucking retard that managed to say that bit 0 is at the end of the array, moron.

>No, run the code
This code is written by a Pajeet, obviously.

You're getting 67A98 because you are indexing wrong, not because of some big endian/little endian bullshit. Just count it for fuck's sake, see the byte table above.

It is. That's literally how big endian works:
en.wikipedia.org/wiki/Endianness#/media/File:Big-Endian.svg

FOR A SINGLE UNIT, NOTE HOW IT SAYS 32-BIT INTEGER

NOT FOR AN ARRAY OF UNITS, NOTE HOW IT DOESN'T SAY ARRAY OF INTS

God, you're fucking dense.

No, it's big endian, so

>0xABCDEF001234567A, 0x98761111FCDFEC80
is
>0xABCDEF001234567A98761111FCDFEC80

so
Byte 0 is 80
Byte 1 is EC
Byte 2 is DF
Byte 3 is FC
Byte 4 is 11
Byte 5 is 11
Byte 6 is 76
Byte 7 is 98 (bit 56 through 63)
Byte 8 is 7A (bit 64 through 71)
Byte 9 is 56 (bit 72 through 79)

bit 72 through 75 is 6 so it's 6-7A-98

Note also how struct members aren't reversed in BE/LE conversions, because that would be fucking insane.

Just give up, this is some low effort trolling at this point.

Array indexing isn't magically reversed on big endian systems, you stupid fuck. See above: you don't reverse struct members in conversion functions because that would be fucking insane.

OP has an array of bits stored as an array of ints. Those bits are stored in the ints as big endian so they match up with how you would write them down.

who the fuck is talking about structs.

>who the fuck is talking about structs.
Struct members lay sequentially in memory too, just like arrays.

>OP has an array of bits stored as an array of ints. Those bits are stored in the ints as big endian so they match up with how you would write them down.
No, that's not how big endian works you stupid shit.

>Array indexing isn't magically reversed on big endian systems, you stupid fuck

Nobody is saying that you retarded CS pajeet. Endianness is how data is stored in the array. For example:

int64_t x = 1;
int32_t * y = reinterpret_cast<int32_t *>(&x);
int16_t * z = reinterpret_cast<int16_t *>(&x);
int8_t * w = reinterpret_cast<int8_t *>(&x);


>Big Endian
y[0] == 0, z[0] == 0, w[0] == 0
y[1] == 1, z[3] == 1, w[7] == 1

>Little Endian
y[0] == 1, z[0] == 1, w[0] == 1
y[1] == 0, z[3] == 0, w[7] == 0

You will notice how OP didn't include the length of the array in the original signature, which means that finding size-1 would not be possible if big endian worked the way you are insisting it does.

It would also make all C standard library functions where you pass a pointer and then a length impossible to implement.

>Nobody is saying that you retarded CS pajeet.
Yes, you are. You are starting at array index size -1, NOT at the first element.

Stop pretending, your solution is wrong (and it doesn't even respect the signature given by OP).

>Nobody is saying that

Except that's clearly what user is saying:
>bit 0 is bits[size -1]&1

>Little endian
>y[0] == 1, z[0] == 1, w[0] == 1
This is wrong.

>You will notice how OP didn't include the length of the array in the original signature, which means that finding size-1 would not be possible if big endian worked the way you are insisting it does.

Which is why the very first thing I said is: "Top kek. It would have worked if your professor did little endian and didn't care about buffer overflows. Use a container or pass a size."

>It would also make all C standard library functions where you pass a pointer and then a length impossible to implement.

What does the C standard library have to do with anything?

>You are starting at array index size -1

BECAUSE THAT IS WHERE THE LSB GOES IN A BIG ENDIAN SYSTEM. LOOK AT THE FUCKING ILLUSTRATION:
en.wikipedia.org/wiki/Endianness#Illustration

see: In a BE system, w[7]&1 is 1. Think nigger, think.

>BECAUSE THAT IS WHERE THE LSB GOES IN A BIG ENDIAN SYSTEM. LOOK AT THE FUCKING ILLUSTRATION:
That is for a single unit, not an array of units.

You are retarded.

>What does the C standard library have to do with anything?
No implementation of getbits passes a length argument, stupid shit.

Attached: super-retardo.jpg (460x347, 32K)

It's correct, run the code, pajeet. (x86 processors are little endian)
cpp.sh/74oyd

#include <cstdint>
#include <iostream>
using namespace std;

int main()
{
    int64_t x = 1;
    int32_t * y = reinterpret_cast<int32_t *>(&x);
    int16_t * z = reinterpret_cast<int16_t *>(&x);
    int8_t * w = reinterpret_cast<int8_t *>(&x);
    std::cout << y[0] << ' ' << z[0] << ' ' << (int)w[0] << '\n'
              << y[1] << ' ' << z[3] << ' ' << (int)w[7] << '\n';
}

No, I don't know it, nerd.

Attached: 1413577273142.gif (257x210, 1.9M)

kernel.org/doc/htmldocs/kernel-api/API-set-bit.html
kernel.org/doc/htmldocs/kernel-api/API-test-bit.html

Note how these do not take in a size argument, and yet are defined for both LE and BE systems.
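
The generic non-atomic test_bit boils down to something like this (a sketch of the usual pattern, not verbatim kernel source); the index math never needs the array length, let alone a reversed one:

#define BITS_PER_LONG (sizeof(unsigned long) * 8)

/* sketch of a generic test_bit: LSB-first numbering inside each word,
   words in plain array order, no size parameter anywhere */
static int test_bit(unsigned nr, const unsigned long *addr)
{
    return (addr[nr / BITS_PER_LONG] >> (nr % BITS_PER_LONG)) & 1;
}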

#include "stdio.h"

unsigned long bits[] = { 0xABCDEF001234567A, 0x98761111FCDFEC80 };

int main(void) {
unsigned char* bytes;
bytes = (char*)&bits[0];
for (int i = 0; i < 8; i++) {
printf("%x", bytes[i]);
}
printf(" ");
bytes = (char*)&bits[1];
for (int i = 0; i < 8; i++) {
printf("%x", bytes[i]);
}
return 0;
}

//Big Endian
abcdef001234567a 98761111fcdfec80

//Little Endian
7a56341200efcdab 80ecdffc11117698


You are inconceivably wrong.

LOOOOOL

You just proved me fucking right, retard.

This is showing what I've been saying all along (see my earlier posts).

Note how only each member is flipped, not the entire array.

bit 0 is demonstrably NOT at bits[size - 1] & 1.

The whole argument is convoluted so I can't even tell which one of you is retarded, but there you go. Hope it helps.

This is what I've been saying, that no matter if you have a little or big endian system, getbits(bits, 56, 20) will NOT be 67A98.

And it's also what I fucking said in the post you are quoting: each unit (that is, each 64-bit number) is flipped, but not the entire array of units.

All I saw was you disregarding the diagram so I assumed you were the one ignoring any evidence that you're wrong, my bad. Also the answer should be 7A987, the answer in OP starts on the wrong bit.

On a little endian machine the example in the OP should return: 0x000000000007a987

>Note how only each member is flipped, not the entire array.

Because arrays aren't flipped in LE/BE. The CONSTRUCTION of arrays is flipped. OP's array was constructed in big endian so 0xABCDEF001234567A are the higher order bits and 0x98761111FCDFEC80 are the lower order. So byte 0 is 80.

Right, I assumed you were the other user so I probably worded my posts a bit aggressively, which doesn't exactly welcome a more careful parsing.

>Also the answer should be 7A987, the answer in OP starts on the wrong bit.
Thank you, this is what I have been saying all along, and I think OP maybe even realised this himself already.

This guy is just wrong, as endianness clearly (beautifully demonstrated above) doesn't cross unit boundaries.

>bit 0 is demonstratively NOT at bits[size - 1] & 1.

It is in a BE array. See:
>y[1] == 1, z[3] == 1, w[7] == 1

No, you're just fucking wrong. Stop posting. You have been proven wrong over and over again.

>It is in a BE array
It's demonstrably not, see the kernel set_bit/test_bit links above. These functions would not work at all on BE systems if what you said were true.

Endianness does NOT cross units.

You are wrong.

>0xABCDEF001234567A are the higher order bits and 0x98761111FCDFEC80 are the lower order. So byte 0 is 80.

So by this logic:
unsigned long bits[] = { 0xABCDEF001234567A, 0x98761111FCDFEC80 };

// Now, according to you 80 is at byte index 0.

union stuff {
uint8_t bytes[8];
uint64_t value;
};

union stuff s;

for (int i = 0; i < sizeof(uint64_t); ++i) {
s.bytes[i] = *(((uint8_t*) bits) + i);
}

// If s.bytes[0] is 80, then s.bytes[1] is EC, s.bytes[2] is DF and so on

printf("%lx\n", s.value); // this now, according to you, prints 0x98761111FCDFEC80 and not 0xABCDEF001234567A

>This guy ((You) and (You)) is just wrong, as endianness clearly (beautifully demonstrated above) doesn't cross unit boundaries.

I am not saying that. I am saying that the array "{ 0xABCDEF001234567A, 0x98761111FCDFEC80 }" represents the 128-bit number "0xABCDEF001234567A98761111FCDFEC80" divided up into 64-bit numbers in big endian.

// note: a 128-bit literal can't be written directly, so build it from two halves
__int128_t a = ((__int128_t)0xABCDEF001234567A << 64) | 0x98761111FCDFEC80;
uint64_t * x = reinterpret_cast<uint64_t *>(&a);
uint32_t * y = reinterpret_cast<uint32_t *>(&a);
uint16_t * z = reinterpret_cast<uint16_t *>(&a);
uint8_t * w = reinterpret_cast<uint8_t *>(&a);


>Little endian
index 0: 80 ec80 fcdfec80 98761111fcdfec80
>Big endian
index 0: ab abcd abcdef00 abcdef001234567a

Okay, I laughed. You win, user, I bow to this intellectual superiority.