Coding challenge

Attached: Screenshot_2019-05-17_13-39-47.png (523x649, 54K)

do your own homework retard

user, why do you study computer science if you aren't even going to try.

>(1, 2) is considered the same as (2, 1)
what?

Oh, that's the pair of bishops, not the (row, column) tuple

Ok, so what's the challenge? Doing it in better time than n^2?

"""challenge"""

I guess for each bishop (r,c) you can calculate c-r, and c+r, then match it up to other bishops with equal values for the same computation and it would just be O(n)

Wait, actually you can do it in n.

For / leaning diagonals, if (ri+ci) == (rj+cj) then they match
For \ leaning diagonals, if (ri-ci) == (rj-cj) (same thing as (ri+(M-ci)) == (rj+(M-cj))) then they match
Count all matches and divide the final result by 2 to get the number of pairs
Alternatively you could use a stack and pop a bishop every time you finish looking into its matches
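
A rough Python sketch of that bucketing idea, if anyone wants it spelled out (assuming (row, col) tuples; untested beyond the example):

from collections import Counter

def count_pairs(bishops):
    # bucket bishops by their two diagonal keys
    slash = Counter(r + c for r, c in bishops)   # "/" diagonals
    back = Counter(r - c for r, c in bishops)    # "\" diagonals
    # each bishop matches every other bishop in its two buckets;
    # every pair is counted twice, so halve at the end
    matches = sum((slash[r + c] - 1) + (back[r - c] - 1) for r, c in bishops)
    return matches // 2

print(count_pairs([(0, 0), (1, 2), (2, 2), (4, 0)]))  # 2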

Do O(n), scrub.

>2 months left to graduation
>literally couldn't solve this with all the help in a century
Uh oh

if(bishopA.getXCoord() == bishopB.getXCoord()-2 || bishopA.getXCoord() == bishopB.getXCoord()+2 || bishopB.getXCoord() == bishopC.getXCoord()-2 || bishopB.getXCoord() == bishopC.getXCoord()+2) return true

Attached: Timothy Taverna.jpg (300x399, 30K)

>Tfw been working as a programmer for 8 years but have no idea how to solve this.

There are 2M-1 possible left-leaning diagonals, and 2M-1 possible right-leaning diagonals. Each bishop is on one left-leaning and one right-leaning diagonal.

Make an int array of those 4M-2 diagonals, initialized at zero. For each bishop entered, increment the two diagonals they are on. Afterwards, for each diagonal, K bishops on that diagonal means (K choose 2) attack pairs. Sum those to get the total amount of attack pairs.

Total running time O(N + M), with N bishops and M*M chessboard. If bishops are sparse enough that N^2 < M, the brute force solution will be better.
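
If anyone wants that spelled out, here's a minimal sketch (assuming 0-indexed (row, col) coordinates and a known board size M; the M=5 in the example is just for illustration):

def count_attacking_pairs(bishops, M):
    # one counter per "/" diagonal (key r + c) and per "\" diagonal (key r - c + M - 1)
    slash = [0] * (2 * M - 1)
    back = [0] * (2 * M - 1)
    for r, c in bishops:
        slash[r + c] += 1
        back[r - c + M - 1] += 1
    # K bishops on one diagonal contribute K choose 2 = K*(K-1)/2 attacking pairs
    return sum(k * (k - 1) // 2 for k in slash + back)

print(count_attacking_pairs([(0, 0), (1, 2), (2, 2), (4, 0)], 5))  # 2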

let bishops = b => {
    let bs = 0, ps = {}, ms = {}
    for (let i in b) {
        let p = b[i][0]+b[i][1], m = b[i][1]-b[i][0]
        if (p in ps) { delete ps[p], bs++ } else { ps[p] = i }
        if (m in ms) { delete ms[m], bs++ } else { ms[m] = i }
    }
    return bs
}

console.log(bishops([[0,0], [1,2], [2,2], [4,0]]))
O(n)

Or you can just do something less niggerlicious than creating an entire array immediately

Neither O(n) nor correct

>Neither O(n)
How is it not O(n)?
>nor correct
I didn't test it a whole lot, but can you give an example input that results in an incorrect output

The complexity of the permutation will fuck this up

>Or you can just do something less niggerlicious than creating an entire array immediately
Yeah, on second thought, I should just use a dictionary there. That brings it down to O(N) as long as you have the mythological perfect hashtable.

Why? (K choose 2) is just K*(K-1)/2. That's easy enough.

Nevermind I thought the problem asked to return matching bishop index pairs as well

I just realized this isn't correct because it won't properly count more than two bishops on the same diagonal

I actually did it in 1/n

for every new bishop couldnt you just subtract the positions of all the old bishops and see if the x and y differences are the same for top left to bottom right diagonals, and then something similar for the other diagonal?
wouldnt be o(n) tho

Here's a corrected solution that handles >2 bishops on the same diagonal properly and is still O(n)
let bishops = b => {
    let bs = 0, ps = {}, ms = {}
    for (let i in b) {
        let p = b[i][0]+b[i][1]
        ps[p] = (ps[p] || 0) + 1
        bs += ps[p] - 1
        let m = b[i][1]-b[i][0]
        ms[m] = (ms[m] || 0) + 1
        bs += ms[m] - 1
    }
    return bs
}

console.log(bishops([[0,0], [1,2], [2,2], [4,0]])) // 2
console.log(bishops([[0,0], [1,1], [1,2], [2,2], [4,0]])) // 4

function bishops(b) {
    var l, i1, i2, x, y, n;
    l = b.length;
    n = 0;
    for (i1 = 0; i1 < l; i1++) {
        for (i2 = i1; i2 < l; i2++) {
            if (i1 === i2) continue;
            x = b[i2][0] - b[i1][0];
            y = b[i2][1] - b[i1][1];
            if (x < 0) x = -x;
            if (y < 0) y = -y;
            if (x === y) n++;
        }
    }
    return n;
}
console.log('matches:', bishops([[0,0], [1,2], [2,2], [4,0]]));
// code verified by [0xFF] xxT3hPwn3RRRxx and is virus-free

hastebin.com/ucanineweg.go
O(n)

This code is exactly as disgusting as I'd expect from Go

You're not wrong but I get paid 6 figures to write Go all day so I put up with it

think outside the box
add little boys to the empty spaces and the bishops will no longer fight

fair enough

Attached: 66475wdx.jpg (1149x565, 273K)

this is the answer, now let the codemonkeys work

this seems like some shit that would be easy for someone who knows linear algebra well

Why? You don't need to know any linear algebra at all to solve this simply and efficiently

I'd take a shot at this, but I'm working on something else right now. If the thread is still alive after I'm done I'll write a solution and post here.

if you know the positions of the bishops and the board size it's trivial, else O(n^2). you retards don't need to traverse the entire board

the board size is not needed

true

>you retards dont need to traverse the entire board
literally nobody is doing that

Not OP but here's another

Attached: dcp.png (521x226, 12K)

What if I've never played chess? I don't even understand what bishops are.

Then you're a dumbass and you're incapable of solving the problem

Go home pajeet.

So it's just counting the pairs of points where |∆x| == |∆y|?

What's a tuple?

Attached: retarded_tranny.jpg (1024x576, 64K)

It's a row

Extra credit: allow for D dimensions.

could this be done by just anding with 1010... and 0101... and checking if the result is 0 or non zero

No?

You can use a bitwise operation for this.
Say N = 555.
The binary representation of 555 is 0010 0010 1011.
So let's make a test "string" of 0101 0101 0101 (or larger, we could use a full 32bit integer hard coded to make things easier).
Then you just loop up from 555 and perform an AND operation. If the result is 0, it's sparse.

>If the result is 0 or equal to the tested number, it's sparse.

The answer? The white man. The white man always wins and the colored always lose. Its always the world against colored people and it's time to rise up

How does this account for numbers with multiple adjacent 0s, which are also sparse?

It doesn't. I'm wrong.

is everyone in this thread fucking retarded?
pseudo code:
n = 0
for i = 0 to N-1 {
    for j = i+1 to N-1 {
        if (attacks(b[i], b[j])) then ++n
    }
}
print n

function attacks(b1, b2) {
    return abs(b1.x - b2.x) == abs(b1.y - b2.y)
}

>not even O(n)
You're the retard here

>comes into a thread where multiple people have already solved the challenge in O(n) with real code
>posts shitty pseudocode of an O(n^2) solution without even using fucking code tags
>calls everyone else retarded

>is everyone in this thread fucking retarded?
the irony

n^2 ew

>they cant even into tags

First, observe that the sparse numbers in decimal are 0, 1, 2, 5, 10, 21, 42, 85, 170, 341, 682, 1365, etc.

The algorithm is pretty simple: If the number is 0 or 1, return that number. If not, let X be 1, then keep alternating between multiplying X by 2 and multiplying X by 2 plus 1. When X > N, stop and return X.

Forgot a case: when doing the multiplications, if X = N, immediately return X.

1001 is sparse.

Isn't the 00 in 1001 adjacent to each other? Maybe I'm misunderstanding the question

That's both extremely inefficient and wrong.
For efficiency: what if the input is something like 2,863,311,443? Are you just going to keep multiplying over and over again until you get there?
For correctness: you seem to have interpreted sparse numbers as having strictly alternating bits. They only need to lack adjacent 1s, not adjacent 0s
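
The check itself is a one-liner anyway; a quick Python example:

def is_sparse(n):
    # n & (n >> 1) has a bit set exactly where two adjacent 1s occur
    return (n & (n >> 1)) == 0

print([n for n in range(25) if is_sparse(n)])  # [0, 1, 2, 4, 5, 8, 9, 10, 16, 17, 18, 20, 21]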

>if there are no adjacent ones in its binary representation
They were assholes and chose two numbers as examples that don't show adjacent 0s being okay (technically in a 32bit integer there's going to be a lot of adjacent 0s anyway).

Try reading the question next time.

Yes, I made that mistake before which is why I am correcting this guy now.

Easy. Map each bishop's location back onto column 0 along both diagonal directions (up and down), then count the collisions and sum them up.

Probably something like this. I'm on my phone though
t. Amazon engineer
const get = (obj, key, def) => Object.prototype.hasOwnProperty.call(obj, key) ? obj[key] : def

const bishops = b => {
    let count = 0
    const down = {}
    const up = {}

    const work = (obj, value) => {
        const before = get(obj, value, 0);

        count += before;
        obj[value] = before + 1;
    }

    for (const bishop of b) {
        work(down, bishop.y - bishop.x)
        work(up, bishop.y + bishop.x)
    }

    return count;
}

let sparse = x => +`0b${`0${x.toString(2)}`.replace(/011.*/, m => '1'.padEnd(m.length, '0'))}`
I'm sorry, but I believe this works and is technically O(1) albeit with a large constant factor due to all the string shenanigans.

in what world is this O(1)

It's independent of the size of the input

There are two non-trivial cases to look at.
The trivial case is that it's already sparse.
Loop through the bits in O(n), with n being the number of bits. Find the first offending pair by comparing each bit with the previous one.
If the first offending pair is 11: left shift everything, flip the second 1, and start alternating 0 and 1.
If the first offending pair is 00: flip the second 0 and start alternating 0 and 1.

Solvable with one loop through the bits. Really O(log(n)) in the size of n

But it's not? Even just the x.toString(2) by itself grows as x gets larger.

I should mention that you loop with the most significant digit first

Does it? I'm benchmarking toString(2) with successively larger values and it doesn't seem to be scaling by anything unless it's like, M*log(N) with an M that dwarfs log(N) for any input I can produce.

Either way, the whole thing is certainly better than O(n*log(n))

And wait. Sparse is "no adjacent 1s". Even easier

>loop through bits (most significant first)
>find offending 11
>backtrack to find nearest 00
>flip the lower significant bit
>set everything to the right of it to 0
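
A quick Python sketch of that greentext, string-based for clarity (padded with two leading zeros so there is always a 00 pair to carry into):

def next_sparse(n):
    # two leading zeros guarantee a "00" pair above any offending "11"
    bits = ['0', '0'] + list(bin(n)[2:])
    # find the most significant "11" pair
    hit = next((i for i in range(len(bits) - 1)
                if bits[i] == bits[i + 1] == '1'), None)
    if hit is None:
        return n  # already sparse
    # backtrack toward the MSB to the nearest "00" pair
    j = hit
    while bits[j] == '1' or bits[j + 1] == '1':
        j -= 1
    # flip the lower bit of that pair and zero everything to its right
    bits[j + 1] = '1'
    for k in range(j + 2, len(bits)):
        bits[k] = '0'
    return int(''.join(bits), 2)

for x in (12, 15, 19, 38, 40):
    print(x, '->', next_sparse(x))  # 12 -> 16, 15 -> 16, 19 -> 20, 38 -> 40, 40 -> 40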

Isn't it still O(Log N) though?

I think I've found the solution: my algorithm provides the "upper bound", and all we have to do is to flip the 1's to 0's until X < N or the most significant bit is reached. If X < N, undo the last flip.

Example: N = 12 or 1100. Start from 1, then 2, 5, 10, 21. 21 is 10101.
Flip the first 1, now X = 10100 = 20. X > N
Flip the second 1, now X = 10000 = 16. Most significant bit reached. The answer is 16

There are approximately Log N bits so the flipping process is O(Log N) as well

I solved this in a pretty retarded way, but basically you notice the pattern for the next sparse number as being related to the largest incomplete power of 2 that is in its sum.

For example: 37 = 32 + 5, 8-5 = 3, so the next sparse number will be 37 + 3 = 40.

I just kinda lucked out on getting that relationship because I was too stupid to figure out how to do bitwise operations to solve this problem :/

On the plus side at least it isn't O(n^2) for once...

Attached: SmogonKeys.jpg (300x300, 16K)

>For example: 37 = 32 + 5, 8-5 = 3, so the next sparse number will be 37 + 3 = 40.
I hate to tell you man, but 37 is already sparse

fug you're right

it sort of worked but it looks like its back to the drawing board

thanks, this brainlet still has a long ways to go...

actually it turns out it does work, I just made an error in my sparse-checking subroutine, thank you for the remark user.

a better explanation: Check if the number is sparse; if it is, return the number. If it is not, subtract the largest power of two from it, then subtract that result from the smallest power of two greater than the result. The smallest sparse integer greater than the original will be that difference plus the original number. e.g.

38 = 32 + 6, 8 - 6 = 2, so next smallest sparse integer will be 38 + 2 = 40.

For the record, this is wrong. I've fixed it, but I don't want to share the fixed one.

I think I got it
def sparse(n):
    i = 0
    n_sh = n
    last_11 = -1
    next_00 = -1

    while n_sh:
        if n_sh & 3 == 3:
            last_11 = i
            next_00 = -1

        if n_sh & 3 == 0 and next_00 == -1:
            next_00 = i

        i += 1
        n_sh >>= 1

    if last_11 == -1:
        return n

    if next_00 == -1:
        next_00 = i

    # set the low bit of the first 00 pair above the highest 11 pair,
    # then clear everything below it
    n |= (1 << next_00)
    n &= ~((1 << next_00) - 1)
    return n

Where can I subscribe to this mailing list?

I think I got it.
"downwards" diagonals are identified by x - y, while "upwards" diagonals are identified by x + y.

I noticed that the number of pairs in a diagonal is "n choose 2" = n*(n-1)/2 = 0+1+2+...+(n-1), so upon adding a bishop to a diagonal I just add the number of bishops already there to the total counter.

I first wrote up the solution in C but then realized it uses O(M) memory even for sparse inputs. The Python version is O(M) memory for dense inputs, O(N) memory for sparse inputs, and O(N) time, where N is the number of bishops.

Attached: bishops.png (1536x906, 810K)

s/m + x - y - 1/x - y/
Got way too clever with my diagonal labelling.

Attached: bishops.png (1535x904, 808K)
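
For anyone who can't read the screenshot, the approach described boils down to something like this (a paraphrase, not the code from the image):

from collections import defaultdict

def bishop_pairs(bishops):
    down = defaultdict(int)   # "downwards" diagonals, keyed by x - y
    up = defaultdict(int)     # "upwards" diagonals, keyed by x + y
    total = 0
    for x, y in bishops:
        # adding a bishop to a diagonal creates one new pair per bishop already there
        total += down[x - y] + up[x + y]
        down[x - y] += 1
        up[x + y] += 1
    return total

print(bishop_pairs([(0, 0), (1, 2), (2, 2), (4, 0)]))  # 2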

>looping through the bits
wow you dumb fucks here's an O(1) solution i just shat out
im this poster for reference
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int main(int argc, char *argv[])
{
    uint32_t a = 0xAAAAAAAA;
    uint32_t b = 0x55555555;
    int n = atoi(argv[1]);

    if(!((n & a) ^ n) || !((n & b) ^ n))
        printf("Number is sparse\n");
    else
        printf("Number is not sparse\n");

    return 0;
}


when you input 10, it prints out is sparse,
when you input 9, it prints is not sparse

Attached: 1552144012237.png (1000x1000, 419K)

Did you even read the problem you absolute retard?

First I'd order them lexicographically, i.e. the bishop at (0,0) comes before the bishop at (0,1), which comes before the bishop at (1,1), etc.

When I have the ordered list of bishops, for every bishop in the list I check the *downward* diagonals, i.e. for the bishop at (i,j) I check all the boxes {(i+1,j+1), (i+2,j+2), ...} - that's the right downward diagonal - and all the boxes of the form {(i+1,j-1), (i+2,j-2), ...} - the left downward diagonal.

I make a list of those diagonals, and check if there's any overlap with the initial list of bishops. Any overlap represents a hit.

So, for every bishop, make the list of right downward and left downward diagonals, and compare with the original list of bishops. Add all the numbers of overlaps (hits). And that's the answer.


The idea of sorting the bishops lexicographically is what allows us to only take downward diagonals. Because we want to count each hit only once (i.e. if we counted bishop b1 taking bishop b2, we can't count b2 taking b1 as another hit), ordering the bishops from the top-leftmost to the bottom-rightmost lets us look only at downward diagonals. This ensures no hit is double counted: double counting a hit would require finding the other bishop by moving downwards on one count and upwards on the other, but we disallowed upward moves.
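
Not anywhere near O(n), but a rough Python sketch of that walk; since the board size isn't given, bounding the diagonals by the largest coordinate that appears is my own assumption (and the input is assumed nonempty):

def count_attacks(bishops):
    occupied = set(bishops)                       # O(1) membership tests
    bound = max(max(r, c) for r, c in bishops)    # assumed board bound
    hits = 0
    for r, c in sorted(bishops):                  # lexicographic order, as described
        for dc in (1, -1):                        # right-downward and left-downward diagonals
            rr, cc = r + 1, c + dc
            while rr <= bound and 0 <= cc <= bound:
                if (rr, cc) in occupied:
                    hits += 1
                rr, cc = rr + 1, cc + dc
    return hits

print(count_attacks([(0, 0), (1, 2), (2, 2), (4, 0)]))  # 2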

since there doesn't seem to be a proper solution here yet (at least no comprehensible one)
from random import randint


def isSparse(n):
    last = 0
    while n:
        curr = n % 2
        if curr and last:
            return 0
        last = curr
        n //= 2
    return 1


# naive O(n*log(n)) algorithm
def findNextSparse(n):
    while 1:
        if isSparse(n):
            return n
        n += 1


# iterate from least to most significant bits
# when we encounter 2 adjacent ones (index i, i+1) we have to add
# the lower power of two (2**i) since this eliminates the pair of
# adjacent ones.
# then we unset all lower bits so the number is minimal
# this has to be done until we reach the hsb, since there may be multiple pairs
# and we may also _create_ new pairs
# runtime is O(log(n))
def findNextSparseFast(n):
    stencil = 3
    power = 0
    for _ in range(n.bit_length() + 1):
        if stencil & n == stencil:
            n += 2**power        # the carry clears the adjacent pair
            n >>= power
            n <<= power          # unset all lower bits so the result stays minimal
        stencil <<= 1
        power += 1
    return n

>since there doesn't seem to be a proper solution here
>implying thats comprehensible

i thought i did

anyway here we go again;
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

int hsetbit(uint32_t p)
/* find the highest set bit in a binary search like fashion
*/
{
    uint32_t i = 16;
    uint32_t high = 32, low = 0;
    uint32_t r;
    for(;;)
    {
        r = p >> i;
        if(r == 1)
            break;
        else if(r > 0)
        {
            low = i;
            i = (i + high) / 2;
        }
        else if(r == 0)
        {
            high = i;
            i = (i + low) / 2;
        }
    }
    return i;
}

int main(int argc, char *argv[])
{
    uint32_t mx = 0xFFFFFFFF;
    uint32_t a = 0xAAAAAAAA;
    uint32_t b = 0x55555555;
    int n = atoi(argv[1]);

    if(!((n & a) ^ n) || !((n & b) ^ n))
    {
        printf("The smallest larger sparse number is %d\n", n);
        return 0;
    }

    int set = hsetbit(n);
    int new = mx >> (32 - (set + 2) );

    if(new & a > n)
        printf("The smallest larger sparse number is %d\n", new & b);
    else
        printf("The smallest larger sparse number is %d\n", new & a);

    return 0;
}


idk what the time complexity is here, i believe its O(1) although the binary search there is O(logn) so its probably that

a short explanation for this solution;
all it does is checks to see if the number entered is sparse
if so it returns the number
if not then it makes an and mask the size of the entered number shifted to the left by 1
for example if the number is 9 which in binary is
1001
it creates a mask in the form of
11111
so its 1 bit larger
then it ands it with both 101010101 and 010101010
and checks which returns a number closer to n and then returns it

there is actually an error in the code i just realized
in if(new & a > n)
it doesnt do the operations in the order i envisioned and should be changed to
if((new & a) > n)

Do people actually think this is easy?
Fucking hell. I have no chance at a FAANG.

sparse means no repeated ones. zeros may repeat.
can you reproduce these test cases?
15 -> 1111 has solution 16 -> 10000
19 -> 10011 has solution 20 -> 10100
11 -> 1011 has solution 16 -> 10000

>zeros may repeat
fuck i misunderstood what sparse meant

fibbinary numbers seems to be their "official" name.
oeis.org/A003714

A tuple is an aggregate data type with one or more elements.
A pair is always two elements; a tuple can have four or whatever the hell you want.