Prove that you belong.
Retard Test
kill yourself, go ask your TA for homework help
Beep boop howdy partner! I'm the cowboy robot!
What base log?
>[2]
>the median is 2.0
wat
1.75
user, if you don't know that 2 = 2.0 then you're clearly too stupid to even attempt to take the retard test.
OP is a faggot, prove that you belong first faggot
>2019.0
>being too stupid to take the retard test
Thanks. Don't contact us, we'll call you.
Doesn't matter for big O notation. A different base would just result in a constant factor, which is disregarded in big O notation.
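Spelling it out: by the change-of-base identity, log_a(x) = log_b(x) / log_b(a), so e.g. log_2(n) = log_10(n) / log_10(2) ≈ 3.32 × log_10(n). Any base is any other base times a constant, and O(c·f(n)) = O(f(n)).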
13
import stackOverflow.answer
I'm not doing your homework kys
The fact that they want a log running time on an array with constant-time lookup makes it a fucking joke. If you can search for a number in log(n) time, it's already obvious.
Isn't this just merge sort + do median?
binary search, you basically gave it away by saying the runtime is log(m+n)
nums1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
nums2 = [11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
Tell me how you would find the median.
You only have to do half of the merge sort, and you don't have to store the result. Just locate the middle element(s) and average them if n+m is even.
>You only have to do half of the merge sort
So O(n+m). Don't call us.
Okay I'm retarded. I looked it up and there is no chance I would have deduced that. O(m+n) is all they would have gotten from me.
nested binary search?
pick the middle of the first array
bisect the second array for it.
if the sum of the indices overshoots, repeat with the first half of array1, else with the second half
excluding lists where the ranges don't intersect which is trivial and ignoring that some would have even or odd lengths which is annoying
use std::equal_range to binary search for an index in the larger list where the sum of the indexes is equal to (a.length() + b.length()) / 2;
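Rough sketch in C of what these two posts are describing, for anyone following along: binary search the shorter array for a partition where everything left of both cuts is <= everything right of them, which is O(log(min(m,n))). Names and layout are mine, and it assumes both arrays are sorted ascending.

#include <stdio.h>
#include <float.h>

double find_median(const double *a, int m, const double *b, int n){
    if(m > n) return find_median(b, n, a, m);   /* always search the shorter array */
    int lo = 0, hi = m, half = (m + n + 1) / 2;
    while(lo <= hi){
        int i = (lo + hi) / 2;                  /* elements taken from a */
        int j = half - i;                       /* elements taken from b */
        double a_left  = (i == 0) ? -DBL_MAX : a[i - 1];
        double a_right = (i == m) ?  DBL_MAX : a[i];
        double b_left  = (j == 0) ? -DBL_MAX : b[j - 1];
        double b_right = (j == n) ?  DBL_MAX : b[j];
        if(a_left <= b_right && b_left <= a_right){
            /* valid partition: the left halves hold the smallest half of all elements */
            double left_max  = a_left > b_left ? a_left : b_left;
            double right_min = a_right < b_right ? a_right : b_right;
            return ((m + n) % 2) ? left_max : (left_max + right_min) / 2;
        }
        if(a_left > b_right) hi = i - 1;        /* took too many from a; go left */
        else                 lo = i + 1;        /* took too few from a; go right */
    }
    return -1; /* unreachable for valid sorted input */
}

int main(void){
    double nums1[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    double nums2[] = {11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21};
    printf("%f\n", find_median(nums1, 10, nums2, 11)); /* 11.0 */
    return 0;
}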
I could probably solve this, but I'd first have to google what a median is. Who the fuck remembers shit like that?
How do you not know what a median is.
sum all the numbers in both arrays then divide by the size of both arrays
>I don't know what a median is
I know it has something to do with the middle
median is a diablo 2 mod what are you talking about
It's been over 6 hours and not a SINGLE person has coded a solution.
Unbelievable.
None of you will ever make it at a fortune 100 tech company.
Nums1[nums1.len()-1]
There's an O(log(m+n)) solution? Now I know why I didn't get that one job lol
Wow, what a useless exercise!
they should have prompted you if there was a faster solution
Don't call us, we'll call you.
I wouldn't work for some faggy company who employs arbitrary and useless examples during interviewing rather than a somewhat real world task that they want done.
find the median income of these two sets of clients in log(n + m) time
Honestly I think that one interviewer and I just didn't mesh. Even when I explained to him why the solution I gave was O(n+m) he kept making me re-explain. Perhaps he was trying to hint that there was a faster solution, perhaps I just sucked at explaining things to him, but I'm not sure.
This is why I prefer all-day interviews with 5-6 people, so one bad interaction doesn't fuck up the whole thing, but I only had that one guy scheduled.
They reached out to me like a couple months later again to try to interview me again and even now periodically try to wine and dine me, but some of my investments paid off so I'm doing my own thing.
>I wouldn't work for Google, Amazon, IBM, Microsoft, Apple, etc
I know you wouldn't work for those companies. You're not smart enough to.
I am sure Google's weedout questions would be more advanced than finding a median over sets of numbers
Not really. Google's most famous weedout question was "invert a binary tree" which is much easier than "find the median of two sorted arrays of numbers in O(log(n+m)) time." Their other questions are all fairly similar, you only have 45 minutes and they usually want to talk about stuff.
damn dog that's
wow
would shit myself and die once they asked me to write the non-recursive version
I mean they're obviously harder than "invert a binary tree" and I imagine that Macports dude would've gotten a followup question from that interviewer before the next one showed if he didn't sperg out.
But when I hear people talk about the google whiteboarding process being too hard I am always dumbfounded. Sure sometimes you fuck up and that sucks, but that's on you and it could happen at any company. Their interviews are fair.
I have never been asked to actually write out a non-recursive version of a function at a whiteboard interview. I usually write the recursive one, because that's what they're looking for, and then I say something to the effect of, "In real life, you'd run out of memory, so you'd do it iteratively" and the interviewers agree and then you move on.
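For what it's worth, the iterative version isn't that scary. Here's a rough C sketch of inverting a binary tree with an explicit stack instead of recursion; the node struct and the fixed stack bound are assumptions for illustration, not anyone's real interview answer.

struct node { struct node *left, *right; };

void invert(struct node *root){
    struct node *stack[1024];   /* assumes the tree has <= 1024 nodes */
    int top = 0;
    if(root) stack[top++] = root;
    while(top > 0){
        struct node *cur = stack[--top];
        struct node *tmp = cur->left;   /* swap the children */
        cur->left = cur->right;
        cur->right = tmp;
        if(cur->left)  stack[top++] = cur->left;
        if(cur->right) stack[top++] = cur->right;
    }
}

int main(void){
    struct node a = {0, 0}, b = {0, 0}, root = {&a, &b};
    invert(&root);              /* root's children are now swapped */
    return 0;
}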
got sent a timed HackerRank evaluation yesterday, spent most of the time writing the elegant recursive solution, and it failed some of the tests via stack overflow
I've never done one of those, but whiteboard questions are unlikely to be run through unit tests for obvious reasons.
What tests did you fail?
when you submit your code it runs it with a dozen hidden inputs
some of the inputs failed because of stack overflow or timeouts even though the algo was sound, which means I technically never made a passing solution
ended up nearly finishing the iterative version and commenting it out right as the timer hit zero with an explanation
>a dozen hidden inputs
Honestly that's a realistic representation of production code.
median income of clients herp derp thats useful
>what are Red-Black trees
retard
calm down. It's the only one Rajesh taught me at the bootcamp
I'm assuming we already have the lengths of the arrays stored, because otherwise it'd be O(m+n) just to figure them out, and the indices would be useless and whatnot. We also know that we can find the median of any sorted array in O(1) because it's just at half the length (and maybe averaging some shit, but you know what I mean).
I'm also not gonna write code because I'm drunk.
Let's look at this case:
We know the following result will be 10
1 2 9 10 11 12 13 => 10 med1
3 5 6 20 20 20 => 13 med2
combined 1 2 3 5 6 9 10 11 12 13 20 20 20 => target_med = 10
But how do we get here??
(med1+med2)/2 = 11.5 (the guessed_median)
So we binary search each array O(logn). Stars are where the guessed_median is going to be in each case.
1 2 9 10 11 * 12 13
3 5 6 * 20 20 20
We know the length of each array subset: we can split them up and get their lengths straight from the indexes, an O(1) operation.
5, 2
3, 3
Add those together vertically
8, 5
8 is how many numbers we have under our guessed_median, 5 is how many numbers we have above.
We know we're a bit off from the true median, (which we know is 10, while we're at 11.5) because 8 != 5. (or 5-1 because of off-by-ones with evens or odds or whatever the fuck)
We're gonna keep track of those numbers with some more variables that I don't feel like writing out, because they're either gonna be global or passed through to the recursive function, but you know.
So we call our recursive function on the lower halves, because 8 > 5 and 8 is the "lower" half. Each operation in our recursive function will be O(1) and it will be working with things half the size, so the total thing will be O(log(n)) but we never need to do the original O(log(n)) search.
part 1/2
part 2/2
So we look and find that the median of the lower half of the top array [1 2 9 10 11] is 9. O(1)
And bottom [3 5 6] is 5. O(1)
Average those together to get 7.
So now we have
1 2 * 9 10 11
3 5 6 *
2, 3
+3, 0
-------
5, 3
But we know we have a 5 above from before so we're at 5, 8.
But now our search space is smaller. We recurse again with [9 10 11] and [], and our median is 10 and N/A, bringing us to 10, with 1 below and 1 above; subtract 1 from above because it landed right on the number (or do whatever math) and you have 3 below 10, 3 above 10, 3 below N/A and 3 above N/A, so your numbers are 6, 6 (only adds up to 12 now because we got rid of the spot 10 is in), and we know that we have 10 as our median because it has the same count on each side as before. Probably could've done this without the initial search, but whatever.
I have some typos in here and wrote the wrong terms for things a couple times but hopefully you guys get what I'm going for.
It isn't necessary to guess a pivot. Your worst case runtime is suboptimal.
Yeah that's what I figured and noted later on, but since I already wrote it out I didn't wanna go back.
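For reference, the no-pivot-guessing version looks something like this in C (naming mine): the standard k-th-smallest recursion on two sorted arrays. Each call discards about k/2 elements that provably can't contain the answer, so getting the middle element(s) is O(log(m+n)).

#include <stdio.h>

/* k-th smallest (1-based) of two sorted arrays, discarding ~k/2 elements per call */
double kth(const double *a, int m, const double *b, int n, int k){
    if(m > n) return kth(b, n, a, m, k);  /* keep a as the shorter array */
    if(m == 0) return b[k - 1];
    if(k == 1) return a[0] < b[0] ? a[0] : b[0];
    int i = k / 2 < m ? k / 2 : m;        /* candidates to discard from a */
    int j = k - i;                        /* j <= n holds because m <= n */
    if(a[i - 1] < b[j - 1])
        return kth(a + i, m - i, b, n, k - i);  /* a[0..i-1] can't hold the k-th */
    else
        return kth(a, m, b + j, n - j, k - j);  /* b[0..j-1] can't hold the k-th */
}

double median(const double *a, int m, const double *b, int n){
    int t = m + n;
    return (t % 2) ? kth(a, m, b, n, t / 2 + 1)
                   : (kth(a, m, b, n, t / 2) + kth(a, m, b, n, t / 2 + 1)) / 2;
}

int main(void){
    double n1[] = {1, 3}, n2[] = {2};
    double n3[] = {1, 2}, n4[] = {3, 4};
    printf("%f %f\n", median(n1, 2, n2, 1), median(n3, 2, n4, 2)); /* 2.0 2.5 */
    return 0;
}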
>hard coding an "edge case"
nums1 = [1, 2, 3, 4, 5, 6, 7, 8, 10, 11]
nums2 = [9, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]
Now tell me how you would find the median.
I'm not convinced the person who made this knows what "median" means
I'm not convinced you know what it means.
Neither am I, user...
What would be the median in the 2 given examples, according to you?
Can't be done without merging, which is already O(n). Bait.
>Can't be done without merging
Literal brainlet.
The median is the middle value of a sorted array; when the length is even, it's calculated as the mean of the two middle numbers.
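For example, the median of [1, 2, 3] is 2, while the median of [1, 2, 3, 4] is (2 + 3) / 2 = 2.5. That's also why example 2 returns a value that isn't in either array.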
OP just wants to know how to sort.
Empty hook.
I looked up median; now I know it's just the average of the two middle values. I originally thought you had to just pick one, rounding up or down.
Genuine question: I do electronics, not programming, so I'm not privy to all this algorithm stuff. What sort would I use practically? The algorithms themselves are stupid easy (bubble sort, merge sort), but I have no idea how big O notation works (are there other notations?). So if I wanted to do this challenge, would it work to, for an odd-length list, find the biggest and smallest numbers, then delete them until you have one value left? I know that'd be slow for all but the shortest lists, so it'd probably be faster for long lists to sort and then just pick the middle value. Is it always fastest to just sort and then take the median, or is there some predictable threshold where the brute-force method wins? I mean, I'm sure there is, but how would I go about finding it? I am not a CS man, guy
big o notation is actually really fucking easy to understand
basically it's a theoretical formula (that ignores a shitload of real world stuff) for calculating how optimized a function is
i forget what the exact specifics are, it didn't really leave that much of an impression, but from what i remember it's trivial
I think that's what confused me. I think of performance in terms of processor instructions, and language implementation obscures that, so I didn't get how you could predict that with a notation. Even with equivalent code between two languages, they could handle loops differently. So is the notation just how many "instructions" (functions, not processor instructions) are run? Like, something as basic as list[i] = list[i-1]? Where do you draw the line for what counts as an instruction in pseudocode? If I had a function that ran as
temp = list[i]
list[i] = list[i-1]
list[i-1] = temp
I assume that wouldn't count as one instruction. Why wouldn't i-1 count as an instruction? Is there a list of rudimentary operations that count as one "instruction"? How do you even quantify that when all these functions are just moving memory around, and one instruction may have 10 times the processor operations as another? Is big O notation actually useful for anything? Could you use big O notation for an assembly program?
>Is big O notation actually useful for anything?
the only thing i can think of where it'd have an actual use is academia
stuff like designing generic algorithms that can later be optimized on a per-implementation and per-platform basis, like encoders, cryptography, etc.
no big O is very important, and it's not how many instructions per item at all
it's how the time complexity grows with n
so for example, if you double the number of elements in the list, an O(n + m) algo would take twice as much time but an O(log(n + m)) one would only take a little bit more time
furthermore this means if you wrote your shitty O(n + m) implementation in C, and someone else wrote their O(log(n + m)) one in JavaScript
theirs would quickly be faster than yours once you raised the number of elements far enough
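To put numbers on it: at n = 1,000,000, an O(n) scan does about a million steps while a binary search does about log2(1,000,000) ≈ 20, so even a 1000x slower language wins with the better algorithm once n is big enough.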
O notation is only useful for big inputs.
thanks. I'm still somewhat confused by how it can accurately track execution time, even in theory, but that does put it into context
if you have to write a function to check if a number is in a sorted list or not
function A checks each item in a row until you find it
function B uses a binary search to find it
function A has to check all the items (n items) to be sure if it's in the list or not
function B only has to check log(n) items to absolutely know for sure that the list does or doesn't contain the value
it doesn't matter if in one language a comparison takes 0.1 seconds, or in another language a comparison takes 10000 seconds, there will always be a point where the list is long enough that 0.1 × n is a longer time than 10000 × log(n)
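A minimal sketch of function B in C, assuming a sorted ascending array (names are mine):

#include <stdio.h>

/* returns 1 if x is in the sorted array, checking about log2(n) elements */
int contains(const double *a, int n, double x){
    int lo = 0, hi = n - 1;
    while(lo <= hi){
        int mid = lo + (hi - lo) / 2;  /* written this way to avoid overflow */
        if(a[mid] == x) return 1;
        if(a[mid] < x) lo = mid + 1;   /* x can only be in the upper half */
        else           hi = mid - 1;   /* x can only be in the lower half */
    }
    return 0;
}

int main(void){
    double a[] = {1, 3, 5, 7, 9};
    printf("%d %d\n", contains(a, 5, 7), contains(a, 5, 4)); /* 1 0 */
    return 0;
}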
big O tracks the average growth in execution time depending on how large the input is
it's usually safe to naively guess that the binary search will be proportionally faster and faster than a 'check every item' search.
you aren't calculating the execution time, it's pretty much relative to 'if you'd written the other algorithm with the same data set in the same language on the same hardware'
algorithms can be analyzed on how many steps they take to complete on average, depending on what the input is, this is what big O analyzes
most problems are just versions of other problems where you've already memorized what the best algo is; this is pretty much what a recruiter is testing for with this question
when it says 'should be log(n + m) complexity' and you also see that you're searching for something, those are two massive hints that you should be thinking about binary search
well i'm feeling like a retard for sure
#include <stdio.h>
#include <stdlib.h>
#define m 2
#define n 2
int main(){
    double nums1[m] = {1,2};
    double nums2[n] = {3,4};
    int size = m + n;
    double half = size / 2; //integer division truncates for odd sizes (the bad casting admitted below)
    int i = 0, j = 0, k = 0;
    double cur = 0;
    //get to 1 before median
    while(i++ < half){
        if(j < m && (k >= n || nums1[j] < nums2[k]))
            cur = nums1[j++];
        else
            cur = nums2[k++];
    }
    //always averages two elements, so odd sizes come out wrong too
    double next = (j < m && (k >= n || nums1[j] < nums2[k])) ? nums1[j] : nums2[k];
    printf("%f\n", (cur + next) / 2);
    return 0;
}
Do your own homework
oops, totally doesn't work, ha
It can't tell you accurately. You can do a test run with small inputs and then use O notation to extrapolate to big inputs, but it won't tell you anything by itself.
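e.g. if your O(n log n) sort takes 1 ms at n = 1,000, you'd guess roughly 1 ms × (10^6 × log(10^6)) / (10^3 × log(10^3)) ≈ 2 seconds at n = 1,000,000; constants and cache effects will move that around, which is the "won't tell you anything by itself" part.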
>The answer is in the example but it's so simplified you feel stupid reasoning through it
Who is the slow person now, OP?
>example 2
>median not in the set
Is this the actual retard test?
The log time is throwing me off. I'm sure I could do it in n time. This one's got my noggin a-joggin'.
if nums1 == [1, 3] and nums2 == [2]: return 2.0
elif nums1 == [1, 2] and nums2 == [3, 4]: return 2.5
else: # THIS SHOULD BE UNREACHABLE
    return -1
okay now it works, just had some bad casting earlier, though I think it's only O((m+n)/2) for now. Definitely can be faster with fewer, smarter checks.
#include <stdio.h>
#include <stdlib.h>
#define m 2
#define n 1
int main(){
    double nums1[m] = {1,2};
    double nums2[n] = {3};
    int size = m + n;
    double half = (double)size / 2;
    int i = 0, j = 0, k = 0;
    double cur = 0;
    //get to 1 before median
    while(i++ < (int)half){
        if(j < m && (k >= n || nums1[j] < nums2[k]))
            cur = nums1[j++];
        else
            cur = nums2[k++];
    }
    //next element in merge order; median is it (odd size) or the average (even size)
    double next = (j < m && (k >= n || nums1[j] < nums2[k])) ? nums1[j] : nums2[k];
    printf("%f\n", (size % 2) ? next : (cur + next) / 2);
    return 0;
}
that's bretty funny
They want you to avoid merging the entire dataset; realistically, though, I'm sure a compiler could optimize a presorted merge. The parameters imply reusability, to the extent that it needs to be as fast as possible, but I just don't see a scenario where you'd want the median of 2 given arrays and wouldn't also need to account for an arbitrary number of input arrays.
Actually the scenario of making algorithms to prove oneself is a shortcut to a generic sense of self zzzzzzz NPCism
Ye, I cannot understand why, from a data processing perspective, you'd value the average of 2 medians. Is it a set, or are they individually significant values (completely unattached to anything)?
Also, in what world are sets pre-sorted?
If m+n is odd, output the middle number; if even, output the average of the two numbers in the middle.
Big if true
It's a set of integers. Why wouldn't you just return a pair? We're processing data; we can't just throw 2 pieces in a blender and call it computer science.
what do you return if the median is one number, not a pair?
std::optional<std::pair<double, double>>
Return one number
11.5 or 11.18 maybe. Idk, never went to college or tried learning yet.
My stupid cheat. Not sure how correct or wrong.
Merge sort is n log(n), fucking idiot. Not actually sorting anything makes it log(n), so log(n+m).
>Google, Amazon, IBM, Microsoft, Apple
>need/have smart people
Stop being retarded.