For problem #2, consider that an interviewer may lead you a bit -- i.e., they're not as worried about you having every bit of background for the problem as they are about your ability to work through it. While I simply don't have the background necessary to prove optimality of the solution, I probably would have gotten very close to it fairly quickly if I had made the connection to completely disjoint cycles early on. With something close to the proper solution in mind, even Jython is fast enough to run enough randomized iterations to get within 0.1% in under a minute, on an old Core at 1.83GHz (I never split the work up across cores; also, Jython was no faster than CPython, but its random number generator was clearly more fair).
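For anyone curious, the kind of randomized run I mean looks roughly like this -- a Python sketch under my own assumptions (n = 100 and the trial count are mine; the strategy succeeds exactly when the random permutation has no cycle longer than n/2):

```python
import random

def simulate(n=100, trials=200_000):
    # Monte Carlo estimate: generate a random permutation, walk its
    # cycles, and count the trial as a win iff no cycle exceeds n/2.
    wins = 0
    for _ in range(trials):
        perm = list(range(n))
        random.shuffle(perm)
        seen = [False] * n
        ok = True
        for start in range(n):
            if seen[start]:
                continue
            length = 0
            i = start
            while not seen[i]:
                seen[i] = True
                i = perm[i]
                length += 1
            if length > n // 2:
                ok = False
                break
        wins += ok
    return wins / trials

print(simulate())  # lands near 0.31
```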
Yeah, and that would've been fine. Unless the problem is very easy (and more meant to see how you code), it's totally OK to have to ask some questions, get help, and otherwise go back and forth with the interviewer. For this problem, it's also probably OK not to know how to analyze the result. You should be able to say that the answer is the probability that there are NO cycles of length > N/2, even if you can't compute it. And then the interviewer might help you through that if they care: how many total combinations? In a cycle of length M, how many ways to arrange the cycle? How many ways to arrange the remaining N-M? How many ways to choose M things out of N? Each of those you should be able to do individually, since it's high-school-level discrete math.
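Putting those counting steps together (a sketch, assuming the classic N = 100 setup, which isn't actually stated in this thread): for M > N/2 a permutation can contain at most one M-cycle, and the number of permutations containing one is C(N,M) · (M-1)! · (N-M)! = N!/M, so the failure probability is just the sum of 1/M over M > N/2.

```python
from math import comb, factorial

N = 100  # my assumption for the problem size

# C(N, M) ways to choose the cycle's members,
# (M-1)! ways to arrange them into a cycle,
# (N-M)! ways to arrange everything else;
# the product simplifies to N!/M, so P(an M-cycle exists) = 1/M.
p_fail = sum(comb(N, m) * factorial(m - 1) * factorial(N - m) / factorial(N)
             for m in range(N // 2 + 1, N + 1))
p_win = 1 - p_fail
print(round(p_win, 4))  # 0.3118 -- the ~30% figure
```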
As for optimality, I don't know how to prove it either. But even if you can pass any info you want, you still can't do better than 50%. So in most scenarios, 30% will be good enough even if you could manage slightly better.
For problem 2: does the receiver need to be able to decipher Alice's message as well? Or does he simply need to get your message with as many bits as possible?
The receiver does not need to be able to decipher Alice's message, but this is not relevant. If Alice and Bob agreed on a convention beforehand (e.g., the first 30 bits are the message and the last bit is for parity, with Alice setting it so that the sum of all 31 bits is even), Bob can detect whether you have changed something. But he won't have enough info to figure out what you changed. Or, if they're willing to more substantially reduce the number of bits transmitted, they can select an error-correcting code with minimum Hamming distance 2D+1 -- such codes can recover the original message even after at most D bits are flipped (here D = 1).
So Bob being able to understand Alice's message would entirely be a function of the convention they use for their 31 bits; it has no bearing on how you transmit info to Bob by flipping 1 bit. You and Bob need to agree on a plan for him to extract as many bits of info from you as possible.
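A toy version of that parity convention (a Python sketch; the 30-bits-plus-parity layout is just the example convention from above, and the function names are mine):

```python
def with_parity(msg_bits):
    # Alice's side: append a parity bit so the count of 1s in all
    # 31 bits is even.
    assert len(msg_bits) == 30
    return msg_bits + [sum(msg_bits) % 2]

def tampered(bits31):
    # Bob's side: an odd 1-count means exactly one bit was flipped in
    # transit -- but this says nothing about *which* bit.
    return sum(bits31) % 2 == 1

msg = [1, 0, 1] + [0] * 27   # an arbitrary example message
sent = with_parity(msg)
print(tampered(sent))         # False: nothing changed yet

sent[7] ^= 1                  # you flip one bit in transit
print(tampered(sent))         # True: Bob detects a change, not its location
```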
eLui,
I think you made assumptions about what I was saying. That or I wasn't clear.
I'm shocked that any company would expect people to take 5 half days off of their current job to do 5 interviews. Are these 5 interviews all in one day?
I'd be more than willing to entertain hypotheticals just to show my thought process. But if people expect me to start talking about big-O-notation-type stuff and statistics, forget it. All I am saying is that there are a lot of very good people with a lot of experience who have not used that knowledge in a long time, and most of it is forgotten. If it is absolutely required for a job, fine. But if people are pulling these questions out for no valid reason, it is stupidity. You can filter people out just as easily by asking stupid things like "what is a good example of multiple inheritance?" I'd bet that 50% of interviewees would actually try to come up with good reasons when the answer is that there are no good answers.
Yeah, I went a bit overboard responding to your post. Claiming that the problem is silly to ask of anyone not doing stats (even though you didn't/don't know the solution) rubbed me the wrong way. That's really the main thing I wanted to respond to -- finding the best algorithm requires zero stats; it's all classical CS. And as I noted above, the analysis requires only a very basic understanding of how to count permutations and combinations.
Anyway, the most common thing I saw was 4-5 one-hour interviews, done during a single day at the company's offices. The sessions are typically back to back, with some time for a break or lunch in the middle. My shortest interviews took only ~3 hrs and my longest ~8 hrs. For companies I really cared about, I would hang around after the interview was done to meet more engineers, talk to people, and just hang out. Interviews were as much about companies testing me as they were about me testing the companies. Before that, they may also have 1 or 2 one-hour phone interviews that you can schedule at your convenience. I would guess that for higher positions (like an exec at Google), the interview process will probably be more drawn out than for "new grad" hires.
I would say that understanding Big-O is absolutely critical to any dev position (based around algorithm development) in silicon valley. Yeah if you're applying to a front-end position or doing purely client-facing stuff, it probably doesn't matter/you won't get tested on that. But those aren't exactly [classical] computer science either.
I mean when you come up with an algorithm, you should be able to tell me if its running time is constant, logarithmic, linear, polynomial, or exponential. There are of course other possibilities, but I think those are the biggies. I mean, if you were asked to implement a function returning the n-th Fibonacci number and you gave this:
int fib(int n) {
    if (n == 0) return 1;
    if (n == 1) return 1;
    return fib(n - 1) + fib(n - 2);
}
and thought that was a great solution, you'd be in a lot of trouble -- its running time is exponential in n. And no, (tail) recursion is not what's wrong here. Or similarly, if you knew the closed-form expression (using floating point) and claimed that it is "constant" time, I would be raising eyebrows there too.
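For contrast, a linear-time version is tiny -- here in Python for brevity, keeping the snippet's convention that fib(0) == fib(1) == 1:

```python
def fib(n):
    # Iterative O(n): carry the last two values forward instead of
    # recomputing the whole subtree at every call.
    a, b = 1, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib(10))  # 89
```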
Most of my interviewers asked me to design and code an algorithm (meaning the algorithm would be pretty simple) or just design an algorithm. In all cases, performance was the first thing asked about, because the naive O(n^3) algorithm is not practical for n in the billions, but the optimal O(n) one is.
I also wouldn't be all that happy with someone who says multiple inheritance is always bad. It is a part of the language, and in the right circumstances (+ with a well thought out style guide to ensure things don't go crazy), maybe it's also the best solution. I mean in Java, you can have as many interfaces as you want; in C++, there's no interface construct and multiple inheritance would be the only solution.
But if I were testing OO design (e.g., "design an elevator" is a popular one), and the person was using inheritance for has-a relations, not just is-a (so maybe the elevator inherits from the button panel, control unit, motor, etc.), that would be raising red flags all over the place. Definitely not a place where multiple inheritance (or any inheritance) should be used -- that's what composition is for.
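To make the has-a point concrete, composition looks like this (a toy Python sketch; the class names and behaviors are mine, not from any real elevator design):

```python
class ButtonPanel:
    def press(self, floor):
        return floor  # stand-in behavior for the example

class Motor:
    def move_to(self, floor):
        return floor  # stand-in behavior for the example

class Elevator:
    # has-a: the elevator *owns* a panel and a motor rather than
    # inheriting from them -- it isn't a button panel or a motor.
    def __init__(self):
        self.panel = ButtonPanel()
        self.motor = Motor()

    def request(self, floor):
        return self.motor.move_to(self.panel.press(floor))

print(Elevator().request(3))  # 3
```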