Let's say A keeps tossing a fair coin until he gets 2 consecutive heads; define X to be the number of tosses in this process. B keeps tossing another fair coin until he gets 3 consecutive heads; define Y to be the number of tosses in this process.

1) Calculate P{X > Y}.
2) What is the expected value of X?
3) What is the expected value of Y?

This is probably the hardest puzzle I have ever put on my blog. Hence, I will post its solution directly in the post rather than in a comment.

Solution: 1) (Solved by me finally after 13 months :)) Make a state diagram. Let the state be (a, b), where a is the length of the consecutive-heads streak A is currently on and b is the length of the streak B is currently on. So (0,3) and (1,3) are final accepting states, and (2,0), (2,1), (2,2), (2,3) are failure states. Whenever you get tails, your contribution to the state resets to 0.

f(State) = P(X > Y | "State" configuration initially)
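A minimal sketch of this state-based computation, under the assumption that both coins are tossed in lockstep (one toss of each per round), which is what makes X and Y comparable round by round; ties (X = Y) count as failure for X > Y. The function names are mine. The expected values use the standard recurrence e(a) = 1 + (e(a+1) + e(0))/2 for a streak of length a, with e(n) = 0:

```python
from fractions import Fraction

def expected_tosses(n):
    """Expected number of fair-coin tosses until n consecutive heads.

    e(a) = 1 + (e(a+1) + e(0)) / 2 for a < n, with e(n) = 0.
    Write e(a) = A + B * e(0), fold from a = n down to a = 0,
    then solve e(0) = A + B * e(0) exactly.
    """
    A, B = Fraction(0), Fraction(0)            # e(n) = 0
    for _ in range(n):
        A, B = 1 + A / 2, Fraction(1, 2) + B / 2
    return A / (1 - B)

def p_x_greater_y(rounds=2000):
    """P(X > Y) by fixed-point iteration on f(a, b) = P(X > Y | streaks a, b).

    Transient states: a in {0, 1} (A's streak), b in {0, 1, 2} (B's streak).
    Each round both players toss once; each of the four outcomes has
    probability 1/4.
    """
    f = {(a, b): 0.0 for a in range(2) for b in range(3)}
    for _ in range(rounds):
        g = {}
        for a, b in f:
            total = 0.0
            for ha in (True, False):           # A's toss: heads / tails
                for hb in (True, False):       # B's toss: heads / tails
                    na = a + 1 if ha else 0
                    nb = b + 1 if hb else 0
                    if na == 2:                # A finishes now: X <= Y, contributes 0
                        pass
                    elif nb == 3:              # B finishes first: X > Y, contributes 1
                        total += 1.0
                    else:
                        total += f[(na, nb)]
            g[(a, b)] = total / 4.0
        f = g
    return f[(0, 0)]
```

With the closed form E = 2^(n+1) - 2, this gives E[X] = 6 and E[Y] = 14; `p_x_greater_y()` returns the answer to part 1 numerically.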

O(N^2) is trivial.

I propose an O(N) solution.

First, look at the array with each 1 kept as 1 and each 0 replaced by -1. Our objective is then to find the longest contiguous subsequence whose sum is 0.

Now let's keep a HashMap which stores pairs of the form (S, i), meaning that the prefix [1...i] of the sequence has a sum of S. The map is indexed by S.

Also keep a rolling variable for the sum seen so far. Suppose you reach j and the sum of [1..j] is S. Look in the map for an entry for S. If there is one, say (S, i) (i.e., the prefix [1...i] had sum S), then [i+1...j] is a zero-sum sequence. Update your answer with its length and move ahead.

If S was not in the map, put the pair (S, j) in the map and keep moving.

You take exactly N steps, and at each step you do constant work, so this is an O(N) solution. If you don't want to use a HashMap, use a balanced tree instead to get O(N log N) time.
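The steps above might look like this in Python (the function name and the 0/1 input convention are my own):

```python
def longest_equal_01(arr):
    """Longest subarray with equally many 0s and 1s.

    Map 0 -> -1 and keep 1 as 1; the task becomes finding the longest
    subarray with sum 0.  first_seen[S] remembers the earliest prefix
    index whose running sum is S.
    """
    first_seen = {0: 0}                # the empty prefix has sum 0
    total, best = 0, 0
    for j, x in enumerate(arr, start=1):
        total += 1 if x == 1 else -1
        if total in first_seen:
            # prefixes [1..i] and [1..j] share sum S, so [i+1..j] sums to 0
            best = max(best, j - first_seen[total])
        else:
            first_seen[total] = j
    return best
```

For example, `longest_equal_01([1, 0, 1, 1, 0, 0, 1])` finds the length-6 prefix with three 1s and three 0s.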

Correct solution, but I don't see any reason to use a hash map.

You are saying that checking for a duplicate in a hashmap is O(1). To be precise, if the number of buckets you have is k, then your algorithm is O(n) * O(n/k).

Use an O(n)-size array instead. Then you have the algorithm in O(n) space and O(n) time.
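A sketch of the array-indexed variant (my own naming): since the prefix sums of an array of n entries from {+1, -1} lie in [-n, n], an array of size 2n + 1 with an offset of n can replace the hash map.

```python
def longest_equal_01_array(arr):
    """Array-indexed version of the prefix-sum trick.

    Prefix sums lie in [-n, n], so first_seen has 2n + 1 slots and
    sum S is stored at index S + n.
    """
    n = len(arr)
    UNSEEN = -1
    first_seen = [UNSEEN] * (2 * n + 1)
    first_seen[n] = 0                  # sum 0 occurs at the empty prefix
    total, best = 0, 0
    for j, x in enumerate(arr, start=1):
        total += 1 if x == 1 else -1
        if first_seen[total + n] != UNSEEN:
            best = max(best, j - first_seen[total + n])
        else:
            first_seen[total + n] = j
    return best
```

This trades the hash map's average-case O(1) lookups for guaranteed O(1) array indexing, at the cost of always allocating 2n + 1 slots.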

In the algorithm using a tree instead of a hashmap, you take O(n) space but O(n log n) time.

In fact, better than the tree algorithm would be stable in-place sorting. This would take O(1) space and O(n log n) time. :)

Cheers! Hoping all my statements are correct. Check! :)

Indeed. You are correct, as always :)

I proposed HashMaps because most modern languages today provide easy-to-use O(1) average-access-time maps (and because I am strongly biased toward short, implementable solutions :D).

Of course, in theory, as you correctly pointed out, this is not the case. We can certainly use an O(N)-size array to index into the sums, as the range of possible sums is only [-N, N].

What I don't get is how to solve the problem with O(1) extra storage using sorting. What I think is: you propose to store pairs of the form (S, i) for all prefixes [1..i] of the sequence and sort on the first key, breaking ties by the second. Then all candidate solutions appear consecutively, so we can find the solution in a single scan.
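The sorting variant described here could be sketched as follows (my naming; note that, as the thread goes on to conclude, storing the (S, i) pairs already costs O(n) extra space):

```python
def longest_zero_sum_by_sorting(arr):
    """Sorting variant of the prefix-sum idea.

    Record (prefix_sum, index) for every prefix, sort by sum breaking
    ties by index; equal sums then sit in consecutive runs, and within
    each run the first and last indices are the farthest apart.
    """
    pairs = [(0, 0)]                   # the empty prefix
    total = 0
    for j, x in enumerate(arr, start=1):
        total += 1 if x == 1 else -1
        pairs.append((total, j))
    pairs.sort()

    best, run_start = 0, 0
    for k in range(1, len(pairs)):
        if pairs[k][0] != pairs[run_start][0]:
            run_start = k              # a new run of equal sums begins
        else:
            best = max(best, pairs[k][1] - pairs[run_start][1])
    return best
```

It gives the same answers as the hash-map version in O(n log n) time.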

But this is O(N) extra storage.

Aah, I missed that. Since we need to store the pairs (S, i), the algorithm would still be O(n) space. Thanks.
