I learned a valuable lesson today: one billion is a huge number that should never be considered an acceptable dataset size for a sequential search. Possibly the dumbest thing I've done this week was decide it would be a good idea to look through a billion pieces of data one by one to solve a programming problem. To get an idea of how infernally stupid that was, pick a number from zero to one billion. I am now going to guess your number the same way my program did.
Is it one?
Is it two?
Is it three?
…
Is it four million, three hundred twenty-four thousand, six hundred and five?
…
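That one-by-one questioning is just a linear scan. Here's a minimal sketch in Python; `data` and `target` are hypothetical stand-ins for whatever my program was actually chewing through. In the worst case it asks a billion questions.

```python
def linear_search(data, target):
    # Ask "is it this one?" for every item, front to back.
    for i, value in enumerate(data):
        if value == target:
            return i   # found it, after checking i + 1 items
    return -1          # checked every single item and it wasn't there
```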
You see the idiocy? Now consider the smart way. I bet I can guess your number in 30 questions or fewer, because halving a range of one billion 30 times exhausts it (2^30 is about 1.07 billion). Pretend we picked 103,456; a code sketch of this approach follows the list below.
Is it greater or less than…
1: 500 million? less
2: 250 million? less
3: 125 million? less
4: 62,500,000? less
5: 31,250,000? less
6: 15,625,000? less
7: 7,812,500? less
8: 3,906,250? less
9: 1,953,125? less
10: 976,562? less
11: 488,281? less
12: 244,140? less
13: 122,070? less
14: 61,035? greater
15: 91,553? greater
16: 106,812? less
17: 99,182? greater
18: 102,966? greater
19: 104,904? less
20: 103,934? less
21: 103,449? greater
22: 103,692? less
23: 103,570? less
24: 103,509? less
25: 103,479? less
26: 103,464? less
27: 103,456? equal
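In code, the same game looks like this. A minimal sketch, assuming the data is sorted (the numbers zero to one billion conveniently are); each pass through the loop is one question, and every question throws away half the remaining candidates, which is why the count grows with log2(n) instead of n.

```python
def binary_search(data, target):
    lo, hi = 0, len(data) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # "Is it greater or less than the midpoint?"
        if data[mid] == target:
            return mid         # "equal" -- found it
        elif data[mid] < target:
            lo = mid + 1       # "greater" -- throw away the lower half
        else:
            hi = mid - 1       # "less" -- throw away the upper half
    return -1                  # the range closed up; it isn't there
```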
That definitely seems like an improvement, and my program agreed. Last time it ran for 15 minutes before I gave up and killed it, still far from an answer. This time it took 1.042 seconds.
Lesson Learned.