I am wondering why we use the length of the binary representation of the input as the input size in algorithm analysis. I understand that computers represent everything in binary, and that would be a reason for this practice. But it seems to complicate things and, in my mind, gives less accurate results.
Take primality testing, for example. If the binary string is 8 digits long, it covers a huge range of numbers, yet the complexity is considered the same for all of them.
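To make the question concrete, here is a minimal sketch of a naive trial-division primality test (my own illustration, not any particular textbook's algorithm), which shows why the bit-length matters: the loop count grows like the square root of the value, which is exponential in the number of bits.

```python
def is_prime(n: int) -> bool:
    """Trial division: check every candidate divisor up to sqrt(n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:      # roughly sqrt(n) iterations in the worst case
        if n % d == 0:
            return False
        d += 1
    return True

# An input with b bits can be as large as 2^b - 1, so trial division can
# take about 2^(b/2) iterations: an 8-bit input needs at most ~15 trial
# divisions, while a 64-bit input can need ~4 billion. Measured against
# the bit-length b, this "simple" algorithm is exponential.
```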
I guess I'm just a little confused. If someone can offer up an explanation, that would be great.