First, we find a k such that f(k) < 0.
Then we can find n by a simple binary search.
1. Finding k
Let k0 be some small constant, for example 1.
Check whether f(k0) < 0; if not, double k0 and repeat.
The final k0 is k.
Conversely, if f(0) < 0, we can find a k < 0 such that f(k) > 0 in the same way, by doubling a negative k0.
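The doubling loop above can be sketched as follows. The function f here is a hypothetical example (f(x) = 100 - x, a decreasing function with root 100); the real f comes from your problem.

```python
def find_bracket(f, k0=1):
    """Phase 1: exponential search.

    Double k0 until f(k0) < 0. Assumes f is monotonically
    decreasing with f(0) > 0, so a sign change exists.
    """
    k = k0
    while f(k) >= 0:
        k *= 2
    return k

# Hypothetical example: f(x) = 100 - x, root at x = 100.
f = lambda x: 100 - x
print(find_bracket(f))  # 128: the first power of two where f(k) < 0
```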
2. Finding n
Since now f(k) * f(0) < 0, a single binary search over the interval between 0 and k finds n.
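Phase 2 is ordinary bisection on the bracket from phase 1. This sketch assumes n is a real root and stops at a tolerance eps; for an integer n you would bisect on integers instead.

```python
def bisect_root(f, lo, hi, eps=1e-9):
    """Phase 2: binary search for the root of a monotone f,
    given that f(lo) and f(hi) have opposite signs."""
    while hi - lo > eps:
        mid = (lo + hi) / 2
        # Keep the half of the interval where the sign change remains.
        if (f(mid) < 0) == (f(hi) < 0):
            hi = mid
        else:
            lo = mid
    return lo

# Hypothetical example continued: f(x) = 100 - x, bracket [0, 128].
root = bisect_root(lambda x: 100 - x, 0, 128)  # close to 100
```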
Time complexity:
Step 1 takes O(log n) time: the doubling loop runs O(log k) times, and since n <= k < 2n, that is O(log n) as well.
Step 2 also takes O(log n) time; this is the standard binary-search bound, and the proof is left as an exercise.
Phase 1 can be made faster, or skipped entirely, under certain conditions. For example, if n has a known upper bound (such as the largest value a double or long can hold), phase 1 is unnecessary. Alternatively, the trial value can grow faster than doubling, for example k0 = i^i at step i, giving 1, 4, 27, 256, ... In this case k0 grows much more quickly.
However, this does not change the complexity of phase 2, so the total time complexity is still O(log n).
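The faster-growing variant of phase 1 is a one-line change to the loop; the example f is again the hypothetical f(x) = 100 - x.

```python
def find_bracket_fast(f):
    """Phase 1 variant: grow the trial bound as i**i
    (1, 4, 27, 256, ...) instead of doubling it."""
    i = 1
    while f(i ** i) >= 0:
        i += 1
    return i ** i

# Hypothetical example: f(x) = 100 - x.
print(find_bracket_fast(lambda x: 100 - x))  # 256 = 4**4, first i**i with f < 0
```

The trade-off is that the returned bracket can overshoot n by much more than a factor of 2, which is why the overall complexity is unchanged.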
To reduce the running time further you need a completely different algorithm than binary search, for example Newton's method, which finds the root of the equation in O(log n * F(n)) time, where F(n) is the cost of evaluating f'(n). The asymptotic complexity looks the same, but the constant factor is much smaller.
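A minimal sketch of Newton's method, assuming f is differentiable and you can supply f' explicitly; the example function f(x) = x^2 - 100 and starting point are hypothetical.

```python
def newton(f, fprime, x0, eps=1e-9, max_iter=100):
    """Newton's method: iterate x <- x - f(x) / f'(x).

    Converges very quickly (quadratically) near a simple root
    of a smooth f, provided f'(x) stays away from zero.
    """
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < eps:
            break
    return x

# Hypothetical example: f(x) = x^2 - 100, f'(x) = 2x, root at 10.
root = newton(lambda x: x * x - 100, lambda x: 2 * x, x0=1.0)
```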