Well, what I mean is: when you declare k as an integer, in most programming languages you can still write something like "k = k + f;" where f is a float or double, say f = 0.1. Mathematically the result should be k = 1.1, but k is an integer, so the language applies a conversion along the lines of "k = (int)(k + f)", which truncates the fractional part toward zero. So as long as k >= 0 and 0 <= f < 1, k stays unchanged. (Strictly speaking, Java rejects the plain assignment "k = k + f" as a possible lossy conversion; you need the explicit cast, or the compound form "k += f", which inserts the cast implicitly. C performs the truncating conversion silently.)
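A minimal Java sketch of this truncation behavior (class and variable names are just for illustration); note that the compound assignment compiles where the plain form would not:

```java
public class IntPlusFloat {
    public static void main(String[] args) {
        int k = 1;
        float f = 0.1f;

        // k = k + f;       // does not compile in Java: possible lossy conversion

        k += f;             // compiles: compound assignment implies an (int) cast
        System.out.println(k);  // prints 1: 1.1f is truncated toward zero

        k = (int) (k + f);  // the explicit form of the same truncation
        System.out.println(k);  // still prints 1
    }
}
```

Because the fractional part is dropped on every assignment, a loop that repeatedly does `k += f` with `0 <= f < 1` will never advance k, which is exactly the kind of silent non-termination hazard described above.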
This only matters in practice when you run the program in an actual language such as Java. If you mean a purely theoretical discussion, there is no such problem; n<100 will not cause any trouble.