+1 vote
public class test2 {
    public static void main(String[] args) {
        long start = System.nanoTime();
        double j = 2;
        double n = 100000000;
        int count1 = 0;
        int count3 = 0;
        while (j < n) {
            double k = 2;
            int count2 = 0;
            count1++;
            //System.out.println("first while loop for " + count1 + " times");
            while (k < n) {
                k = k * k;
                count2++;
                count3++;
            }
            j += j / 2;
            System.out.println("second while loop for " + count2 + " times");
        }
        System.out.println("total loop for " + count3 + " times");
        long end = System.nanoTime();
        System.out.println("TimeCost: " + (end - start) + " ns");
    }
}

This is the program I wrote to test its execution time. I added System.nanoTime() calls to record the time. But when the loop counts are not very large, say 50 or 200 iterations (obtained by changing the value of n), the measured time for 200 iterations is sometimes shorter than that for 50. However, when the count grows to something like 10000000, it clearly takes longer. I wonder if there is another way to measure the execution time of a program more accurately.
in Asymptotic Analysis by (180 points)

3 Answers

+1 vote
From my previous experience with similar projects, I have found that this is quite common for small n values, because of things such as process scheduling. It may have happened that on the n = 50 run your process was made to wait. Furthermore, I don't think you should worry about this for small inputs, because we are looking at behaviour "asymptotically" (towards huge numbers), so small inputs are insignificant at that scale. In general, you should see a pretty decent approximation of the theoretical model.
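
If you still want steadier numbers for small n, a common trick is to repeat the measurement several times, discard a few warm-up runs (so the JIT compiler has settled), and report the best or median time. A minimal sketch of that idea, where runOnce is just a stand-in for the loops in the original program and the warm-up/repeat counts are arbitrary:

public class TimingHarness {
    // Stand-in for the nested loops being measured; replace with your own code.
    static long runOnce(double n) {
        long count = 0;
        for (double j = 2; j < n; j += j / 2) {
            for (double k = 2; k < n; k = k * k) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        double n = 100000000;
        int warmups = 5, repeats = 10;

        // Warm-up runs give the JIT compiler a chance to optimize the code first.
        for (int i = 0; i < warmups; i++) {
            runOnce(n);
        }

        long best = Long.MAX_VALUE;
        for (int i = 0; i < repeats; i++) {
            long start = System.nanoTime();
            runOnce(n);
            long elapsed = System.nanoTime() - start;
            best = Math.min(best, elapsed); // the minimum is least affected by scheduling noise
        }
        System.out.println("Best of " + repeats + " runs: " + best + " ns");
    }
}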
by (128 points)
+1 vote

The actual running time of a program can vary for many reasons, such as the share of CPU time the process is given and the thread or process scheduling of the operating system. There are also other operations, such as system input/output and memory allocation, that are affected by the overall load on your PC, which makes the execution time look "random". That is to say, when n is really small, many unpredictable factors affect the measured time. However, when we study algorithms we do not care about this "extra" execution time, but about how the running time grows as the data scale grows.

For example, a program that runs in 5000 + n steps would be considered better than a program that runs in 200 + n^2 steps, even though the second program takes less time when n is small.
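
To make the crossover concrete, here is a small sketch (the class name is made up and the chosen values of n are arbitrary) that prints both cost functions for a few input sizes; around n ≈ 70 the quadratic program overtakes the linear one:

public class GrowthComparison {
    public static void main(String[] args) {
        // Compare the two cost functions from the example: 5000 + n versus 200 + n^2.
        long[] sizes = {1, 10, 70, 100, 1000};
        for (long n : sizes) {
            long linear = 5000 + n;
            long quadratic = 200 + n * n;
            System.out.println("n = " + n + ": 5000 + n = " + linear
                    + ", 200 + n^2 = " + quadratic);
        }
    }
}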

by (212 points)
Thank you very much!
0 votes
Since you know that the bigger n is, the better, you can make it as big as possible. Maybe you can also try a few small values and then look for the pattern. Hahaha
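
For instance, one way to "look for the pattern" with the code from the question is to sweep over several values of n and print the total inner-loop count for each; a small sketch (the class name and the chosen values of n are arbitrary):

public class GrowthTable {
    public static void main(String[] args) {
        // Run the loops from the question for several values of n and
        // print the total inner-loop count, to make the growth pattern visible.
        double[] sizes = {1e2, 1e4, 1e6, 1e8};
        for (double n : sizes) {
            int count3 = 0;
            double j = 2;
            while (j < n) {
                double k = 2;
                while (k < n) {
                    k = k * k;
                    count3++;
                }
                j += j / 2;
            }
            System.out.println("n = " + n + " -> total inner iterations: " + count3);
        }
    }
}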
by AlgoMeister (964 points)
