Examples on Finding Time Complexity
Example 1: The complexity of this program is O(n^2).

1. scanf("%d", &n);                  >> O(1), because it executes only once for the whole program.
2. for (i = 0; i <= m+n; i += 2)     >> This loop runs O(m+n) times, which is nothing but O(n') if we assume n' is quite large.
3. printf("%d\n", i-1);              >> Again O(1) per iteration.
4. for (j = m*n/100; j <= m*n; j++)  >> This loop runs on the order of m*n times, which makes it O(n*n), i.e. O(n^2).
5. printf("%d\n", j);                >> O(1) per iteration.

So now: 2*O(1) + O(n) + O(n*n) ~= O(n*n), so the complexity of this program is O(n^2).
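A minimal compilable sketch of the fragment above, assuming both m and n are read from input (the original only shows the scanf for n) and that the printf calls are the loop bodies:

#include <cstdio>

int main() {
    int m, n;
    scanf("%d %d", &m, &n);                      // O(1): executes once
    for (int i = 0; i <= m + n; i += 2)          // O(m + n) iterations
        printf("%d\n", i - 1);                   // O(1) per iteration
    for (int j = m * n / 100; j <= m * n; j++)   // about m*n iterations: the dominant term
        printf("%d\n", j);                       // O(1) per iteration
    return 0;
}

When m and n are of comparable size, the second loop dominates and the total work is O(n^2).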
Example 2:
i = 0;
// first row
if (board[i][0] == win && board[i][1] == win && board[i][2] == win)
    return win;
// second row
if (board[i+1][0] == win && board[i+1][1] == win && board[i+1][2] == win)
    return win;
// third row
if (board[i+2][0] == win && board[i+2][1] == win && board[i+2][2] == win)
    return win;
// first col
if (board[0][i] == win && board[1][i] == win && board[2][i] == win)
    return win;
// second col
if (board[0][i+1] == win && board[1][i+1] == win && board[2][i+1] == win)
    return win;
// third col
if (board[0][i+2] == win && board[1][i+2] == win && board[2][i+2] == win)
    return win;
// first diag
if (board[i][i] == win && board[i+1][i+1] == win && board[i+2][i+2] == win)
    return win;
// second diag
if (board[i+2][i] == win && board[i+1][i+1] == win && board[i][i+2] == win)
    return win;
This is obviously a trap question to see if you understood the concept of time complexity. It runs in constant time, i.e. O(1), assuming board[M][N] is a two-dimensional array: there are no iterations and no recursion. Hard-coding every check like this is not considered good coding practice, though; you can use a loop to generalize it, as in the sketch below.

As written, it is O(1) because there are no variable facets to it: it will always take the same time to execute. If you put it in a loop where i goes from 0 to n-1, then it becomes O(n), i.e. linear complexity; if you double the size of n, you approximately double the execution time.
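One possible way to generalize the checks with loops, as suggested above (a sketch; the function name check_winner, the fixed 3x3 board size, and passing the winning symbol as win are assumptions for illustration):

// Returns win if any row, column, or diagonal consists entirely of win, else 0.
char check_winner(char board[3][3], char win) {
    for (int r = 0; r < 3; r++)          // rows
        if (board[r][0] == win && board[r][1] == win && board[r][2] == win)
            return win;
    for (int c = 0; c < 3; c++)          // columns
        if (board[0][c] == win && board[1][c] == win && board[2][c] == win)
            return win;
    if (board[0][0] == win && board[1][1] == win && board[2][2] == win)
        return win;                      // main diagonal
    if (board[2][0] == win && board[1][1] == win && board[0][2] == win)
        return win;                      // anti-diagonal
    return 0;                            // no winner
}

For a fixed 3x3 board this is still O(1); the loops only make the code shorter and less error-prone.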
Example 3:

void fn(int n) {
    double d;
    int i = 0;
    int j = n;
    while (i < j) {
        i++;
        d = i*i + (j - .25)*(j - .25) - n*n;
        if (d > 0)
            j--;
    }
}

(d is declared as double here so that the d > 0 test uses the untruncated value, which is what the trace below assumes.)

We have O(n) complexity in this case: i increases by 1 on every iteration, so the loop can run at most n times. Let's trace n = 4. Start: i = 0, j = 4.
1st iteration: i = 1, j = 4 (d < 0, so j is unchanged)
2nd iteration: i = 2, j = 3 (d > 0 is satisfied here, so j is decremented)
3rd iteration: i = 3, j = 2 (d > 0 again)
Now i >= j, so the loop stops. The number of iterations grows linearly with n, so this is O(n) complexity.
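A quick way to check the linear growth empirically (a sketch; the iteration counter and the driver loop are additions made only for measurement):

#include <cstdio>

static int fn_iterations(int n) {
    double d;
    int i = 0, j = n;
    int count = 0;
    while (i < j) {
        i++;
        d = i * i + (j - .25) * (j - .25) - (double)n * n;
        if (d > 0)
            j--;
        count++;                        // count loop iterations
    }
    return count;
}

int main() {
    for (int n = 4; n <= 64; n *= 2)
        printf("n = %2d -> %d iterations\n", n, fn_iterations(n));
    return 0;
}

Doubling n roughly doubles the iteration count, which is consistent with O(n).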
Example 4:
int Sum = 0;
int j;
for (j = 0; j < i; j++)
    Sum++;
cout << Sum << endl;
i--;
Here, int Sum = 0; takes one operation; for (j = 0; j < i; j++) Sum++; takes 3i + 2 operations; cout << Sum << endl; takes one operation (or two operations, depending on how you look at it, but the whole thing takes a constant amount of time anyway); and i--; takes one operation. So that's a total of 1 + (3i + 2) + 1 + 1 = 3i + 5 operations. Taking i to be on the order of n, the time complexity is O(n).

Example 5:

x = 0;
for (i = 1; i < pow(2, N); i = 2*i + 2) {
    for (j = 1; j < N; j++) {
        x = x + j;
    }
}

If N = 7, the outer loop index evolves like this:

i (in) = 1     1 < 128    i (out) = 4     (2*1 + 2)
i (in) = 4     4 < 128    i (out) = 10    (2*4 + 2)
i (in) = 10    10 < 128   i (out) = 22    (2*10 + 2)
i (in) = 22    22 < 128   i (out) = 46    (2*22 + 2)
i (in) = 46    46 < 128   i (out) = 94    (2*46 + 2)
i (in) = 94    94 < 128   i (out) = 190   (2*94 + 2)

So when we pass N = 7, we enter the i loop 6 times, that is (N-1) times. (In general the update i = 2*i + 2 roughly doubles i each pass, so it takes about log2(2^N) = N passes to reach the bound pow(2, N).)

[1] The cost of x = 0; is 1.
[2] The cost of the i loop is (N-1) + 1 (here the +1 is for the test that finally exits the i loop).
[3] The cost of the j loop is (N-1)(N-1) + 1.
[4] The cost of x = x + j is (N-1)(N-1).

Total = 1 + (N-1) + 1 + (N-1)(N-1) + 1 + (N-1)(N-1). The highest-order term is N^2, from (N-1)(N-1), so the time complexity is O(N^2).
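A small sketch to verify those counts for several values of N (the iteration counters are additions for measurement only, and pow(2, N) is replaced by the integer bound 1LL << N, which I assume matches the intent):

#include <cstdio>

int main() {
    for (int N = 4; N <= 10; N++) {
        long long x = 0;
        int outer = 0, inner = 0;
        long long limit = 1LL << N;                  // 2^N, the same bound as pow(2, N)
        for (long long i = 1; i < limit; i = 2 * i + 2) {
            outer++;
            for (int j = 1; j < N; j++) {            // runs N-1 times per outer pass
                x = x + j;
                inner++;
            }
        }
        printf("N = %2d: outer = %d, inner = %d\n", N, outer, inner);
    }
    return 0;
}

The outer count comes out as N-1 and the inner count as (N-1)*(N-1), matching the O(N^2) estimate.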
Example 6:

i = n;
while (i > 0) {
    x = x + 2;
    i = i / 2;
}

(The condition is written as i > 0 here: with i >= 0 and integer division the loop would never terminate, because 0/2 stays 0.) Since i is halved on every iteration, the loop runs about log2(n) times, so the complexity is O(log n).
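A short sketch that counts the iterations for a few values of n (the counter and the driver loop are additions for illustration):

#include <cstdio>

int main() {
    for (int n = 1; n <= 1024; n *= 4) {
        int x = 0, count = 0;
        int i = n;
        while (i > 0) {         // halving loop from Example 6
            x = x + 2;
            i = i / 2;
            count++;
        }
        printf("n = %4d -> %d iterations\n", n, count);
    }
    return 0;
}

Multiplying n by a constant factor only adds a constant number of iterations, which is the signature of O(log n).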
Example 7:
for (int i = 0; i < N; i++)
{
    if (arr[i] != invalidChar)
    {
        arr[ptr] = arr[i];
        ptr++;
    }
}
int i = 0;                    >> executed only once (the cost counted here is the assignment i = 0, not the declaration)
i < N                         >> executed N + 1 times
i++                           >> executed N times
if (arr[i] != invalidChar)    >> executed N times
arr[ptr] = arr[i];            >> executed N times (in the worst-case scenario)
ptr++;                        >> executed N times (in the worst-case scenario)
Note: I considered the worst-case scenario and am calculating the worst-case time complexity for the above code. So the number of operations required by this loop is
{1 + (N+1) + N} + N + N + N = 5N + 2, which is O(N).
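The loop packs the characters that are not invalidChar toward the front of the array. A self-contained sketch of how it might be used (the function name compact, the '#' marker, and the driver are assumptions for illustration):

#include <cstdio>

// Shifts every character that is not invalidChar to the front of arr
// and returns the new logical length. One pass over the array: O(N).
int compact(char arr[], int N, char invalidChar) {
    int ptr = 0;
    for (int i = 0; i < N; i++)
    {
        if (arr[i] != invalidChar)
        {
            arr[ptr] = arr[i];
            ptr++;
        }
    }
    return ptr;
}

int main() {
    char arr[] = {'a', '#', 'b', '#', 'c'};
    int len = compact(arr, 5, '#');
    for (int k = 0; k < len; k++)
        putchar(arr[k]);                // prints: abc
    putchar('\n');
    return 0;
}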
What is the worst-case complexity of each of the following code fragments?

1. Two loops in a row:
for (i = 0; i < N; i++) {
    sequence of statements
}
for (j = 0; j < M; j++) {
    sequence of statements
}
How would the complexity change if the second loop went to N instead of M?

2. A nested loop followed by a non-nested loop:
for (i = 0; i < N; i++) {
    for (j = 0; j < N; j++) {
        sequence of statements
    }
}
for (k = 0; k < N; k++) {
    sequence of statements
}
3. A nested loop in which the number of times the inner loop executes depends on the value of the outer loop index:
for (i = 0; i < N; i++) {
    for (j = N; j > i; j--) {
        sequence of statements
    }
}
ANS:
1. The first loop is O(N) and the second loop is O(M). Since you don't know which is bigger, you say this is O(N+M). This can also be written as O(max(N, M)). In the case where the second loop goes to N instead of M, the complexity is O(N). You can see this from either expression above: O(N+M) becomes O(2N), and when you drop the constant it is O(N); O(max(N, M)) becomes O(max(N, N)), which is O(N).

2. The first set of nested loops is O(N^2) and the second loop is O(N). This is O(max(N^2, N)), which is O(N^2).

3. This is very similar to our earlier example of a nested loop where the number of iterations of the inner loop depends on the value of the index of the outer loop. The only difference is that in this example the inner-loop index is counting down from N to i+1. It is still the case that the inner loop executes N times, then N-1, then N-2, and so on, so the total number of times the innermost "sequence of statements" executes is N + (N-1) + ... + 1 = N(N+1)/2, which is O(N^2).
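For answer 3, a small sketch that counts how many times the innermost statement runs (the counter and the driver loop over N are additions for illustration):

#include <cstdio>

int main() {
    for (int N = 1; N <= 5; N++) {
        long long count = 0;
        for (int i = 0; i < N; i++) {
            for (int j = N; j > i; j--) {
                count++;                // stands in for "sequence of statements"
            }
        }
        printf("N = %d -> %lld executions (N*(N+1)/2 = %d)\n",
               N, count, N * (N + 1) / 2);
    }
    return 0;
}

The count matches N*(N+1)/2 exactly, confirming the O(N^2) growth.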