
DS PRACTICE SET

01 What are linear and non-linear data structures? Differentiate them with
examples.

Ans 01- Data structures are a fundamental aspect of computer science, used to organize and store data
efficiently. They can generally be categorized into linear and non-linear data structures, based on how they
organize data.

### Linear Data Structures

**Definition**: In linear data structures, data elements are organized in a sequential manner, and each element is
connected to its previous and next element. This structure allows for a single level of data organization.

**Characteristics**:

- Each element has a unique predecessor and successor (except the first and last elements).

- Data elements can be traversed in a single run, either forward or backward.

- Memory allocation can be contiguous (arrays) or linked (linked lists).

**Examples**:

1. **Arrays**:

- A collection of elements identified by index or key, stored in contiguous memory locations.

- **Example**: An array of integers: `[1, 2, 3, 4, 5]`.

2. **Linked Lists**:

- A collection of nodes where each node contains data and a reference (link) to the next node.

- **Example**: A singly linked list: `A -> B -> C -> D`.

3. **Stacks**:

- A collection of elements that follows the Last In First Out (LIFO) principle. The last element added is the first to
be removed.

- **Example**: A stack of plates, where you can only add or remove the top plate.
4. **Queues**:

- A collection of elements that follows the First In First Out (FIFO) principle. The first element added is the first to
be removed.

- **Example**: A line of customers at a checkout, where the first customer in line is the first to be served.
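The linear structures above can be sketched in a few lines of Python. This is an illustrative sketch (not part of the original answer), using a plain list as a stack and `collections.deque` as a queue:

```python
from collections import deque

# Array: contiguous, index-based access
arr = [1, 2, 3, 4, 5]

# Stack (LIFO): push and pop happen at the same end
stack = []
stack.append("plate1")
stack.append("plate2")
top = stack.pop()          # "plate2" -- last in, first out

# Queue (FIFO): enqueue at the back, dequeue from the front
queue = deque()
queue.append("customer1")
queue.append("customer2")
first = queue.popleft()    # "customer1" -- first in, first out
```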

### Non-Linear Data Structures

**Definition**: In non-linear data structures, data elements are organized hierarchically or in a more complex
manner, and elements may have multiple relationships with other elements. This structure allows for greater
complexity and can represent relationships between data more effectively.

**Characteristics**:

- Elements may not have a unique predecessor or successor.

- Data cannot be traversed in a single run; multiple paths can be taken.

- Memory allocation does not follow a contiguous pattern.

**Examples**:

1. **Trees**:

- A hierarchical data structure consisting of nodes, with a single root node at the top and several levels of
additional nodes (children) below it.

- **Example**: A binary tree where each node has at most two children:

```
    A
   / \
  B   C
 / \
D   E
```

2. **Graphs**:
- A collection of nodes (vertices) and edges (connections) that can represent various relationships between
elements. In graphs, nodes can be connected in multiple ways.

- **Example**: A social network where each user is a node and friendships are edges:

```
A -- B
|    |
C -- D
```
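The square graph above can be written as an adjacency list, a common in-memory representation for graphs. A minimal sketch (the dictionary layout is illustrative):

```python
# Adjacency-list representation of the graph A--B, A--C, B--D, C--D
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C"],
}

# Unlike a linear structure, multiple paths exist between nodes:
# A -> B -> D and A -> C -> D both reach D.
print(graph["A"])  # ['B', 'C']
```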

3. **Hash Tables**:

- A data structure that uses keys to access values. Keys are hashed to produce an index where the value is stored,
allowing for quick lookups.

- **Example**: A dictionary where you can quickly look up a word to retrieve its meaning.
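Python's built-in `dict` is itself a hash table, so the dictionary example can be made concrete (the keys and values below are illustrative):

```python
# A Python dict is a hash table: each key is hashed to locate its value
meanings = {
    "stack": "a LIFO collection",
    "queue": "a FIFO collection",
}
meanings["graph"] = "nodes connected by edges"  # average O(1) insertion

print(meanings["queue"])  # average O(1) lookup
```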

### Differentiation Summary

| Feature | Linear Data Structures | Non-Linear Data Structures |
|-----------------------|--------------------------------------|----------------------------------------------|
| **Organization** | Sequential (one level) | Hierarchical or graph-like |
| **Memory Usage** | Contiguous or linked | Non-contiguous |
| **Element Relations** | Each element has a unique predecessor and successor | Elements can have multiple relationships or references |
| **Traversal** | Single path, simple traversal | Multiple paths, complex traversal |
| **Examples** | Arrays, Stacks, Queues, Linked Lists | Trees, Graphs, Hash Tables |

### Conclusion

In summary, the choice between linear and non-linear data structures depends on the specific requirements of the
application, such as the nature of the data, the relationships between elements, and the types of operations that
need to be performed efficiently. Understanding both types is crucial for effective software development and
algorithm design.
02 What are asymptotic notations? Explain the significance of O-notation
(Big-O), Ω-notation (Big-Omega), and Θ-notation (Big-Theta) with
examples.
Asymptotic notations are mathematical tools used to analyze the efficiency of algorithms in terms of time and space
complexity. They provide a way to express the runtime or space requirement of an algorithm as a function of the size of the
input data, typically denoted as \( n \). The three most common asymptotic notations are:

1. **Big O Notation (O-notation)**

2. **Big Omega Notation (Ω-notation)**

3. **Big Theta Notation (Θ-notation)**

### 1. Big O Notation (O-notation)

**Definition**: Big O notation provides an upper bound on the growth rate of an algorithm's running time (or space
requirements). In other words, it describes the worst-case scenario for an algorithm's complexity.

- Mathematically, we say that an algorithm's runtime \( T(n) \) is \( O(f(n)) \) if there exist positive constants \( c \) and \( n_0
\) such that:

\[

T(n) \leq c \cdot f(n) \quad \text{for all } n \geq n_0

\]

**Significance**: Big O notation allows us to evaluate the maximum time complexity we can expect from an algorithm,
which is important for understanding its scalability.

**Example**:

- For a linear search algorithm that checks each element in an array, the time complexity can be expressed as:

\[

T(n) = n \quad \Rightarrow \quad T(n) = O(n)

\]
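To make the \( O(n) \) bound concrete, here is a small sketch (the `linear_search_count` helper is an illustrative addition, not from the original answer) that counts comparisons; in the worst case all \( n \) elements are examined:

```python
def linear_search_count(arr, target):
    """Return (index, comparisons) for a left-to-right linear scan."""
    comparisons = 0
    for i, value in enumerate(arr):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

# Worst case: target absent, so all n = 100 elements are compared
_, steps = linear_search_count(list(range(100)), -1)
print(steps)  # 100
```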

### 2. Big Omega Notation (Ω-notation)


**Definition**: Big Omega notation gives a lower bound on the growth rate of an algorithm's running time. It describes the
best-case scenario for an algorithm's complexity.

- Formally, we say that an algorithm's runtime \( T(n) \) is \( Ω(g(n)) \) if there exist positive constants \( c \) and \( n_0 \)
such that:

\[

T(n) \geq c \cdot g(n) \quad \text{for all } n \geq n_0

\]

**Significance**: Big Omega notation helps us understand the minimum running time we can expect from an algorithm.

**Example**:

- For binary search, even in the best case (the target happens to be the middle element) at least one comparison is performed, so the running time is bounded below by a constant:

\[

T(n) \geq c \quad \Rightarrow \quad T(n) = Ω(1)

\]

### 3. Big Theta Notation (Θ-notation)

**Definition**: Big Theta notation describes a tight bound on the growth rate of an algorithm's running time. It indicates
that the running time is both \( O(h(n)) \) and \( Ω(h(n)) \) for the same function \( h(n) \).

- Formally, we say that an algorithm's runtime \( T(n) \) is \( Θ(h(n)) \) if there exist positive constants \( c_1, c_2 \) and \(
n_0 \) such that:

\[

c_1 \cdot h(n) \leq T(n) \leq c_2 \cdot h(n) \quad \text{for all } n \geq n_0

\]

**Significance**: Big Theta notation provides a precise asymptotic behavior of an algorithm's complexity, indicating that it
grows at the same rate as \( h(n) \).

**Example**:

- A simple example would be the bubble sort algorithm, which in the average and worst cases has a time complexity of \(
T(n) = n^2 \):
\[

T(n) = Θ(n^2)

\]

### Summary Table

| Notation | Definition | Purpose | Example |
|----------|------------|---------|---------|
| **Big O (O-notation)** | Upper bound: \( T(n) \leq c \cdot f(n) \) | Worst-case analysis | Linear Search: \( O(n) \) |
| **Big Omega (Ω-notation)** | Lower bound: \( T(n) \geq c \cdot g(n) \) | Best-case analysis | Binary Search: \( Ω(1) \) |
| **Big Theta (Θ-notation)** | Tight bound: \( c_1 \cdot h(n) \leq T(n) \leq c_2 \cdot h(n) \) | Exact asymptotic behavior | Bubble Sort: \( Θ(n^2) \) |

### Conclusion

Asymptotic notations (Big O, Big Omega, and Big Theta) are vital in computer science for analyzing the efficiency of
algorithms. They provide insights into the algorithm's performance, allowing developers to choose the most suitable
algorithm for a given task based on the expected input size and resource constraints. Understanding these notations is
crucial for evaluating and comparing algorithms effectively.

03 Find the number of elements in each array for arrays First[ 2:25],
Second[-9:20].
To find the number of elements in each of the given arrays, you simply need to calculate the total
number of integers from the starting value to the ending value of each array, inclusive.

### Array First[2:25]

This notation typically represents the inclusive range starting from 2 to 25.

- The formula for calculating the number of elements in an inclusive range \([a, b]\) is:

\[

\text{Number of elements} = b - a + 1
\]

Applying this formula to the array `First[2:25]`:

- Here, \(a = 2\) and \(b = 25\).

\[

\text{Number of elements} = 25 - 2 + 1 = 24

\]

### Array Second[-9:20]

Using the same formula for the array `Second[-9:20]`, which represents the range from -9 to 20:

- Here, \(a = -9\) and \(b = 20\).

\[

\text{Number of elements} = 20 - (-9) + 1 = 20 + 9 + 1 = 30

\]

### Summary

- **Number of elements in the array First[2:25]:** 24

- **Number of elements in the array Second[-9:20]:** 30

Thus, the final counts are:

- **First[2:25]:** 24 elements

- **Second[-9:20]:** 30 elements
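The inclusive-range formula can be checked directly in Python (the `count_elements` function name is illustrative):

```python
def count_elements(lower, upper):
    # An inclusive range [lower, upper] contains upper - lower + 1 elements
    return upper - lower + 1

print(count_elements(2, 25))   # First[2:25]  -> 24
print(count_elements(-9, 20))  # Second[-9:20] -> 30
```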
04 Explain recursive and iterative binary search and find their
complexity.
Both recursive and iterative approaches are used to implement binary search, an efficient algorithm
for finding an item from a sorted list of items. Let’s go through both methods, explain how they work,
and analyze their time and space complexities.

### 1. Recursive Binary Search

**Definition**: Recursive binary search divides the array into halves and checks if the middle element
is equal to the target value. If not, it recursively searches either the left or right half depending on
whether the target is less than or greater than the middle element.

**Algorithm**:

1. Calculate the middle index of the current array segment.

2. If the middle element equals the target, return the index.

3. If the target is less than the middle element, recursively search the left half.

4. If the target is greater than the middle element, recursively search the right half.

5. If the search segment is empty (i.e., low index exceeds high index), return -1 (target not found).

**Python Example**:

```python
def recursive_binary_search(arr, target, low, high):
    if low > high:
        return -1  # Base case: target not found
    mid = (low + high) // 2
    if arr[mid] == target:
        return mid  # Target found
    elif arr[mid] > target:
        return recursive_binary_search(arr, target, low, mid - 1)  # Search left half
    else:
        return recursive_binary_search(arr, target, mid + 1, high)  # Search right half
```

### 2. Iterative Binary Search

**Definition**: Iterative binary search uses loops to repeatedly narrow down the search range,
eliminating half of the remaining elements on each iteration.

**Algorithm**:

1. Initialize the low and high pointers.

2. While the low index is less than or equal to the high index:

- Calculate the middle index.

- If the middle element equals the target, return the index.

- If the target is less than the middle element, adjust the high pointer to `mid - 1`.

- If the target is greater than the middle element, adjust the low pointer to `mid + 1`.

3. If the search segment is empty, return -1 (target not found).

**Python Example**:

```python
def iterative_binary_search(arr, target):
    low = 0
    high = len(arr) - 1
    while low <= high:
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid  # Target found
        elif arr[mid] > target:
            high = mid - 1  # Search left half
        else:
            low = mid + 1  # Search right half
    return -1  # Target not found
```

### Complexity Analysis

1. **Time Complexity**:

- Both the recursive and iterative binary search algorithms have a time complexity of \( O(\log n) \).
This is because the search space is halved with each step, leading to logarithmic behavior.

2. **Space Complexity**:

- **Recursive Binary Search**: The space complexity is \( O(\log n) \) due to the call stack used by
the recursive function calls. In the worst case, the depth of the recursive calls is logarithmic.

- **Iterative Binary Search**: The space complexity is \( O(1) \) because it uses a fixed amount of
space (for low, high, and mid indices) regardless of the input size and does not employ recursive calls.

### Summary

- **Recursive Binary Search**:

- Time Complexity: \( O(\log n) \)

- Space Complexity: \( O(\log n) \)


- **Iterative Binary Search**:

- Time Complexity: \( O(\log n) \)

- Space Complexity: \( O(1) \)

Both methods efficiently search for an element in a sorted array, but the iterative approach is
generally preferred in practice due to its lower space complexity.
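The logarithmic behavior can be confirmed empirically. The sketch below (the `binary_search_steps` helper and its step counter are illustrative additions) counts loop iterations of an iterative binary search on a 1024-element array; the absent-target worst case takes \( \log_2 1024 = 10 \) iterations:

```python
def binary_search_steps(arr, target):
    """Iterative binary search that also counts loop iterations."""
    low, high, steps = 0, len(arr) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return mid, steps
        elif arr[mid] > target:
            high = mid - 1
        else:
            low = mid + 1
    return -1, steps

data = list(range(1024))                      # n = 1024, log2(n) = 10
index, steps = binary_search_steps(data, -1)  # worst case: target absent
print(index, steps)                           # -1 10
```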

05 Which searching method will you select for the given array
[15, 5, 9, 26, 11, 20]? Justify your answer.
To determine which searching method to select for the array `[15, 5, 9, 26, 11, 20]`, we first need to
consider the characteristics of the array and the search methods available.

### Characteristics of the Array

1. **Unsorted vs. Sorted**: The given array `[15, 5, 9, 26, 11, 20]` is **unsorted**. This fact
significantly impacts the choice of searching algorithm.
2. **Search Objective**: The choice of algorithm also depends on the goal, such as whether you're
searching for a specific value or checking for the existence of that value in the array.

### Searching Methods

1. **Linear Search**:

- **Description**: Linear search sequentially checks each element of the array until the desired
value is found or the end of the array is reached.

- **Time Complexity**: \( O(n) \), where \( n \) is the number of elements in the array.

- **Applicability**:

- Can be applied to both sorted and unsorted arrays.

- No prior arrangement of data is required.

2. **Binary Search**:

- **Description**: Binary search divides the sorted array in half and eliminates half of the search
space each time, significantly speeding up the search process.
- **Time Complexity**: \( O(\log n) \), but it requires that the array is sorted.

- **Applicability**:

- Only applicable to sorted arrays.

### Selection Justification

Given that the array `[15, 5, 9, 26, 11, 20]` is unsorted, **linear search** would be the appropriate
choice for searching a specific element in this array.

#### Reasons:

1. **Unsorted Array**: Since the array is not sorted, binary search cannot be applied.

2. **Implementation**: Linear search is straightforward to implement and does not require


additional steps to sort the array.

3. **Simplicity**: For a small array size (6 elements), the performance difference between linear
search and sorting followed by binary search is negligible in real-time usage. Linear search would be
efficient enough without the overhead of sorting first.

### Example Implementation of Linear Search

Here's a simple Python implementation of linear search to find an element (let's say `20`) in the given
array.

```python
def linear_search(arr, target):
    for index in range(len(arr)):
        if arr[index] == target:
            return index  # Return the index if found
    return -1  # Return -1 if not found

# Example usage
array = [15, 5, 9, 26, 11, 20]
search_target = 20
result = linear_search(array, search_target)

if result != -1:
    print(f"Element {search_target} found at index: {result}")
else:
    print(f"Element {search_target} not found in the array.")
```

### Conclusion

In conclusion, for the unsorted array `[15, 5, 9, 26, 11, 20]`, the **linear search** method is the most
suitable choice due to its applicability to unsorted data.

6. Given the base address of an array B[1300…..1900] as 1020 and size of each
element is 2 bytes in the memory. Find the address of B[1700].
To find the address of a specific element in an array given its base address and the size of each
element, we can use the following formula:

\[

\text{Address}(B[i]) = \text{Base Address} + (i - \text{Base Index}) \times \text{Size of Each Element}

\]

### Given Data:

- **Base Address of the array**: 1020

- **Base Index**: The first element \( B[1300] \)

- **Size of Each Element**: 2 bytes

- **Index for which we want the address**: 1700


### Step-by-step Calculation:

1. **Identify the Base Index**:

- In this case, the base index is 1300.

2. **Calculate the difference between the desired index and the base index**:

\[

i - \text{Base Index} = 1700 - 1300 = 400

\]

3. **Multiply the difference by the size of each element**:

\[

\text{Offset} = 400 \times 2 = 800 \text{ bytes}

\]

4. **Calculate the address of \( B[1700] \)**:

\[

\text{Address}(B[1700]) = \text{Base Address} + \text{Offset}

\]

\[

\text{Address}(B[1700]) = 1020 + 800 = 1820

\]

### Conclusion

The address of \( B[1700] \) is **1820**.
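The address formula generalizes to any base index, which a short sketch makes explicit (the `element_address` function name is illustrative):

```python
def element_address(base, base_index, index, elem_size):
    # Address(B[i]) = Base Address + (i - Base Index) * Size of Each Element
    return base + (index - base_index) * elem_size

print(element_address(1020, 1300, 1700, 2))  # 1820
```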


7. Consider the multi-dimensional array A (5:20, 10:25, 20:40). Suppose Base
(A)=400 with word size = 4. Find the effective indices E1, E2 & E3 and address of
A[10, 15, 25] using row-major order
To find the effective indices \( E1, E2, \) and \( E3 \) as well as the address of the element \( A[10, 15,
25] \) in a multi-dimensional array using row-major order, we need to follow these steps:

### Step 1: Determine the Dimensions of the Array

The given multi-dimensional array \( A(5:20, 10:25, 20:40) \) has three dimensions with specific
ranges:

- **Dimension 1**: \( i \) from 5 to 20 (size = \( 20 - 5 + 1 = 16 \))

- **Dimension 2**: \( j \) from 10 to 25 (size = \( 25 - 10 + 1 = 16 \))

- **Dimension 3**: \( k \) from 20 to 40 (size = \( 40 - 20 + 1 = 21 \))

### Step 2: Calculate the Effective Indices

**Effective index calculation**:

The effective indices \( E1, E2, \) and \( E3 \) can be calculated as follows:

1. **Effective index for i**:

\[

E1 = i - \text{lower bound of dimension 1} = 10 - 5 = 5

\]

2. **Effective index for j**:

\[

E2 = j - \text{lower bound of dimension 2} = 15 - 10 = 5

\]
3. **Effective index for k**:

\[

E3 = k - \text{lower bound of dimension 3} = 25 - 20 = 5

\]

### Step 3: Calculate the Address of \( A[10, 15, 25] \)

**Row-Major Order Address Calculation**:

The address of an element in a multi-dimensional array in row-major order is calculated using the
formula:

\[

\text{Address}(A[i,j,k]) = \text{Base Address} + \left( E1 \times P2 \times P3 + E2 \times P3 + E3


\right) \times \text{Size of Each Element}

\]

Where:

- \( P2 \) is the size of the second dimension, and \( P3 \) is the size of the third dimension.

- \( P2 = \text{number of elements in second dimension} = 16 \)

- \( P3 = \text{number of elements in third dimension} = 21 \)

### Step 4: Substitute Values into the Address Formula

1. **Base Address**: \( 400 \)

2. **Size of each element**: \( 4 \) bytes

3. **Effective Indices**: \( E1 = 5 \), \( E2 = 5 \), \( E3 = 5 \)


Now plug in the values into the address formula:

\[

\text{Address}(A[10, 15, 25]) = 400 + \left( 5 \times 16 \times 21 + 5 \times 21 + 5 \right) \times 4

\]

### Step-by-Step Calculation

1. Calculate \( 5 \times 16 \times 21 \)

\[

5 \times 16 = 80

\]

\[

80 \times 21 = 1680

\]

2. Calculate \( 5 \times 21 \)

\[

5 \times 21 = 105

\]

3. Add them together with \( E3 \):

\[

1680 + 105 + 5 = 1790

\]

4. Multiply by the size of each element:

\[

1790 \times 4 = 7160


\]

5. Finally, calculate the effective address:

\[

\text{Address}(A[10, 15, 25]) = 400 + 7160 = 7560

\]

### Summary

- **Effective Indices**:

- \( E1 = 5 \)

- \( E2 = 5 \)

- \( E3 = 5 \)

- **Address of \( A[10, 15, 25] \)**:

- **7560**

Thus, the final results are:

- **Effective Indices**: \( E1 = 5, E2 = 5, E3 = 5 \)

- **Address of \( A[10, 15, 25] \)**: **7560**
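The row-major calculation above can be packaged into a reusable function. A minimal sketch (the `row_major_address_3d` helper and its parameter names are illustrative additions):

```python
def row_major_address_3d(base, word, lows, sizes, idx):
    """Address of A[i][j][k] in row-major order for a 3-D array.

    lows  = (l1, l2, l3): lower bounds of each dimension
    sizes = (P1, P2, P3): number of elements in each dimension
    idx   = (i, j, k):    the requested indices
    """
    e1, e2, e3 = (i - l for i, l in zip(idx, lows))  # effective indices
    return base + (e1 * sizes[1] * sizes[2] + e2 * sizes[2] + e3) * word

# A(5:20, 10:25, 20:40), Base(A) = 400, word size = 4
print(row_major_address_3d(400, 4, (5, 10, 20), (16, 16, 21), (10, 15, 25)))  # 7560
```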

8. Consider an integer array of size 3X3. The address of the first element is 1048.
Calculate the address of the element at index i = 2, j = 1. (0 based
index)[consider both row major and column major order]
To calculate the address of an element in a 2D array given its base address, we can use the formulas
for both row-major and column-major order storage. Let's break down the calculations step-by-step.

### Given Data:

- **Array Size**: 3x3


- **Base Address**: 1048

- **Size of each integer element**: typically, an integer is 4 bytes (unless specified otherwise).

- **Element Indices**: \( i = 2, j = 1 \) (0-based indexing).

### Address Calculation

### **1. Row-Major Order**

In row-major order, the elements of the array are stored row by row. The address of an element \(
A[i][j] \) is calculated using the formula:

\[

\text{Address}(A[i][j]) = \text{Base Address} + ((i \times \text{number of columns}) + j) \times


\text{Size of Each Element}

\]

#### Applying the Formula:

1. **Base Address**: 1048

2. **Number of Columns**: 3 (since it’s a 3x3 array)

3. **Size of Each Element**: 4 bytes (size of an integer)

Substituting the values into the formula:

\[

\text{Address}(A[2][1]) = 1048 + ((2 \times 3) + 1) \times 4

\]

Calculating the expression step-by-step:


1. Calculate the linear index:

\[

(2 \times 3) + 1 = 6 + 1 = 7

\]

2. Multiply by the size of each element:

\[

7 \times 4 = 28

\]

3. Add to the base address:

\[

\text{Address}(A[2][1]) = 1048 + 28 = 1076

\]

### **Row-Major Order Address**: **1076**

### **2. Column-Major Order**

In column-major order, the elements of the array are stored column by column. The address of an
element \( A[i][j] \) is calculated using the formula:

\[

\text{Address}(A[i][j]) = \text{Base Address} + ((j \times \text{number of rows}) + i) \times \text{Size


of Each Element}

\]

#### Applying the Formula:


1. **Base Address**: 1048

2. **Number of Rows**: 3 (since it’s a 3x3 array)

3. **Size of Each Element**: 4 bytes (size of an integer)

Substituting the values into the formula:

\[

\text{Address}(A[2][1]) = 1048 + ((1 \times 3) + 2) \times 4

\]

Calculating the expression step-by-step:

1. Calculate the linear index:

\[

(1 \times 3) + 2 = 3 + 2 = 5

\]

2. Multiply by the size of each element:

\[

5 \times 4 = 20

\]

3. Add to the base address:

\[

\text{Address}(A[2][1]) = 1048 + 20 = 1068

\]

### **Column-Major Order Address**: **1068**


### Summary of Addresses

- **Address of \( A[2][1] \) in Row-Major Order**: **1076**

- **Address of \( A[2][1] \) in Column-Major Order**: **1068**
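Both orderings can be computed with one small function, which makes the row/column difference easy to compare (the `address_2d` helper is an illustrative addition):

```python
def address_2d(base, rows, cols, i, j, elem_size, order="row"):
    """Address of A[i][j] (0-based) in a rows x cols array."""
    if order == "row":
        offset = (i * cols + j) * elem_size  # rows stored one after another
    else:
        offset = (j * rows + i) * elem_size  # columns stored one after another
    return base + offset

print(address_2d(1048, 3, 3, 2, 1, 4, "row"))  # 1076
print(address_2d(1048, 3, 3, 2, 1, 4, "col"))  # 1068
```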

9. Apply all sorting methods (bubble, selection, insertion, merge, quick) to the
elements of the array A = [15, 5, 9, 26, 11, 20]. Analyze which sorting
method you would prefer and why.
Let's apply various sorting algorithms (Bubble Sort, Selection Sort, Insertion Sort, Merge Sort, and
Quick Sort) to the given array \( A = [15, 5, 9, 26, 11, 20] \) and analyze their performance.

### 1. Bubble Sort

**Algorithm**:

Bubble Sort repeatedly steps through the list, compares adjacent elements and swaps them if they are
in the wrong order. This process is repeated until no swaps are needed.

**Passes**:

- Initial: \([15, 5, 9, 26, 11, 20]\)

- 1st Pass: \([5, 9, 15, 11, 20, 26]\) (each out-of-order adjacent pair is swapped; the largest element, \(26\), bubbles to the end)

- 2nd Pass: \([5, 9, 11, 15, 20, 26]\) (swap \(15\) and \(11\))

- 3rd Pass: no swaps are needed, so the array is sorted.

**Final output**: `[5, 9, 11, 15, 20, 26]`

**Time Complexity**:

- Worst-case: \( O(n^2) \)

- Best-case: \( O(n) \) (already sorted)

**Space Complexity**: \( O(1) \)


---

### 2. Selection Sort

**Algorithm**:

Selection Sort divides the input list into two parts: the sorted part at the left and the unsorted part at
the right. It repeatedly selects the smallest (or largest) element from the unsorted part and moves it
to the sorted part.

**Passes**:

- Initial: \([15, 5, 9, 26, 11, 20]\)

- 1st Pass: \([5, 15, 9, 26, 11, 20]\) (Select \(5\))

- 2nd Pass: \([5, 9, 15, 26, 11, 20]\) (Select \(9\))

- 3rd Pass: \([5, 9, 11, 26, 15, 20]\) (Select \(11\))

- 4th Pass: \([5, 9, 11, 15, 26, 20]\) (Select \(15\))

- 5th Pass: \([5, 9, 11, 15, 20, 26]\) (Select \(20\))

**Final output**: `[5, 9, 11, 15, 20, 26]`

**Time Complexity**:

- Worst-case: \( O(n^2) \)

- Best-case: \( O(n^2) \) (always performs about the same)

**Space Complexity**: \( O(1) \)

---

### 3. Insertion Sort


**Algorithm**:

Insertion Sort builds the sorted array one element at a time by repeatedly taking the next element
from the unsorted portion and inserting it into the appropriate position within the sorted portion.

**Passes** (elements are taken left to right: \(5, 9, 26, 11, 20\)):

- Initial: \([15, 5, 9, 26, 11, 20]\)

- 1st Pass: \([5, 15, 9, 26, 11, 20]\) (Insert \(5\))

- 2nd Pass: \([5, 9, 15, 26, 11, 20]\) (Insert \(9\))

- 3rd Pass: \([5, 9, 15, 26, 11, 20]\) (Insert \(26\); it is already in place)

- 4th Pass: \([5, 9, 11, 15, 26, 20]\) (Insert \(11\))

- 5th Pass: \([5, 9, 11, 15, 20, 26]\) (Insert \(20\))

**Final output**: `[5, 9, 11, 15, 20, 26]`

**Time Complexity**:

- Worst-case: \( O(n^2) \)

- Best-case: \( O(n) \) (already sorted)

**Space Complexity**: \( O(1) \)

---

### 4. Merge Sort

**Algorithm**:

Merge Sort is a divide-and-conquer algorithm that divides the array into halves, sorts each half, and
then merges them back together.
**Steps**:

- Split: \([15, 5, 9, 26, 11, 20] \rightarrow [15, 5, 9]\) and \([26, 11, 20]\)

- Split further: \([15, 5, 9] \rightarrow [15] \text{ and } [5, 9]\)

- Split: \([5, 9] \rightarrow [5] \text{ and } [9]\)

- Merge: \([5, 9]\) becomes \([5, 9]\)

- Merge back \([15] \text{ and } [5, 9] \rightarrow [5, 9, 15]\)

- Now merge the other half: \([26, 11, 20] \rightarrow [26] \text{ and } [11, 20]\)

- Merge \( [11] \text{ and } [20] \rightarrow [11, 20] \)

- Finally merge: \([26] \text{ and } [11, 20] \rightarrow [11, 20, 26]\)

- Merge the two sorted halves: \([5, 9, 15] \text{ and } [11, 20, 26]\)

**Final output**: `[5, 9, 11, 15, 20, 26]`

**Time Complexity**:

- Worst-case: \( O(n \log n) \)

- Best-case: \( O(n \log n) \)

**Space Complexity**: \( O(n) \) (auxiliary array)

---

### 5. Quick Sort

**Algorithm**:

Quick Sort also uses a divide-and-conquer approach. It selects a "pivot" element and partitions the
array into elements less than the pivot and elements greater than the pivot, then recursively sorts the
partitions.
**Steps**:

- Initial: \( [15, 5, 9, 26, 11, 20] \)

- Choose \( 15 \) as the pivot:

- Partition: \( [5, 9, 11, 15, 26, 20] \)

- Recursively sort \( [5, 9, 11] \) and \( [26, 20] \):

- For \( [5, 9, 11] \), using \( 9 \), partition results in \( [5, 9, 11] \)

- For \( [26, 20] \), using \( 26 \), results in \( [20, 26] \)

- Combine: \( [5, 9, 11, 15, 20, 26] \)

**Final output**: `[5, 9, 11, 15, 20, 26]`

**Time Complexity**:

- Worst-case: \( O(n^2) \) (unbalanced partitions)

- Best-case: \( O(n \log n) \)

**Space Complexity**: \( O(\log n) \) on average (due to recursive call stack)

---

### Summary of Sorting Algorithms

| Algorithm | Final Output | Worst-case Time Complexity | Best-case Time Complexity | Space Complexity |
|----------------|------------------------|----------------------------|---------------------------|------------------|
| **Bubble Sort** | [5, 9, 11, 15, 20, 26] | \( O(n^2) \) | \( O(n) \) | \( O(1) \) |
| **Selection Sort** | [5, 9, 11, 15, 20, 26] | \( O(n^2) \) | \( O(n^2) \) | \( O(1) \) |
| **Insertion Sort** | [5, 9, 11, 15, 20, 26] | \( O(n^2) \) | \( O(n) \) | \( O(1) \) |
| **Merge Sort** | [5, 9, 11, 15, 20, 26] | \( O(n \log n) \) | \( O(n \log n) \) | \( O(n) \) |
| **Quick Sort** | [5, 9, 11, 15, 20, 26] | \( O(n^2) \) | \( O(n \log n) \) | \( O(\log n) \) |

### Preferred Sorting Method

**Merge Sort** or **Quick Sort**:

- **Why?**

- Both have superior average and worst-case performance, especially with larger datasets.

- Merge Sort guarantees \( O(n \log n) \) even in the worst-case scenario, while Quick Sort is
generally faster with good average performance but can deteriorate to \( O(n^2) \) if the pivot
selection is poor.

- Merge Sort's use of additional space can be a downside, but it's effective for stability and large
arrays.

**In summary**, for small datasets like the one provided, simpler algorithms like Insertion Sort or
even Bubble Sort could suffice for their straightforward implementation, but for scalability and
efficiency, Quick Sort or Merge Sort is preferred for larger or more complex datasets.
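For a dataset this small, a simple sort is easy to verify end to end. A minimal sketch of bubble sort with the early-exit optimization discussed above (an illustrative implementation, not taken from the original answer), checked against the array from the question:

```python
def bubble_sort(arr):
    """Bubble sort with early exit when a full pass makes no swaps."""
    a = list(arr)  # work on a copy
    n = len(a)
    for i in range(n - 1):
        swapped = False
        for j in range(n - 1 - i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:  # array already sorted: best case O(n)
            break
    return a

A = [15, 5, 9, 26, 11, 20]
print(bubble_sort(A))               # [5, 9, 11, 15, 20, 26]
print(bubble_sort(A) == sorted(A))  # True
```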

10. How do you justify whether N^5 + N^3 + N + 1 = O(N^9)?


To determine whether \( N^5 + N^3 + N + 1 = O(N^9) \), we can use the definition of Big-O notation.
According to this definition, we say that a function \( f(N) \) is \( O(g(N)) \) if there exist positive
constants \( C \) and \( N_0 \) such that:

\[

f(N) \leq C \cdot g(N) \quad \text{for all } N \geq N_0

\]

In this case, we want to check if:

\[
N^5 + N^3 + N + 1 \leq C \cdot N^9 \quad \text{for sufficiently large } N

\]

### Step-by-Step Justification

1. **Identify the function**:

We have:

- \( f(N) = N^5 + N^3 + N + 1 \)

- \( g(N) = N^9 \)

2. **Analyze \( f(N) \)**:

As \( N \) becomes large, the dominant term in \( f(N) \) is \( N^5 \) because it grows faster than the
other terms. Thus, we can focus on this dominant term for our comparison.

3. **Establish the relationship**:

For large values of \( N \), we can assert:

\[

f(N) \approx N^5 \quad \text{as } N \to \infty

\]

Now, let's show that:

\[

N^5 + N^3 + N + 1 \leq C \cdot N^9

\]

for some constant \( C \) and for sufficiently large \( N \).

4. **Finding \( C \) and \( N_0 \)**:

For large \( N \), the additional terms \( N^3, N, \) and \( 1 \) become relatively insignificant
compared to \( N^5 \).
To simplify, note that each term is at most \( N^5 \) once \( N \geq 1 \), since \( N^3 \leq N^5 \), \( N \leq N^5 \), and \( 1 \leq N^5 \). Therefore:

\[

N^5 + N^3 + N + 1 \leq N^5 + N^5 + N^5 + N^5 = 4N^5 \quad \text{for all } N \geq 1

\]

5. **Inequality Check**:

Now we check if:

\[

4N^5 \leq C \cdot N^9

\]

for \( C = 4 \):

\[

4N^5 \leq 4N^9 \quad \Rightarrow \quad 1 \leq N^4 \quad \Rightarrow \quad N \geq 1

\]

This inequality holds true for \( N \geq 1 \).

### Conclusion

We have shown that:

\[

N^5 + N^3 + N + 1 \leq 4N^5 \leq 4N^9 \quad \text{for } N \geq 1


\]

Thus, we can conclude:

\[

N^5 + N^3 + N + 1 = O(N^9)

\]

In summary, \( N^5 + N^3 + N + 1 \) is indeed \( O(N^9) \).
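The constants found in the proof (\( C = 4 \), \( N_0 = 1 \)) can be spot-checked numerically, a useful sanity test for any Big-O argument (the snippet is an illustrative check, not a proof):

```python
def f(N):
    return N**5 + N**3 + N + 1

# Constants from the proof: C = 4, N0 = 1.
# Chained comparison checks both f(N) <= 4*N^5 and 4*N^5 <= 4*N^9.
assert all(f(N) <= 4 * N**5 <= 4 * N**9 for N in range(1, 1001))
print("f(N) <= 4*N^5 <= 4*N^9 holds for all N in [1, 1000]")
```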

11. Implement singly linked list. Doubly linked list and circular linked list.
Perform all the given operations (AddFirst, AddLast, AddSpecific, DeleteFirst,
DeleteLast, DeleteSpecific).
Here's the implementation of Singly Linked List, Doubly Linked List, and Circular
Linked List in Java, covering the required operations: AddFirst, AddLast,
AddSpecific, DeleteFirst, DeleteLast, and DeleteSpecific.

---

1. Singly Linked List (Java)

```java
class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
        this.next = null;
    }
}

public class SinglyLinkedList {
    Node head;

    // Add at the beginning
    public void AddFirst(int data) {
        Node newNode = new Node(data);
        newNode.next = head;
        head = newNode;
    }

    // Add at the end
    public void AddLast(int data) {
        Node newNode = new Node(data);
        if (head == null) {
            head = newNode;
            return;
        }
        Node temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.next = newNode;
    }

    // Add at a specific (1-based) position
    public void AddSpecific(int data, int position) {
        if (position == 1) {
            AddFirst(data);
            return;
        }
        Node newNode = new Node(data);
        Node temp = head;
        for (int i = 1; i < position - 1; i++) {
            if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
            temp = temp.next;
        }
        if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
        newNode.next = temp.next;
        temp.next = newNode;
    }

    // Delete first node
    public void DeleteFirst() {
        if (head == null) return;
        head = head.next;
    }

    // Delete last node
    public void DeleteLast() {
        if (head == null) return;
        if (head.next == null) {
            head = null;
            return;
        }
        Node temp = head;
        while (temp.next.next != null) {
            temp = temp.next;
        }
        temp.next = null;
    }

    // Delete node at a specific (1-based) position
    public void DeleteSpecific(int position) {
        if (head == null) return;
        if (position == 1) {
            DeleteFirst();
            return;
        }
        Node temp = head;
        for (int i = 1; i < position - 1; i++) {
            if (temp.next == null) throw new IndexOutOfBoundsException("Position out of bounds");
            temp = temp.next;
        }
        if (temp.next == null) throw new IndexOutOfBoundsException("Position out of bounds");
        temp.next = temp.next.next;
    }
}
```

---

2. Doubly Linked List (Java)

class DNode {
    int data;
    DNode next;
    DNode prev;

    DNode(int data) {
        this.data = data;
        this.next = null;
        this.prev = null;
    }
}

public class DoublyLinkedList {
    DNode head;

    // Add at the beginning
    public void AddFirst(int data) {
        DNode newNode = new DNode(data);
        if (head != null) {
            head.prev = newNode;
        }
        newNode.next = head;
        head = newNode;
    }

    // Add at the end
    public void AddLast(int data) {
        DNode newNode = new DNode(data);
        if (head == null) {
            head = newNode;
            return;
        }
        DNode temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.next = newNode;
        newNode.prev = temp;
    }

    // Add at specific position
    public void AddSpecific(int data, int position) {
        if (position == 1) {
            AddFirst(data);
            return;
        }
        DNode newNode = new DNode(data);
        DNode temp = head;
        for (int i = 1; i < position - 1; i++) {
            if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
            temp = temp.next;
        }
        if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
        newNode.next = temp.next;
        if (temp.next != null) temp.next.prev = newNode;
        temp.next = newNode;
        newNode.prev = temp;
    }

    // Delete first node
    public void DeleteFirst() {
        if (head == null) return;
        head = head.next;
        if (head != null) head.prev = null;
    }

    // Delete last node
    public void DeleteLast() {
        if (head == null) return;
        if (head.next == null) {
            head = null;
            return;
        }
        DNode temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.prev.next = null;
    }

    // Delete specific node
    public void DeleteSpecific(int position) {
        if (head == null) return;
        if (position == 1) {
            DeleteFirst();
            return;
        }
        DNode temp = head;
        for (int i = 1; i < position; i++) {
            if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
            temp = temp.next;
        }
        if (temp == null) throw new IndexOutOfBoundsException("Position out of bounds");
        temp.prev.next = temp.next;
        if (temp.next != null) temp.next.prev = temp.prev;
    }
}
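The answer above covers the singly and doubly linked lists; for the circular linked list named in the question, a minimal sketch follows (the class names `CNode`/`CircularLinkedList` are illustrative). It keeps a reference to the tail, whose `next` is the head, so both ends are easy to reach; `AddSpecific` and `DeleteSpecific` would follow the same position-walking pattern as the singly linked list.

```java
class CNode {
    int data;
    CNode next;

    CNode(int data) { this.data = data; }
}

public class CircularLinkedList {
    CNode tail; // tail.next is the head

    // Add at the beginning (just after the tail)
    public void AddFirst(int data) {
        CNode newNode = new CNode(data);
        if (tail == null) {
            tail = newNode;
            tail.next = tail; // a single node points to itself
        } else {
            newNode.next = tail.next;
            tail.next = newNode;
        }
    }

    // Add at the end: insert at the head, then make that node the new tail
    public void AddLast(int data) {
        AddFirst(data);
        tail = tail.next;
    }

    // Delete first node (the one after the tail)
    public void DeleteFirst() {
        if (tail == null) return;
        if (tail.next == tail) { tail = null; return; } // single node
        tail.next = tail.next.next;
    }

    // Delete last node (the tail itself)
    public void DeleteLast() {
        if (tail == null) return;
        if (tail.next == tail) { tail = null; return; } // single node
        CNode temp = tail.next;
        while (temp.next != tail) temp = temp.next; // find the node before the tail
        temp.next = tail.next;
        tail = temp;
    }

    // Print one full cycle of the list
    public void print() {
        if (tail == null) return;
        CNode t = tail.next;
        do {
            System.out.print(t.data + " ");
            t = t.next;
        } while (t != tail.next);
        System.out.println();
    }
}
```

The `AddLast` trick works because inserting after the tail places the node at the head position; advancing `tail` one step turns that same node into the last element.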

12. Explain Sparse Matrix and its representations using Arrays.


Sparse Matrix: Overview

A sparse matrix is a matrix in which most of the elements are zero. It is contrasted with a dense
matrix, where most of the elements are non-zero. Sparse matrices are often encountered in scientific
computing, engineering, and data science, especially when dealing with large datasets or systems.

Why Use Sparse Matrices?

Memory Efficiency: Storing zeros in large matrices is wasteful. Sparse matrices reduce memory usage
by storing only non-zero elements.

Faster Computations: Operations on sparse matrices are optimized to skip zero values, reducing
computation time.

Applications: Commonly used in network graphs, finite element analysis, image processing, and
natural language processing.
---

Representations of Sparse Matrices Using Arrays

Sparse matrices can be represented in arrays using the following methods:

1. Triplet Representation (Coordinate List or COO Format)

This is the simplest representation, where the matrix is represented as a list of triplets (row, column,
value) for each non-zero element.

Structure:

An array of size n x 3, where n is the number of non-zero elements.

Column 1: Row index of the non-zero element.

Column 2: Column index of the non-zero element.

Column 3: Value of the non-zero element.

Example:

Consider the following 4x5 sparse matrix:

0 0 3 0 4
0 0 5 7 0
0 0 0 0 0
0 2 6 0 0

Triplet Representation:

Row | Column | Value
--------------------
 0  |   2    |   3
 0  |   4    |   4
 1  |   2    |   5
 1  |   3    |   7
 3  |   1    |   2
 3  |   2    |   6

Advantages:

Simple to implement.

Easy to traverse.

Disadvantages:

Inefficient for operations like matrix addition or multiplication due to the need to search for row and
column indices.
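The triplet table above can be built from a dense 2D array with a single scan. Below is a minimal sketch (the class and method names are illustrative):

```java
public class TripletBuilder {
    // Builds an n x 3 triplet table (row, column, value) from a dense matrix.
    static int[][] toTriplet(int[][] dense) {
        // First pass: count the non-zero elements.
        int count = 0;
        for (int[] row : dense)
            for (int v : row)
                if (v != 0) count++;

        // Second pass: record (row, column, value) for each non-zero element.
        int[][] triplet = new int[count][3];
        int k = 0;
        for (int i = 0; i < dense.length; i++) {
            for (int j = 0; j < dense[i].length; j++) {
                if (dense[i][j] != 0) {
                    triplet[k][0] = i;            // row index
                    triplet[k][1] = j;            // column index
                    triplet[k][2] = dense[i][j];  // value
                    k++;
                }
            }
        }
        return triplet;
    }

    public static void main(String[] args) {
        // The 4x5 sparse matrix from the example above.
        int[][] m = {
            {0, 0, 3, 0, 4},
            {0, 0, 5, 7, 0},
            {0, 0, 0, 0, 0},
            {0, 2, 6, 0, 0}
        };
        for (int[] t : toTriplet(m))
            System.out.println(t[0] + " | " + t[1] + " | " + t[2]);
    }
}
```

Running this on the example matrix reproduces the six rows of the triplet table shown above.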
---

2. Compressed Sparse Row (CSR) Representation

Also known as Compressed Row Storage, CSR is a more compact and efficient format for row-oriented
operations.

Structure:

Three arrays are used:

1. Values: Contains non-zero elements.

2. Column Indices: Indicates the column index corresponding to each non-zero element.

3. Row Pointers: Contains the index in the values array where each row starts.

Example:

For the same 4x5 sparse matrix:

0 0 3 0 4

0 0 5 7 0

0 0 0 0 0

0 2 6 0 0
CSR Representation:

Values: [3, 4, 5, 7, 2, 6]

Column Indices: [2, 4, 2, 3, 1, 2]

Row Pointers: [0, 2, 4, 4, 6]

Explanation of Row Pointers:

Row 0 starts at index 0 and ends before index 2.

Row 1 starts at index 2 and ends before index 4.

Row 2 has no non-zero elements (both start and end at 4).

Row 3 starts at index 4 and ends before index 6.

Advantages:

Efficient for row-based operations like matrix-vector multiplication.

Reduces storage for very large matrices.

Disadvantages:
More complex to implement than the triplet representation.

Column access is less efficient than row access (CSC is preferred for column-oriented work).
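The row pointers make row operations direct: the non-zeros of row i sit in `values[rowPtr[i] .. rowPtr[i+1]-1]`. A sketch of matrix-vector multiplication over the CSR arrays from the example (the class and parameter names are illustrative):

```java
public class CsrMatVec {
    // Computes y = A * x, where A is stored in CSR form.
    static double[] multiply(double[] values, int[] colIdx, int[] rowPtr, double[] x) {
        int rows = rowPtr.length - 1;
        double[] y = new double[rows];
        for (int i = 0; i < rows; i++) {
            // Non-zeros of row i occupy values[rowPtr[i] .. rowPtr[i+1]-1].
            for (int k = rowPtr[i]; k < rowPtr[i + 1]; k++) {
                y[i] += values[k] * x[colIdx[k]];
            }
        }
        return y;
    }

    public static void main(String[] args) {
        // CSR arrays of the 4x5 example matrix above.
        double[] values = {3, 4, 5, 7, 2, 6};
        int[] colIdx = {2, 4, 2, 3, 1, 2};
        int[] rowPtr = {0, 2, 4, 4, 6};
        double[] x = {1, 1, 1, 1, 1};
        System.out.println(java.util.Arrays.toString(multiply(values, colIdx, rowPtr, x)));
    }
}
```

With x = all ones, each entry of y is simply that row's sum of non-zeros (7, 12, 0, 8), and the empty row 2 costs no work at all, which is exactly the efficiency CSR is designed for.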

---

3. Compressed Sparse Column (CSC) Representation

Similar to CSR but column-oriented, CSC is useful when column-based operations are more frequent.

Structure:

Three arrays:

1. Values: Contains non-zero elements.

2. Row Indices: Indicates the row index corresponding to each non-zero element.

3. Column Pointers: Contains the index in the values array where each column starts.

Example:
For the same 4x5 sparse matrix:

Values: [2, 3, 5, 6, 7, 4]

Row Indices: [3, 0, 1, 3, 1, 0]

Column Pointers: [0, 0, 1, 4, 5, 6] (column 0 has no non-zero elements, so its start and end both sit at 0)

---

Comparison of Representations:

Format | Arrays Used                      | Best For                | Main Drawback
-------|----------------------------------|-------------------------|------------------------------
COO    | (row, column, value) triplets    | Simple construction     | Slow arithmetic operations
CSR    | values, column indices, row ptrs | Row-based operations    | Inefficient column access
CSC    | values, row indices, column ptrs | Column-based operations | Inefficient row access

---

Conclusion:

Sparse matrix representations optimize memory usage and improve performance by storing and
processing only non-zero elements. Choosing the appropriate representation (COO, CSR, or CSC)
depends on the specific operations and structure of the matrix.

13. Differentiate between head and tail recursion. Give a suitable example for
both recursions.
Head vs. Tail Recursion: Key Differences

Recursion is a programming technique where a function calls itself to solve a smaller instance of the
problem. In recursion, we encounter two main types: head recursion and tail recursion.
---

Head Recursion:

In head recursion, the recursive call occurs at the beginning of the function. The recursive call is made
first, and the operations are performed after the call returns.

Characteristics:

The function processes the recursive call before performing any operations.

Execution starts from the last call in the recursion stack, and operations are performed in reverse
order.

Example:

Calculate the sum of natural numbers up to n using head recursion:

public class HeadRecursionExample {

    static void headRecursion(int n) {
        if (n > 0) {
            headRecursion(n - 1); // Recursive call first
            System.out.println(n); // Operation after the recursive call
        }
    }

    public static void main(String[] args) {
        headRecursion(5);
    }
}

Output:

1
2
3
4
5

Explanation:

In this example, the recursive calls keep reducing n until it reaches 0, at which point they return. Only
then does the function print the values in reverse order, from 1 to 5.

---

Tail Recursion:

In tail recursion, the recursive call is the last operation performed in the function. There is no need to
keep track of previous states because no operation is performed after the recursive call.

Characteristics:

The recursive call is the last statement in the function.

Tail recursion is often optimized by the compiler (tail call optimization) to avoid using extra stack
space.
Example:

Calculate the sum of natural numbers up to n using tail recursion:

public class TailRecursionExample {

    static void tailRecursion(int n) {
        if (n > 0) {
            System.out.println(n); // Operation before the recursive call
            tailRecursion(n - 1); // Recursive call last
        }
    }

    public static void main(String[] args) {
        tailRecursion(5);
    }
}

Output:

5
4
3
2
1

Explanation:
In this example, the function prints n first, then makes the recursive call. Since no operations are
pending after the recursive call, the recursion finishes faster, and values are printed in descending
order.

---

Key Differences Summary:

Aspect          | Head Recursion                    | Tail Recursion
----------------|-----------------------------------|-----------------------------------
Recursive call  | First statement in the function   | Last statement in the function
Pending work    | Operations run after call returns | No operations after the call
Stack usage     | Grows with recursion depth        | Can be optimized to constant space
Output (example)| 1 2 3 4 5                         | 5 4 3 2 1

---

Conclusion:

Head recursion is useful when you need to defer processing until after the recursive call.

Tail recursion is more efficient in terms of stack usage and can be optimized to run in constant stack
space, making it preferable when possible.
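The tail-recursive example above prints values but returns nothing; the accumulator pattern below shows how a tail-recursive computation carries its partial result forward so that nothing is pending after the call. (Note that the JVM does not perform tail-call optimization, so in Java this is a structural illustration rather than a space guarantee; the class name `TailSum` is illustrative.)

```java
public class TailSum {
    // Tail-recursive sum of 1..n using an accumulator: the recursive call is
    // the last action, so each frame has no pending work when it returns.
    static int sum(int n, int acc) {
        if (n == 0) return acc;     // base case: the accumulator holds the answer
        return sum(n - 1, acc + n); // tail call with an updated accumulator
    }

    public static void main(String[] args) {
        System.out.println(sum(5, 0)); // sum of 1..5
    }
}
```

In languages with tail-call optimization, this form runs in constant stack space because the compiler can reuse the current frame for the tail call.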

14. Write a java program to reverse an array and a linked list.


Java Program to Reverse an Array and a Linked List

---

1. Reverse an Array

import java.util.Arrays;

public class ReverseArray {

    public static void reverseArray(int[] array) {
        int start = 0;
        int end = array.length - 1;
        while (start < end) {
            // Swap elements at start and end
            int temp = array[start];
            array[start] = array[end];
            array[end] = temp;
            // Move the pointers
            start++;
            end--;
        }
    }

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5};
        System.out.println("Original Array: " + Arrays.toString(array));
        reverseArray(array);
        System.out.println("Reversed Array: " + Arrays.toString(array));
    }
}

Output:

Original Array: [1, 2, 3, 4, 5]

Reversed Array: [5, 4, 3, 2, 1]

Explanation:
This program swaps the elements at the start and end of the array until the pointers meet in the
middle.

---

2. Reverse a Linked List

class Node {
    int data;
    Node next;

    Node(int data) {
        this.data = data;
        this.next = null;
    }
}

public class ReverseLinkedList {
    Node head;

    // Add a node at the end of the list
    public void addLast(int data) {
        Node newNode = new Node(data);
        if (head == null) {
            head = newNode;
            return;
        }
        Node temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.next = newNode;
    }

    // Reverse the linked list
    public void reverseList() {
        Node previous = null;
        Node current = head;
        Node next = null;
        while (current != null) {
            next = current.next;     // Store the next node
            current.next = previous; // Reverse the current node's pointer
            previous = current;      // Move pointers one step forward
            current = next;
        }
        head = previous; // Update the head to the new front
    }

    // Print the linked list
    public void printList() {
        Node temp = head;
        while (temp != null) {
            System.out.print(temp.data + " ");
            temp = temp.next;
        }
        System.out.println();
    }

    public static void main(String[] args) {
        ReverseLinkedList list = new ReverseLinkedList();

        // Add elements to the list
        list.addLast(1);
        list.addLast(2);
        list.addLast(3);
        list.addLast(4);
        list.addLast(5);

        System.out.println("Original Linked List:");
        list.printList();

        // Reverse the list
        list.reverseList();

        System.out.println("Reversed Linked List:");
        list.printList();
    }
}

Output:

Original Linked List:
1 2 3 4 5
Reversed Linked List:
5 4 3 2 1
Explanation:

The reverseList() method iterates through the linked list and reverses the next pointers of each node.

The process continues until the entire list is reversed, and the head pointer is updated to point to the
new first node.

---

Summary:

Array Reversal: Utilizes two pointers to swap elements from both ends of the array.

Linked List Reversal: Uses three pointers (previous, current, next) to reverse each node's next pointer in a single pass.
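The iterative reversal shown above is the standard approach; a recursive variant is also common and worth comparing. Below is a sketch (the names `RNode`/`RecursiveReverse` are illustrative); unlike the iterative version, it uses O(n) call-stack space.

```java
class RNode {
    int data;
    RNode next;

    RNode(int data) { this.data = data; }
}

public class RecursiveReverse {
    // Reverses the list headed at node and returns the new head.
    static RNode reverse(RNode node) {
        if (node == null || node.next == null) return node; // empty or single node
        RNode newHead = reverse(node.next); // reverse everything after this node
        node.next.next = node;              // hook the old successor back to us
        node.next = null;                   // this node becomes the new tail
        return newHead;
    }

    public static void main(String[] args) {
        RNode head = new RNode(1);
        head.next = new RNode(2);
        head.next.next = new RNode(3);
        for (RNode t = reverse(head); t != null; t = t.next)
            System.out.print(t.data + " ");
        System.out.println();
    }
}
```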


15. Write a java program to perform arithmetic operations (add and multiply)
on 2 polynomials.
Java Program to Perform Arithmetic Operations (Add and Multiply) on
Polynomials

To represent a polynomial, we'll use a linked list where each node contains a coefficient and an
exponent. The linked list allows efficient traversal and manipulation of polynomial terms.

---

Polynomial Node Definition:

Each node in the polynomial linked list represents a term of the form coefficient * x^exponent.
---

Complete Java Program:

class Node {
    int coefficient;
    int exponent;
    Node next;

    Node(int coefficient, int exponent) {
        this.coefficient = coefficient;
        this.exponent = exponent;
        this.next = null;
    }
}

public class PolynomialOperations {
    Node head;

    // Add a term to the polynomial
    public void addTerm(int coefficient, int exponent) {
        Node newNode = new Node(coefficient, exponent);
        if (head == null) {
            head = newNode;
            return;
        }
        Node temp = head;
        while (temp.next != null) {
            temp = temp.next;
        }
        temp.next = newNode;
    }

    // Display the polynomial
    public void display() {
        Node temp = head;
        while (temp != null) {
            System.out.print(temp.coefficient + "x^" + temp.exponent);
            temp = temp.next;
            if (temp != null) System.out.print(" + ");
        }
        System.out.println();
    }

    // Add two polynomials (terms assumed sorted by descending exponent)
    public static PolynomialOperations addPolynomials(PolynomialOperations p1,
                                                      PolynomialOperations p2) {
        PolynomialOperations result = new PolynomialOperations();
        Node t1 = p1.head, t2 = p2.head;
        while (t1 != null || t2 != null) {
            if (t1 == null) {
                result.addTerm(t2.coefficient, t2.exponent);
                t2 = t2.next;
            } else if (t2 == null) {
                result.addTerm(t1.coefficient, t1.exponent);
                t1 = t1.next;
            } else if (t1.exponent == t2.exponent) {
                result.addTerm(t1.coefficient + t2.coefficient, t1.exponent);
                t1 = t1.next;
                t2 = t2.next;
            } else if (t1.exponent > t2.exponent) {
                result.addTerm(t1.coefficient, t1.exponent);
                t1 = t1.next;
            } else {
                result.addTerm(t2.coefficient, t2.exponent);
                t2 = t2.next;
            }
        }
        return result;
    }

    // Multiply two polynomials
    public static PolynomialOperations multiplyPolynomials(PolynomialOperations p1,
                                                           PolynomialOperations p2) {
        PolynomialOperations result = new PolynomialOperations();
        for (Node t1 = p1.head; t1 != null; t1 = t1.next) {
            PolynomialOperations tempResult = new PolynomialOperations();
            for (Node t2 = p2.head; t2 != null; t2 = t2.next) {
                int newCoefficient = t1.coefficient * t2.coefficient;
                int newExponent = t1.exponent + t2.exponent;
                tempResult.addTerm(newCoefficient, newExponent);
            }
            result = addPolynomials(result, tempResult); // Add the intermediate result
        }
        return result;
    }

    public static void main(String[] args) {
        // First Polynomial: 3x^3 + 4x^2 + 2
        PolynomialOperations p1 = new PolynomialOperations();
        p1.addTerm(3, 3);
        p1.addTerm(4, 2);
        p1.addTerm(2, 0);
        System.out.print("First Polynomial: ");
        p1.display();

        // Second Polynomial: 5x^2 + 1
        PolynomialOperations p2 = new PolynomialOperations();
        p2.addTerm(5, 2);
        p2.addTerm(1, 0);
        System.out.print("Second Polynomial: ");
        p2.display();

        // Addition of Polynomials
        PolynomialOperations sum = addPolynomials(p1, p2);
        System.out.print("Sum of Polynomials: ");
        sum.display();

        // Multiplication of Polynomials
        PolynomialOperations product = multiplyPolynomials(p1, p2);
        System.out.print("Product of Polynomials: ");
        product.display();
    }
}

---

Explanation:

1. Node Class: Represents a term in the polynomial with a coefficient, exponent, and a pointer next to
the next term.

2. PolynomialOperations Class: Contains methods to manipulate polynomials:

addTerm(): Adds a term to the polynomial.

display(): Displays the polynomial in readable form.

addPolynomials(): Adds two polynomials by traversing both lists, comparing exponents, and summing
coefficients where exponents match.

multiplyPolynomials(): Multiplies two polynomials by multiplying each term from the first polynomial
with each term of the second polynomial. It adds intermediate results together using the
addPolynomials method.
---

Sample Output:

First Polynomial: 3x^3 + 4x^2 + 2

Second Polynomial: 5x^2 + 1

Sum of Polynomials: 3x^3 + 9x^2 + 3

Product of Polynomials: 15x^5 + 20x^4 + 3x^3 + 14x^2 + 2x^0

---

Key Concepts Used:

Linked List Representation: Efficient for handling polynomials of varying sizes and degrees.

Polynomial Addition: Combines terms with the same exponent.

Polynomial Multiplication: Uses distributive property, multiplying each term and adding results for
matching exponents.

This approach ensures that polynomials are manipulated dynamically, making the solution scalable
for complex operations.
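One caveat: addTerm() appends at the tail, so addPolynomials()'s merge logic only combines like terms correctly when each polynomial's terms are supplied in descending exponent order, as they are in main(). The sketch below shows an order-preserving insert that removes this assumption (the names `PNode`/`SortedPoly`/`insertTerm` are illustrative, not part of the program above):

```java
class PNode {
    int coef, exp;
    PNode next;

    PNode(int coef, int exp) { this.coef = coef; this.exp = exp; }
}

public class SortedPoly {
    PNode head;

    // Inserts a term keeping exponents in descending order and merging
    // any existing term that has the same exponent.
    void insertTerm(int coef, int exp) {
        if (head == null || head.exp < exp) { // new highest exponent: prepend
            PNode n = new PNode(coef, exp);
            n.next = head;
            head = n;
            return;
        }
        if (head.exp == exp) { head.coef += coef; return; } // merge with head
        PNode t = head;
        while (t.next != null && t.next.exp > exp) t = t.next;
        if (t.next != null && t.next.exp == exp) { t.next.coef += coef; return; } // merge
        PNode n = new PNode(coef, exp);
        n.next = t.next;
        t.next = n;
    }

    public static void main(String[] args) {
        SortedPoly p = new SortedPoly();
        p.insertTerm(2, 0);
        p.insertTerm(3, 3);
        p.insertTerm(4, 2);
        p.insertTerm(1, 3); // merges with 3x^3 to give 4x^3
        for (PNode t = p.head; t != null; t = t.next)
            System.out.print(t.coef + "x^" + t.exp + (t.next != null ? " + " : "\n"));
    }
}
```

With such an insert, terms can be supplied in any order and the list always satisfies the sorted-descending invariant that the merge-based addition relies on.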

16. Write a recursive java program to perform sum of n elements of an array.


Recursive Java Program to Find the Sum of N Elements in an Array

In this program, we'll calculate the sum of elements in an array using recursion. The recursive
approach breaks down the problem into smaller sub-problems, summing elements one by one until
the base case is reached.
---

Java Code:

public class ArraySumRecursive {

    // Recursive method to calculate the sum of elements in the array
    static int sumArray(int[] array, int n) {
        // Base case: if the array is empty or we reach the start
        if (n <= 0) {
            return 0;
        }
        // Recursive case: add the last element and recurse for the rest
        return array[n - 1] + sumArray(array, n - 1);
    }

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5}; // Example array
        int n = array.length; // Length of the array
        int sum = sumArray(array, n); // Call the recursive method
        System.out.println("Sum of array elements: " + sum);
    }
}

---
Explanation:

1. Recursive Function:

The function sumArray() takes two parameters: the array and its size n.

Base Case: If n (size of the array to consider) is 0, the sum is 0 (terminates recursion).

Recursive Case: Adds the last element (array[n-1]) and calls itself with a smaller size (n-1).

2. Main Function:

Initializes the array and passes its length to the recursive function.

Prints the final sum returned by the recursive method.

---

Output:

Sum of array elements: 15


---

How Recursion Works (Step-by-step):

For the array {1, 2, 3, 4, 5}:

Step 1: sumArray(array, 5) → array[4] (5) + sumArray(array, 4)

Step 2: sumArray(array, 4) → array[3] (4) + sumArray(array, 3)

Step 3: sumArray(array, 3) → array[2] (3) + sumArray(array, 2)

Step 4: sumArray(array, 2) → array[1] (2) + sumArray(array, 1)

Step 5: sumArray(array, 1) → array[0] (1) + sumArray(array, 0)

Step 6: sumArray(array, 0) → 0 (base case), so the calls unwind to give 1 + 2 + 3 + 4 + 5 = 15.

17. Write java programs to: a. Merge two arrays b. Rotate an array by 2 c. Delete
an element from array d. Frequency of an element in an array e. Find missing
number in an array f. Find smallest and largest numbers in an array g. Add,
multiply, and transpose two matrices
Java Programs for Array and Matrix Operations:

---

a. Merge Two Arrays

import java.util.Arrays;
public class MergeArrays {

    public static void main(String[] args) {
        int[] array1 = {1, 2, 3};
        int[] array2 = {4, 5, 6};
        int[] mergedArray = new int[array1.length + array2.length];
        System.arraycopy(array1, 0, mergedArray, 0, array1.length);
        System.arraycopy(array2, 0, mergedArray, array1.length, array2.length);
        System.out.println("Merged Array: " + Arrays.toString(mergedArray));
    }
}

---

b. Rotate an Array by 2 Positions

import java.util.Arrays;

public class RotateArray {

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5};
        int n = 2; // Number of positions to rotate
        int length = array.length;
        int[] rotatedArray = new int[length];
        for (int i = 0; i < length; i++) {
            rotatedArray[(i + n) % length] = array[i];
        }
        System.out.println("Rotated Array: " + Arrays.toString(rotatedArray));
    }
}

---

c. Delete an Element from an Array

import java.util.Arrays;

public class DeleteElement {

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 4, 5};
        int elementToDelete = 3;
        int[] newArray = Arrays.stream(array)
                               .filter(num -> num != elementToDelete)
                               .toArray();
        System.out.println("Array after deleting " + elementToDelete + ": " + Arrays.toString(newArray));
    }
}

---
d. Frequency of an Element in an Array

public class FrequencyCount {

    public static void main(String[] args) {
        int[] array = {1, 2, 3, 2, 4, 2, 5};
        int element = 2;
        int frequency = 0;
        for (int num : array) {
            if (num == element) {
                frequency++;
            }
        }
        System.out.println("Frequency of " + element + ": " + frequency);
    }
}

---

e. Find the Missing Number in an Array (1 to N)

public class MissingNumber {

    public static void main(String[] args) {
        int[] array = {1, 2, 4, 5, 6}; // Missing number: 3
        int n = array.length + 1;
        int sum = (n * (n + 1)) / 2; // Sum of first n natural numbers
        for (int num : array) {
            sum -= num; // Subtract each element from the sum
        }
        System.out.println("Missing Number: " + sum);
    }
}
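An alternative to the summation approach uses XOR; since x ^ x == 0, XOR-ing all the indices 1..n with all the array values cancels every number except the missing one, and it avoids the integer-overflow risk of n*(n+1)/2 for large n. A minimal sketch (the class and method names are illustrative):

```java
public class MissingNumberXor {
    // Finds the missing number in an array containing 1..n with one value absent.
    static int findMissing(int[] array) {
        int n = array.length + 1;
        int x = 0;
        for (int i = 1; i <= n; i++) x ^= i; // XOR of all expected values 1..n
        for (int v : array) x ^= v;          // cancel the values actually present
        return x;                            // only the missing value remains
    }

    public static void main(String[] args) {
        System.out.println(findMissing(new int[]{1, 2, 4, 5, 6}));
    }
}
```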

---

f. Find the Smallest and Largest Numbers in an Array

public class MinMaxArray {

    public static void main(String[] args) {
        int[] array = {1, 5, 2, 9, 3, 8};
        int min = array[0], max = array[0];
        for (int num : array) {
            if (num < min) min = num;
            if (num > max) max = num;
        }
        System.out.println("Smallest Number: " + min);
        System.out.println("Largest Number: " + max);
    }
}
---

g. Add, Multiply, and Transpose Two Matrices

public class MatrixOperations {

    public static void main(String[] args) {
        int[][] matrix1 = {{1, 2}, {3, 4}};
        int[][] matrix2 = {{5, 6}, {7, 8}};
        addMatrices(matrix1, matrix2);
        multiplyMatrices(matrix1, matrix2);
        transposeMatrix(matrix1);
    }

    static void addMatrices(int[][] a, int[][] b) {
        int[][] result = new int[a.length][a[0].length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < a[0].length; j++) {
                result[i][j] = a[i][j] + b[i][j];
            }
        }
        System.out.println("Addition Result:");
        printMatrix(result);
    }

    static void multiplyMatrices(int[][] a, int[][] b) {
        int[][] result = new int[a.length][b[0].length];
        for (int i = 0; i < a.length; i++) {
            for (int j = 0; j < b[0].length; j++) {
                for (int k = 0; k < a[0].length; k++) {
                    result[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        System.out.println("Multiplication Result:");
        printMatrix(result);
    }

    static void transposeMatrix(int[][] matrix) {
        int[][] transposed = new int[matrix[0].length][matrix.length];
        for (int i = 0; i < matrix.length; i++) {
            for (int j = 0; j < matrix[0].length; j++) {
                transposed[j][i] = matrix[i][j];
            }
        }
        System.out.println("Transposed Matrix:");
        printMatrix(transposed);
    }

    static void printMatrix(int[][] matrix) {
        for (int[] row : matrix) {
            for (int value : row) {
                System.out.print(value + " ");
            }
            System.out.println();
        }
    }
}

---

Summary:

Array Operations: Focus on core tasks like merging, rotating, and manipulating array elements.

Matrix Operations: Covers addition, multiplication, and transposition, which are fundamental in linear
algebra and computer science.

18. Write a Java program to perform the following computations on a given 2D


matrix: Compute P+Q where: P is the sum of all numbers divisible by 3 located
in columns with odd indices (0-based index) of the matrix. Q is the sum of all
numbers divisible by 5 located in rows with even indices (0-based index) of the
matrix. Compute M- N where: M is the sum of all elements on the main diagonal
of the matrix (elements where row index = column index). N is the sum of all
elements on the anti-diagonal of the matrix (elements where row index +
column index = matrix size - 1). Instructions: Implement a Java program to
calculate the values for P+ Q and M- N based on an input matrix. Ensure that
your program can handle matrices of varying sizes, including non-square
matrices and edge cases like empty matrices or matrices with only one
row/column. Provide meaningful error messages or fallback outputs for edge
cases
Java Program for Matrix Computations (P+Q and M-N)

This program computes:

1. P + Q:

P: Sum of numbers divisible by 3 in columns with odd indices.


Q: Sum of numbers divisible by 5 in rows with even indices.

2. M - N:

M: Sum of elements on the main diagonal.

N: Sum of elements on the anti-diagonal.

---

Java Code:

public class MatrixComputations {

    public static void main(String[] args) {
        int[][] matrix = {
            {2, 3, 6},
            {10, 15, 7},
            {9, 5, 12}
        };
        computePplusQ(matrix);
        computeMminusN(matrix);
    }

    // Method to compute P + Q
    public static void computePplusQ(int[][] matrix) {
        int P = 0; // Sum of numbers divisible by 3 in columns with odd indices
        int Q = 0; // Sum of numbers divisible by 5 in rows with even indices

        // Check if the matrix is not empty
        if (matrix.length == 0 || matrix[0].length == 0) {
            System.out.println("Matrix is empty.");
            return;
        }
        int rows = matrix.length;
        int cols = matrix[0].length;

        // Calculate P
        for (int i = 0; i < rows; i++) {
            for (int j = 1; j < cols; j += 2) { // Odd-indexed columns
                if (matrix[i][j] % 3 == 0) {
                    P += matrix[i][j];
                }
            }
        }

        // Calculate Q
        for (int i = 0; i < rows; i += 2) { // Even-indexed rows
            for (int j = 0; j < cols; j++) {
                if (matrix[i][j] % 5 == 0) {
                    Q += matrix[i][j];
                }
            }
        }

        System.out.println("P (Sum of numbers divisible by 3 in odd columns): " + P);
        System.out.println("Q (Sum of numbers divisible by 5 in even rows): " + Q);
        System.out.println("P + Q: " + (P + Q));
    }

    // Method to compute M - N
    public static void computeMminusN(int[][] matrix) {
        int M = 0; // Sum of elements on the main diagonal
        int N = 0; // Sum of elements on the anti-diagonal

        // Check if the matrix is not empty
        if (matrix.length == 0 || matrix[0].length == 0) {
            System.out.println("Matrix is empty.");
            return;
        }
        int rows = matrix.length;
        int cols = matrix[0].length;

        // Ensure the matrix is square for diagonal operations
        if (rows != cols) {
            System.out.println("Matrix is not square, cannot compute main and anti-diagonal sums.");
            return;
        }

        // Calculate M (main diagonal)
        for (int i = 0; i < rows; i++) {
            M += matrix[i][i];
        }

        // Calculate N (anti-diagonal)
        for (int i = 0; i < rows; i++) {
            N += matrix[i][rows - 1 - i];
        }

        System.out.println("M (Main diagonal sum): " + M);
        System.out.println("N (Anti-diagonal sum): " + N);
        System.out.println("M - N: " + (M - N));
    }
}

---

Explanation:

1. Matrix Input:

The program processes a sample 3x3 matrix. You can replace the matrix initialization with any input
values.

2. P Calculation:

Iterates through columns with odd indices (j = 1, 3, ...).

Checks if elements are divisible by 3.


3. Q Calculation:

Iterates through rows with even indices (i = 0, 2, ...).

Checks if elements are divisible by 5.

4. M and N Calculation:

M: Sum of elements where row index = column index.

N: Sum of elements where row index + column index = matrix size - 1 (anti-diagonal).

5. Edge Cases:

Handles non-square matrices for diagonal sums.

Handles empty matrices gracefully.

---
Sample Output:

For the input matrix:

{2, 3, 6},

{10, 15, 7},

{9, 5, 12}

P (Sum of numbers divisible by 3 in odd columns): 18

Q (Sum of numbers divisible by 5 in even rows): 5

P + Q: 23

M (Main diagonal sum): 29

N (Anti-diagonal sum): 30

M - N: -1

---

Test with Different Matrices:

You can test this code with matrices of different sizes (e.g., non-square or edge cases) to ensure
robustness.
