public class MaximumSubarray {

    // Recursively finds the maximum subarray of arr[low..high].
    // Returns {start index, end index, sum}.
    public static int[] findMaxSubarray(int[] arr, int low, int high) {
        if (low == high) {
            return new int[]{low, high, arr[low]};
        }
        int mid = (low + high) / 2;
        int[] leftSubarray = findMaxSubarray(arr, low, mid);
        int[] rightSubarray = findMaxSubarray(arr, mid + 1, high);
        int[] crossingSubarray = findMaxCrossingSubarray(arr, low, mid, high);
        if (leftSubarray[2] >= rightSubarray[2] && leftSubarray[2] >= crossingSubarray[2]) {
            return leftSubarray;
        } else if (rightSubarray[2] >= leftSubarray[2] && rightSubarray[2] >= crossingSubarray[2]) {
            return rightSubarray;
        } else {
            return crossingSubarray;
        }
    }

    // Finds the maximum subarray that crosses the midpoint mid.
    // Returns {start index, end index, sum}.
    private static int[] findMaxCrossingSubarray(int[] arr, int low, int mid, int high) {
        int leftSum = Integer.MIN_VALUE;
        int sum = 0;
        int maxLeft = mid;
        for (int i = mid; i >= low; i--) {
            sum += arr[i];
            if (sum > leftSum) {
                leftSum = sum;
                maxLeft = i;
            }
        }
        int rightSum = Integer.MIN_VALUE;
        sum = 0;
        int maxRight = mid + 1;
        for (int i = mid + 1; i <= high; i++) {
            sum += arr[i];
            if (sum > rightSum) {
                rightSum = sum;
                maxRight = i;
            }
        }
        return new int[]{maxLeft, maxRight, leftSum + rightSum};
    }

    public static void main(String[] args) {
        int[] arr = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
        int[] result = findMaxSubarray(arr, 0, arr.length - 1);
        System.out.println("Maximum subarray:");
        System.out.println("Start index: " + result[0]);
        System.out.println("End index: " + result[1]);
        System.out.println("Sum: " + result[2]);
    }
}
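For the sample array in `main`, the maximum subarray is [4, -1, 2, 1] (indices 3 through 6), so running the program should print:
```
Maximum subarray:
Start index: 3
End index: 6
Sum: 6
```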
Certainly, I'll explain the code to you line by line:
```java
public class MaximumSubarray {
```
This is the definition of the main class that contains the algorithm.
```java
public static int[] findMaxSubarray(int[] arr, int low, int high) {
```
This is the main function that finds the subarray with the maximum sum. It takes as inputs the array `arr`, the lowest index `low`, and the highest index `high`.
```java
if (low == high) {
    return new int[]{low, high, arr[low]};
}
```
This is the base case of the algorithm. If we have only one element, we return an array containing the start index, the end index, and the value of that element.
```java
int mid = (low + high) / 2;
```
We compute the midpoint to split the array into two halves.
```java
int[] leftSubarray = findMaxSubarray(arr, low, mid);
int[] rightSubarray = findMaxSubarray(arr, mid + 1, high);
int[] crossingSubarray = findMaxCrossingSubarray(arr, low, mid, high);
```
We make three calls here:
1. A recursive call for the left half of the array
2. A recursive call for the right half of the array
3. A call to `findMaxCrossingSubarray` for the subarray that crosses the midpoint
```java
if (leftSubarray[2] >= rightSubarray[2] && leftSubarray[2] >= crossingSubarray[2]) {
    return leftSubarray;
} else if (rightSubarray[2] >= leftSubarray[2] && rightSubarray[2] >= crossingSubarray[2]) {
    return rightSubarray;
} else {
    return crossingSubarray;
}
```
We compare the sums of the three subarrays and return the one with the largest sum.
```java
private static int[] findMaxCrossingSubarray(int[] arr, int low, int mid, int high) {
```
This function finds the maximum-sum subarray that crosses the midpoint.
```java
int leftSum = Integer.MIN_VALUE;
int sum = 0;
int maxLeft = mid;
for (int i = mid; i >= low; i--) {
    sum += arr[i];
    if (sum > leftSum) {
        leftSum = sum;
        maxLeft = i;
    }
}
```
We start at the midpoint and move toward the left, keeping track of the largest sum we find and its position.
```java
int rightSum = Integer.MIN_VALUE;
sum = 0;
int maxRight = mid + 1;
for (int i = mid + 1; i <= high; i++) {
    sum += arr[i];
    if (sum > rightSum) {
        rightSum = sum;
        maxRight = i;
    }
}
```
We repeat the same process, but this time moving from the midpoint toward the right.
```java
return new int[]{maxLeft, maxRight, leftSum + rightSum};
```
We return an array containing the start index, the end index, and the sum of the subarray that crosses the midpoint.
```java
public static void main(String[] args) {
    int[] arr = {-2, 1, -3, 4, -1, 2, 1, -5, 4};
    int[] result = findMaxSubarray(arr, 0, arr.length - 1);
    System.out.println("Maximum subarray:");
    System.out.println("Start index: " + result[0]);
    System.out.println("End index: " + result[1]);
    System.out.println("Sum: " + result[2]);
}
```
In the `main` method we create a test array, call `findMaxSubarray`, and then print the result.
Is there any part of the code you would like me to explain further?
Certainly. I'll go through the code and provide the complexity of each significant part, then we'll
derive the overall complexity.
Let's analyze the complexity line by line:
```java
public static int[] findMaxSubarray(int[] arr, int low, int high) {
    if (low == high) {
        // O(1) - constant time operations
        return new int[]{low, high, arr[low]};
    }
    int mid = (low + high) / 2;
    // O(1) - constant time operation
    int[] leftSubarray = findMaxSubarray(arr, low, mid);
    // T(n/2) - recursive call on half of the array
    int[] rightSubarray = findMaxSubarray(arr, mid + 1, high);
    // T(n/2) - recursive call on half of the array
    int[] crossingSubarray = findMaxCrossingSubarray(arr, low, mid, high);
    // O(n) - linear time operation (we'll analyze this function separately)

    // The following if-else block is O(1) - constant time comparisons and return
    if (leftSubarray[2] >= rightSubarray[2] && leftSubarray[2] >= crossingSubarray[2]) {
        return leftSubarray;
    } else if (rightSubarray[2] >= leftSubarray[2] && rightSubarray[2] >= crossingSubarray[2]) {
        return rightSubarray;
    } else {
        return crossingSubarray;
    }
}
```
Now, let's analyze `findMaxCrossingSubarray`:
```java
private static int[] findMaxCrossingSubarray(int[] arr, int low, int mid, int high) {
    // All variable declarations and initializations are O(1)
    for (int i = mid; i >= low; i--) {
        // This loop runs at most n/2 times
        // All operations inside are O(1)
    }
    // Total: O(n/2)
    for (int i = mid + 1; i <= high; i++) {
        // This loop also runs at most n/2 times
        // All operations inside are O(1)
    }
    // Total: O(n/2)
    return new int[]{maxLeft, maxRight, leftSum + rightSum};
    // O(1)
}
```
The `findMaxCrossingSubarray` function has a complexity of O(n): its two loops each run at most n/2 times, so together they visit every element of the range once.
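To make that count concrete (a small worked tally using the variables from the code): the first loop runs from mid down to low and the second from mid + 1 up to high, so together they perform
(mid - low + 1) + (high - mid) = high - low + 1 = n
iterations, each doing a constant amount of work.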
Now, let's derive the overall complexity:
1. The `findMaxSubarray` function makes two recursive calls, each on half of the input size.
2. It also calls `findMaxCrossingSubarray`, which takes O(n) time.
We can express this as a recurrence relation:
T(n) = 2T(n/2) + O(n)
This is the same recurrence relation we discussed earlier, which resolves to O(n log n) using the
Master Theorem.
To break it down:
- The algorithm divides the problem into two subproblems of size n/2.
- It performs this division about log n times (each step halves the input, until we reach the base case
of a single element).
- At each level of recursion, we do O(n) work in the `findMaxCrossingSubarray` function.
Therefore, we have log n levels, each doing O(n) work, giving us a total complexity of O(n log n).
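To see the arithmetic behind that statement (a sketch, writing c for the constant hidden in the O(n) term): level i of the recursion tree contains 2^i subproblems of size n/2^i, so the work at that level is
2^i * c * (n / 2^i) = c * n
With about log n + 1 levels, the total is c * n * (log n + 1) = O(n log n).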
The space complexity is O(log n) due to the recursive call stack. Each recursive call uses constant
extra space, and the maximum depth of the recursion tree is log n.
In summary:
- Time Complexity: O(n log n)
- Space Complexity: O(log n)
This analysis confirms our earlier conclusion about the algorithm's complexity.
Of course, I'd be happy to walk you through the process of computing the complexity step by step.
This is an important skill in algorithm analysis.
Step 1: Identify the recursive structure
Our main function, `findMaxSubarray`, is recursive. It calls itself twice with smaller inputs and also
calls another function, `findMaxCrossingSubarray`. This is key to understanding its complexity.
Step 2: Analyze the non-recursive parts
Let's look at `findMaxCrossingSubarray` first:
- It has two loops, each iterating over at most n/2 elements.
- The operations inside the loops are constant time.
- Total time for this function: O(n/2 + n/2) = O(n)
Step 3: Set up the recurrence relation
In `findMaxSubarray`:
- We make two recursive calls, each on an input of size n/2.
- We call `findMaxCrossingSubarray`, which takes O(n) time.
- The rest of the operations (comparisons, etc.) take constant time.
We can express this as a recurrence relation:
T(n) = 2T(n/2) + O(n)
Where:
- T(n) is the time for an input of size n
- 2T(n/2) represents the two recursive calls
- O(n) represents the time for `findMaxCrossingSubarray` and other operations
Step 4: Solve the recurrence relation
We can solve this using the Master Theorem. The Master Theorem states that for a recurrence of
the form:
T(n) = aT(n/b) + f(n)
Where a ≥ 1, b > 1, and f(n) is a positive function, the complexity is determined by comparing f(n)
with n^(log_b(a)).
In our case:
a = 2 (we make 2 recursive calls)
b = 2 (we divide the input size by 2 each time)
f(n) = n (the non-recursive part is linear)
We need to compare n^(log_2(2)) = n^1 = n with f(n) = n.
Since f(n) = Θ(n^(log_b(a))), we fall into case 2 of the Master Theorem, which states:
If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) * log n)
Step 5: Conclude the complexity
Plugging in our values:
T(n) = Θ(n^(log_2(2)) * log n) = Θ(n * log n) = O(n log n)
Therefore, the time complexity of our algorithm is O(n log n).
Step 6: Analyze space complexity
For space complexity, we consider the maximum depth of the recursion tree:
- The input size is halved in each recursive call.
- The recursion stops when we reach a single element.
- This means the depth of the recursion tree is log n.
- Each recursive call uses constant extra space.
Therefore, the space complexity is O(log n).
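If you want to see that depth concretely, here is a small sketch (this helper is not part of the original program; it mirrors the same midpoint split that `findMaxSubarray` uses but does no subarray work):
```java
// Hypothetical helper: returns the depth of the recursion tree that
// findMaxSubarray would produce for the index range [low, high].
static int recursionDepth(int low, int high) {
    if (low == high) {
        return 1;  // base case: a single element, depth 1
    }
    int mid = (low + high) / 2;
    // the two recursive calls split the range exactly as findMaxSubarray does
    return 1 + Math.max(recursionDepth(low, mid), recursionDepth(mid + 1, high));
}
```
For the 9-element sample array, `recursionDepth(0, 8)` returns 5, which is log2(9) rounded up plus one; the call stack therefore never holds more than O(log n) frames.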
This step-by-step process allows us to rigorously determine the time and space complexity of our
divide-and-conquer algorithm for the Maximum Subarray Problem.