diff --git a/articles/design-twitter-feed.md b/articles/design-twitter-feed.md
index 87eeec612..2c53e29a8 100644
--- a/articles/design-twitter-feed.md
+++ b/articles/design-twitter-feed.md
@@ -325,10 +325,10 @@ class Twitter {
 
 ### Time & Space Complexity
 
-* Time complexity: $O(n \log n)$ for $getNewsFeed()$ and $O(1)$ for remaining methods.
-* Space complexity: $O(1)$
+* Time complexity: $O(n * m + t\log t)$ for each $getNewsFeed()$ call and $O(1)$ for remaining methods.
+* Space complexity: $O(N * m + N * M)$
 
-> Where $n$ is the number of tweets associated with the $useId$ and its $followeeIds$.
+> Where $n$ is the total number of $followeeIds$ associated with the $userId$, $m$ is the maximum number of tweets by any user, $t$ is the total number of tweets associated with the $userId$ and its $followeeIds$, $N$ is the total number of $userIds$ and $M$ is the maximum number of followees for any user.
 
 ---
 
@@ -778,7 +778,7 @@ class Twitter {
 
 ### Time & Space Complexity
 
-* Time complexity: $O(n+\log n)$ for $getNewsFeed()$ and $O(1)$ for remaining methods.
-* Space complexity: $O(1)$
+* Time complexity: $O(n)$ for each $getNewsFeed()$ call and $O(1)$ for remaining methods.
+* Space complexity: $O(N * m + N * M + n)$
 
-> Where $n$ is the number of tweets associated with the $useId$ and its $followeeIds$.
\ No newline at end of file
+> Where $n$ is the total number of $followeeIds$ associated with the $userId$, $m$ is the maximum number of tweets by any user, $N$ is the total number of $userIds$ and $M$ is the maximum number of followees for any user.
\ No newline at end of file
diff --git a/articles/kth-largest-element-in-an-array.md b/articles/kth-largest-element-in-an-array.md
index b901745c4..ba0f51363 100644
--- a/articles/kth-largest-element-in-an-array.md
+++ b/articles/kth-largest-element-in-an-array.md
@@ -76,7 +76,7 @@ class Solution {
 
 ---
 
-## 2. Heap
+## 2. Min-Heap
 
 ::tabs-start
 
diff --git a/articles/kth-largest-integer-in-a-stream.md b/articles/kth-largest-integer-in-a-stream.md
index 9bc24a5b3..7913b93ca 100644
--- a/articles/kth-largest-integer-in-a-stream.md
+++ b/articles/kth-largest-integer-in-a-stream.md
@@ -135,7 +135,7 @@ class KthLargest(k: Int, nums: IntArray) {
 
 ---
 
-## 2. Heap
+## 2. Min-Heap
 
 ::tabs-start
 
diff --git a/articles/task-scheduling.md b/articles/task-scheduling.md
index 1a3f3c55a..d6718ba3f 100644
--- a/articles/task-scheduling.md
+++ b/articles/task-scheduling.md
@@ -341,7 +341,7 @@ class Solution {
 
 ---
 
-## 2. Heap
+## 2. Max-Heap
 
 ::tabs-start
 
@@ -609,7 +609,7 @@ class Solution {
 
 ### Time & Space Complexity
 
 * Time complexity: $O(m)$
-* Space complexity: $O(m)$
+* Space complexity: $O(1)$ since we have at most $26$ different characters.
 
 > Where $m$ is the number of tasks.
@@ -780,7 +780,7 @@ class Solution {
 
 ### Time & Space Complexity
 
 * Time complexity: $O(m)$
-* Space complexity: $O(1)$
+* Space complexity: $O(1)$ since we have at most $26$ different characters.
 
 > Where $m$ is the number of tasks.
@@ -956,6 +956,6 @@ class Solution {
 ### Time & Space Complexity
 
 * Time complexity: $O(m)$
-* Space complexity: $O(1)$
+* Space complexity: $O(1)$ since we have at most $26$ different characters.
 
 > Where $m$ is the number of tasks.
\ No newline at end of file
diff --git a/hints/combination-target-sum.md b/hints/combination-target-sum.md
new file mode 100644
index 000000000..cf6d8d47d
--- /dev/null
+++ b/hints/combination-target-sum.md
@@ -0,0 +1,31 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(2^(t/m)) time and O(t/m) space, where t is the given target and m is the minimum value in the given array.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    Can you think of this problem in terms of a decision tree, where at each step, we have n decisions, where n is the size of the array? In this decision tree, we can observe that different combinations of paths are formed. Can you think of a base condition to stop extending a path? Maybe you should consider the target value.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use backtracking to recursively traverse these paths and make decisions to choose an element at each step. We maintain a variable sum, which represents the sum of all the elements chosen in the current path. We stop this recursive path if sum == target, and add a copy of the chosen elements to the result. How do you implement it?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We recursively select elements, increasing the sum and appending the element to the temporary list, which tracks the chosen elements in the current path. At each step, we have the option to consider all elements in the array, but we only proceed with elements that, when added to sum, do not exceed the target. We iterate through the entire array at each step, choosing elements accordingly.
+    </p>
+</details>
\ No newline at end of file
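For reference alongside these hints, a minimal Python sketch of the backtracking approach they describe. The function name `combination_sum` is illustrative, and starting each loop at the current index (so the same combination is not emitted twice) is an assumption of this sketch rather than something stated in the hint file:

```python
from typing import List

def combination_sum(nums: List[int], target: int) -> List[List[int]]:
    res, path = [], []  # path holds the elements chosen along the current branch

    def dfs(start: int, remaining: int) -> None:
        if remaining == 0:          # the chosen elements sum to the target
            res.append(path[:])     # store a copy of the current combination
            return
        for i in range(start, len(nums)):
            if nums[i] <= remaining:         # never let the running sum exceed the target
                path.append(nums[i])
                dfs(i, remaining - nums[i])  # same index again: an element may be reused
                path.pop()                   # backtrack and try the next element

    dfs(0, target)
    return res

print(combination_sum([2, 3, 6, 7], 7))  # [[2, 2, 3], [7]]
```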
diff --git a/hints/find-median-in-a-data-stream.md b/hints/find-median-in-a-data-stream.md
new file mode 100644
index 000000000..ea07e7519
--- /dev/null
+++ b/hints/find-median-in-a-data-stream.md
@@ -0,0 +1,39 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(logn) time for addNum(), O(1) time for findMedian(), and O(n) space, where n is the current number of elements.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A naive solution would be to store the data stream in an array and sort it each time to find the median, resulting in O(nlogn) time for each findMedian() call. Can you think of a better way? Perhaps using a data structure that allows efficient insertion and retrieval of the median can make the solution more efficient.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    If we divide the elements into two halves, with the smaller values on the left and the larger values on the right, we can find the median in O(1) as long as the left half can efficiently return its maximum and the right half can efficiently return its minimum. These two values determine the median. The process changes slightly if the total number of elements is odd; in that case, the median is the top element of the larger half. Can you think of a data structure suitable for implementing this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We can use heaps (a Max-Heap for the left half and a Min-Heap for the right half). Instead of dividing a fixed array, we store the elements in these heaps as they arrive in the data stream. But how can you keep the two halves equal in size? How do you implement this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 4</summary>
+    <p>
+    We initialize a Max-Heap and a Min-Heap. When adding an element, if it is greater than the minimum element of the Min-Heap, we push it into the Min-Heap; otherwise, we push it into the Max-Heap. If the size difference between the two heaps becomes greater than one, we rebalance them by popping an element from the larger heap and pushing it into the smaller heap. This keeps the elements evenly distributed between the two heaps, allowing us to retrieve the middle element or elements in O(1) time.
+    </p>
+</details>
\ No newline at end of file
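To make the two-heap scheme above concrete, a compact Python sketch; the `MedianFinder` class with `addNum`/`findMedian` follows the usual LeetCode signature and is assumed here, not taken from the hint file:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.small = []  # max-heap of the smaller half (values stored negated)
        self.large = []  # min-heap of the larger half

    def addNum(self, num: int) -> None:
        if self.large and num > self.large[0]:
            heapq.heappush(self.large, num)
        else:
            heapq.heappush(self.small, -num)
        # rebalance so the size difference never exceeds one
        if len(self.small) > len(self.large) + 1:
            heapq.heappush(self.large, -heapq.heappop(self.small))
        elif len(self.large) > len(self.small) + 1:
            heapq.heappush(self.small, -heapq.heappop(self.large))

    def findMedian(self) -> float:
        if len(self.small) > len(self.large):
            return float(-self.small[0])
        if len(self.large) > len(self.small):
            return float(self.large[0])
        return (-self.small[0] + self.large[0]) / 2.0
```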
diff --git a/hints/k-closest-points-to-origin.md b/hints/k-closest-points-to-origin.md
new file mode 100644
index 000000000..277e19160
--- /dev/null
+++ b/hints/k-closest-points-to-origin.md
@@ -0,0 +1,31 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution as good or better than O(nlogk) time and O(k) space, where n is the size of the input array, and k is the number of points to be returned.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A naive solution would be to sort the array in ascending order based on the distances of the points from the origin (0, 0) and return the first k points. This would take O(nlogn) time. Can you think of a better way? Perhaps you could use a data structure that maintains only k points and allows efficient insertion and removal.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use a Max-Heap that keeps the maximum element at its top and allows retrieval in O(1) time. This data structure is ideal because we need to return the k closest points to the origin. By maintaining only k points in the heap, we can efficiently remove the farthest point when the size exceeds k. How would you implement this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We initialize a Max-Heap that orders points based on their distances from the origin. Starting with an empty heap, we iterate through the array of points, inserting each point into the heap. If the size of the heap exceeds k, we remove the farthest point (the maximum element in the heap). After completing the iteration, the heap will contain the k closest points to the origin. Finally, we convert the heap into an array and return it.
+    </p>
+</details>
\ No newline at end of file
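A short Python sketch of the bounded Max-Heap idea from these hints; since Python's heapq is a min-heap, the sketch stores negated squared distances so the farthest retained point sits on top (the function name `k_closest` is illustrative):

```python
import heapq
from typing import List

def k_closest(points: List[List[int]], k: int) -> List[List[int]]:
    heap = []  # holds (-squared_distance, x, y) for at most k points
    for x, y in points:
        heapq.heappush(heap, (-(x * x + y * y), x, y))
        if len(heap) > k:
            heapq.heappop(heap)  # discard the farthest of the k + 1 points
    return [[x, y] for _, x, y in heap]

print(k_closest([[1, 3], [-2, 2]], 1))  # [[-2, 2]]
```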
diff --git a/hints/kth-largest-element-in-an-array.md b/hints/kth-largest-element-in-an-array.md
new file mode 100644
index 000000000..8d68feba3
--- /dev/null
+++ b/hints/kth-largest-element-in-an-array.md
@@ -0,0 +1,39 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution as good or better than O(nlogk) time and O(k) space, where n is the size of the input array, and k represents the rank of the largest number to be returned (i.e., the k-th largest element).
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A naive solution would be to sort the array in descending order and return the k-th largest element. This would be an O(nlogn) solution. Can you think of a better way? Maybe you should think of a data structure which can maintain only the top k largest elements.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use a Min-Heap, which stores elements and keeps the smallest element at its top. When we add an element to the Min-Heap, it takes O(logk) time since we are storing k elements in it. Retrieving the top element (the smallest in the heap) takes O(1) time. How can this be useful for finding the k-th largest element?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    The k-th largest element is the smallest element among the top k largest elements. This means we only need to maintain k elements in our Min-Heap to efficiently determine the k-th largest element. Whenever the size of the Min-Heap exceeds k, we remove the smallest element by popping from the heap. How do you implement this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 4</summary>
+    <p>
+    We initialize an empty Min-Heap. We iterate through the array and add elements to the heap. When the size of the heap exceeds k, we pop from the heap and continue. After the iteration, the top element of the heap is the k-th largest element.
+    </p>
+</details>
\ No newline at end of file
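A minimal Python sketch of the size-k Min-Heap described above (the function name `find_kth_largest` is an illustrative choice):

```python
import heapq
from typing import List

def find_kth_largest(nums: List[int], k: int) -> int:
    heap = []  # min-heap holding the k largest elements seen so far
    for num in nums:
        heapq.heappush(heap, num)
        if len(heap) > k:
            heapq.heappop(heap)  # the popped value cannot be the k-th largest
    return heap[0]  # smallest of the k largest elements

print(find_kth_largest([3, 2, 1, 5, 6, 4], 2))  # 5
```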
diff --git a/hints/kth-largest-integer-in-a-stream.md b/hints/kth-largest-integer-in-a-stream.md
new file mode 100644
index 000000000..2d7d952e8
--- /dev/null
+++ b/hints/kth-largest-integer-in-a-stream.md
@@ -0,0 +1,39 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(mlogk) time and O(k) space, where m is the number of times add() is called, and k represents the rank of the largest number to be tracked (i.e., the k-th largest element).
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A brute force solution would involve sorting the array every time a number is added using add(), and then returning the k-th largest element. This would take O(m * nlogn) time, where m is the number of calls to add() and n is the total number of elements added. However, do we really need to track all the elements added, given that we only need the k-th largest element? Maybe you should think of a data structure which can maintain only the top k largest elements.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use a Min-Heap, which stores elements and keeps the smallest element at its top. When we add an element to the Min-Heap, it takes O(logk) time since we are storing k elements in it. Retrieving the top element (the smallest in the heap) takes O(1) time. How can this be useful for finding the k-th largest element?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    The k-th largest element is the smallest element among the top k largest elements. This means we only need to maintain k elements in our Min-Heap to efficiently determine the k-th largest element. Whenever the size of the Min-Heap exceeds k, we remove the smallest element by popping from the heap. How do you implement this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 4</summary>
+    <p>
+    We initialize a Min-Heap with the elements of the input array. When the add() function is called, we insert the new element into the heap. If the heap size exceeds k, we remove the smallest element (the root of the heap). Finally, the top element of the heap represents the k-th largest element and is returned.
+    </p>
+</details>
\ No newline at end of file
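The same size-k Min-Heap idea, sketched in Python against the usual `KthLargest` constructor/`add` interface (the signature is the standard LeetCode one, assumed here rather than taken from the hint file):

```python
import heapq
from typing import List

class KthLargest:
    def __init__(self, k: int, nums: List[int]):
        self.k = k
        self.heap = list(nums)        # min-heap of at most k largest values
        heapq.heapify(self.heap)
        while len(self.heap) > k:
            heapq.heappop(self.heap)  # keep only the k largest initial values

    def add(self, val: int) -> int:
        heapq.heappush(self.heap, val)
        if len(self.heap) > self.k:
            heapq.heappop(self.heap)
        return self.heap[0]           # current k-th largest value
```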
diff --git a/hints/last-stone-weight.md b/hints/last-stone-weight.md
new file mode 100644
index 000000000..5404da55d
--- /dev/null
+++ b/hints/last-stone-weight.md
@@ -0,0 +1,23 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution as good or better than O(nlogn) time and O(n) space, where n is the size of the input array.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A naive solution would involve simulating the process by sorting the array at each step and processing the two heaviest stones, resulting in an O(n * nlogn) time complexity. Can you think of a better way? Consider using a data structure that efficiently supports insertion and removal of elements and maintains sorted order.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use a Max-Heap, which allows us to retrieve the maximum element in O(1) time. We initially insert all the weights into the Max-Heap, which takes O(logn) time per insertion. We then simulate the process until at most one element remains in the Max-Heap. At each step, we pop two elements from the Max-Heap, which takes O(logn) time. If they are equal, we do not insert anything back into the heap and continue. Otherwise, we insert the difference of the two elements back into the heap.
+    </p>
+</details>
\ No newline at end of file
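A small Python sketch of the simulation with a Max-Heap; heapq only provides a min-heap, so the weights are negated (the function name `last_stone_weight` is illustrative):

```python
import heapq
from typing import List

def last_stone_weight(stones: List[int]) -> int:
    heap = [-s for s in stones]        # negate weights to emulate a max-heap
    heapq.heapify(heap)
    while len(heap) > 1:
        first = -heapq.heappop(heap)   # heaviest stone
        second = -heapq.heappop(heap)  # second heaviest stone
        if first != second:
            heapq.heappush(heap, -(first - second))  # smash: keep the difference
    return -heap[0] if heap else 0

print(last_stone_weight([2, 7, 4, 1, 8, 1]))  # 1
```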
diff --git a/hints/permutations.md b/hints/permutations.md
new file mode 100644
index 000000000..3f3d59477
--- /dev/null
+++ b/hints/permutations.md
@@ -0,0 +1,31 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(n * n!) time and O(n) space, where n is the size of the input array.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    A permutation is an arrangement of the array's numbers in any order; the given order of the array itself also counts as a permutation. This means that at each step we should be able to pick any element from the array that has not been chosen previously. By doing this recursively, we can generate all permutations. How do you implement it?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can use backtracking to explore all possible permutation paths. We initialize a temporary list to hold the chosen elements and a boolean array of size n (the same size as the input array) to track which elements have been picked so far (true means the element is chosen; otherwise, false). At each step of the recursion, we iterate through the entire array, picking elements that have not been chosen previously, and proceed further along that path. Can you think of the base condition to terminate the current recursive path?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We observe that every permutation has the same size as the input array. Therefore, we append a copy of the list of chosen elements in the current path to the result list when the size of that list equals the size of the input array, and then terminate the current recursive path.
+    </p>
+</details>
\ No newline at end of file
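A minimal Python sketch of the backtracking described above, with a boolean `used` array tracking which elements are already on the current path (the function name `permute` is illustrative):

```python
from typing import List

def permute(nums: List[int]) -> List[List[int]]:
    res, path = [], []
    used = [False] * len(nums)  # used[i] is True while nums[i] is on the current path

    def backtrack() -> None:
        if len(path) == len(nums):   # a full permutation has been built
            res.append(path[:])
            return
        for i in range(len(nums)):
            if not used[i]:
                used[i] = True
                path.append(nums[i])
                backtrack()
                path.pop()           # undo the choice before trying the next element
                used[i] = False

    backtrack()
    return res

print(permute([1, 2, 3]))  # 6 permutations
```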
diff --git a/hints/subsets.md b/hints/subsets.md
new file mode 100644
index 000000000..63a694ba2
--- /dev/null
+++ b/hints/subsets.md
@@ -0,0 +1,39 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(n * (2^n)) time and O(n) space, where n is the size of the input array.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    If the array is empty, the only subset is the empty subset. For an array [1] of size 1, there are two subsets, [[], [1]], in the output. Can you see why the output looks like this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We can see that one of those subsets includes the number and the other does not. From this, we can conclude that for every element we need the subsets that include it and the subsets that do not. This results in 2^n subsets for an array of size n, because each element can independently be included or excluded. Since the elements are unique, duplicate subsets will not be formed as long as we don't pick the same element more than once in the current subset. Which algorithm is helpful for generating all subsets, and how would you implement it?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We can use backtracking to generate all possible subsets. We iterate through the given array with an index i and an initially empty temporary list representing the current subset. We recursively process each index, adding the corresponding element to the current subset and continuing, which results in a subset that includes that element. Alternatively, we skip the element by not adding it to the subset and proceed to the next index, forming a subset without that element. What can be the base condition to end this recursion?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 4</summary>
+    <p>
+    When the index i reaches the end of the array, we append a copy of the subset formed in that particular recursive path to the result list and return. All subsets of the given array are generated from these different recursive paths, which represent various combinations of "include" and "not include" decisions for the elements of the array. As we only iterate from left to right in the array, we never pick an element more than once.
+    </p>
+</details>
\ No newline at end of file
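A short Python sketch of the include/exclude recursion from these hints (the function name `subsets` is illustrative):

```python
from typing import List

def subsets(nums: List[int]) -> List[List[int]]:
    res, subset = [], []

    def dfs(i: int) -> None:
        if i == len(nums):          # every element has been decided on
            res.append(subset[:])
            return
        subset.append(nums[i])      # decision 1: include nums[i]
        dfs(i + 1)
        subset.pop()                # decision 2: exclude nums[i]
        dfs(i + 1)

    dfs(0)
    return res

print(subsets([1, 2, 3]))  # 8 subsets
```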
diff --git a/hints/task-scheduling.md b/hints/task-scheduling.md
new file mode 100644
index 000000000..1b351db49
--- /dev/null
+++ b/hints/task-scheduling.md
@@ -0,0 +1,39 @@
+<br>
+<details>
+    <summary>Recommended Time & Space Complexity</summary>
+    <p>
+    You should aim for a solution with O(m) time and O(1) space, where m is the size of the input array.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 1</summary>
+    <p>
+    There are at most 26 different tasks, represented by A through Z. It is more efficient to count the frequency of each task and store it in a hash map or an array of size 26. Can you think of a way to determine which task should be processed first?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 2</summary>
+    <p>
+    We should always process the most frequent task first. After selecting the most frequent task, we must ensure that it is not processed again until after n seconds, due to the cooldown condition. Can you think of an efficient way to select the most frequent task and enforce the cooldown? Perhaps you could use a data structure that retrieves the maximum element in O(1) time and another data structure to hold the processed tasks while they cool down.
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 3</summary>
+    <p>
+    We can use a Max-Heap to efficiently retrieve the most frequent task at any given instance. However, to enforce the cooldown period, we must temporarily hold off from reinserting the processed task into the heap. This is where a queue data structure comes in handy. It helps maintain the order of processed tasks. Can you implement this?
+    </p>
+</details>
+
+<br>
+<details>
+    <summary>Hint 4</summary>
+    <p>
+    We start by calculating the frequency of each task and initialize a variable time to track the total processing time. The task frequencies are inserted into a Max-Heap. We also use a queue to store tasks along with the time at which they become available again after the cooldown. At each step, if the Max-Heap is empty, we advance time to the availability time of the next task in the queue, covering the idle period. Otherwise, we process the most frequent task from the heap, decrement its frequency, and, if it still has occurrences left, add it to the queue with its next available time. Whenever the task at the front of the queue becomes available, we pop it and reinsert it into the heap.
+    </p>
+</details>
\ No newline at end of file
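A Python sketch of the Max-Heap plus cooldown-queue simulation described above; heapq is a min-heap, so the frequencies are stored negated, and the `least_interval` name follows the usual LeetCode signature (an assumption of this sketch):

```python
import heapq
from collections import Counter, deque
from typing import List

def least_interval(tasks: List[str], n: int) -> int:
    counts = Counter(tasks)
    max_heap = [-cnt for cnt in counts.values()]  # negated counts emulate a max-heap
    heapq.heapify(max_heap)

    time = 0
    cooldown = deque()  # pairs of (negated remaining count, time when the task may run again)

    while max_heap or cooldown:
        if not max_heap:
            time = cooldown[0][1]              # jump over the idle gap
        else:
            time += 1
            cnt = 1 + heapq.heappop(max_heap)  # run the most frequent task once
            if cnt:                            # occurrences remain; start its cooldown
                cooldown.append((cnt, time + n))
        if cooldown and cooldown[0][1] == time:
            heapq.heappush(max_heap, cooldown.popleft()[0])

    return time

print(least_interval(["A", "A", "A", "B", "B", "B"], 2))  # 8
```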