Getting a TLE when trying to solve the "Maximum Subarray" problem on LeetCode

I think that every algorithm you learn is another tool in your belt. You also get a sense of how to apply different algorithms, and how to carry their concepts over to similar situations. And you develop your “algorithm brain”, which builds better intuition for solving these things.

For all of this, the more work you do yourself, the better you learn.

2 Likes

Absolutely. It also helps to know that there are paradigms for algorithm design. For instance, there are greedy, divide-and-conquer, dynamic programming, network flow, approximation, and probably-correct (randomized) algorithms.

Each algorithm you encounter sits within one of these paradigms. As an example, both quicksort and mergesort are divide-and-conquer algorithms. But they have different properties. Quicksort is O(n^2) in the worst case and isn’t stable. Mergesort is O(n log n) in the worst case and is stable, but requires more space.
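To make the divide-and-conquer shape concrete, here's a minimal merge sort sketch in Python (my own illustration, not tied to any particular problem): split the input, recurse on each half, then combine.

```python
def merge_sort(xs):
    # Divide: a list of 0 or 1 elements is already sorted.
    if len(xs) <= 1:
        return xs
    mid = len(xs) // 2
    left = merge_sort(xs[:mid])    # Conquer each half recursively.
    right = merge_sort(xs[mid:])
    # Combine: merge two sorted halves in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:    # <= keeps the sort stable.
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Note the space cost mentioned above: the merge step builds a new list, whereas quicksort can partition in place.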

Learning the design families/paradigms helps tremendously and gives you a starting point when faced with a problem you’ve never seen.

jrm

2 Likes

But in a job interview I won’t be allowed to look stuff up. What will I do then?

Ok.

Also, another question: do you think this algorithm is a hard one? I want to know where I stand and how much more I need to learn.

Thank you for your answer!!

Okay, I will do those things.

Very true. I was doing a problem that could be solved using fast and slow pointers, so I watched a video on what they were. It then happened that the next problem could also be solved with fast and slow pointers, and I correctly guessed that it would need them! I wasn’t able to come up with the code to solve it, though :frowning:
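(For reference, the canonical fast-and-slow-pointer example is Floyd's cycle detection on a linked list. A minimal sketch, with a hypothetical Node class, not the exact problem referenced above:)

```python
class Node:
    """A hypothetical singly linked list node, just for this sketch."""
    def __init__(self, val, next=None):
        self.val = val
        self.next = next

def has_cycle(head):
    # Fast pointer advances two nodes per step, slow advances one.
    # If the list has a cycle, fast eventually laps slow and they
    # meet; if it doesn't, fast falls off the end of the list.
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```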

Woah, that’s a lot of stuff. I’ve heard of greedy, D&C, and DP.

Okay I will learn those things.

Assumes facts not in evidence. Who says you won’t be able to look stuff up during an interview? I expect this will vary by interview. In any case, you’ve solved the problem. You simply haven’t solved it according to the constraints imposed by LeetCode.

Outside of computational complexity theory, “hard” doesn’t mean much. Experience matters here. As I said, I’m a computer scientist, I have a terminal degree, and I’ve been at this for a couple decades now. What’s hard for me and what’s hard for you won’t be the same.

That said, I think the problem is perfect for a week-long undergraduate homework assignment. I’d want to see the O(n) algorithm developed and analyzed for asymptotic run time, and I’d be driving home the point that multiple linear passes through the data are still linear.

jrm

Okay

True.

I’m not as bad as I thought I was.

What’s that?

So, if I understand correctly, you mean that you would make sure the time complexity is O(n) no matter what?

Again, thanks a lot for your answer. I’m learning a lot :slightly_smiling_face:

I don’t think it’s hard in the sense of difficult to implement. It does take a “trick”, so that makes it a little “tricky”. I mean, the brute force solution is pretty easy, but if you want the good solution, it gets a little tricky. It’s not a particularly difficult trick, but if you haven’t done a lot of these, it can be elusive.
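To make “pretty easy” concrete, the brute force just tries every subarray (a sketch with my own names, assuming a non-empty input as the problem guarantees):

```python
def max_subarray_brute(nums):
    # Fix each start index i, then extend the subarray one element
    # at a time, keeping a running sum. Two nested loops: O(n^2),
    # which is the kind of solution that earns a TLE on big inputs.
    best = nums[0]
    for i in range(len(nums)):
        running = 0
        for j in range(i, len(nums)):
            running += nums[j]
            best = max(best, running)
    return best
```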

1 Like

Big-O run time analysis.

It should be. The O(n) approach to this problem is relatively straightforward but it requires multiple passes through the data. Students frequently think that O(2*n) > O(n). It isn’t – the multiplier gets washed away in the limit. That’s the point I’d be making.
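As a toy illustration (deliberately not the subarray solution, so it doesn’t spoil the question below): two complete passes are about 2n steps, which is still linear.

```python
def value_range(nums):
    # Assumes nums is non-empty. Two complete passes over the data:
    # one for the maximum, one for the minimum. That's ~2n steps,
    # but O(2n) = O(n): the constant washes away in the limit.
    hi = nums[0]
    for x in nums:   # pass 1
        hi = max(hi, x)
    lo = nums[0]
    for x in nums:   # pass 2
        lo = min(lo, x)
    return lo, hi
```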

Speaking of which, have you figured out the O(n) approach yet?

jrm

Okay.

Ok

No :frowning:

So I should correct my previous comment – an O(n) solution only requires a single pass through the data. Comments about the complexity of O(2*n) vs O(n) still stand.

jrm

1 Like

As I understand it, O(n) could require multiple passes, but the idea is that the work increases proportionally. If your solution requires 4 passes, it is O(4n), which gets reduced to O(n). It isn’t a measure of the number of passes but of how the work scales. Unless I’m mistaken.

1 Like

Big-O notation only cares about how the cost grows with the amount of data n.
So O(2n) is identical to O(n), because the point is that if the amount of data n doubles, you have double the “need” in either time or space → a linear relationship.

O(2n) still refers to a linear relationship between input data and need. And that is what matters for judging the efficiency of code.
If you want to go for micro-optimization, you wouldn’t bother with big-O notation, as it is WAY too broad.
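For reference, the standard formal definition makes this precise:

```latex
% f is O(g) iff f is eventually bounded by a constant multiple of g:
f(n) = O(g(n)) \iff \exists\, c > 0,\ \exists\, n_0 \text{ such that } f(n) \le c \cdot g(n) \text{ for all } n \ge n_0
% With f(n) = 2n and g(n) = n, the witness c = 2 works, so 2n = O(n).
```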

2 Likes

Yes, that’s true.

The correction I was referring to was needing multiple passes through the data for an O(n) solution to the maximum subarray problem. That’s not true – a single pass is sufficient (see Kadane’s algorithm).

So mea culpa for the confusion! To (hopefully) clarify:

  1. Maximum subarray is solvable in O(n) with a single pass (see the sketch below).
  2. In general, O(c*n) is equivalent to O(n) for constant c. This is because complexity analysis is asymptotic. More formally, there’s a limit in the definition of O(). So any constant factors or lower-order additive terms are irrelevant.
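And here is a minimal sketch of Kadane’s algorithm in Python (standard formulation; the names are mine, and it assumes a non-empty input as the problem guarantees):

```python
def max_subarray(nums):
    # Kadane's algorithm: one pass, O(n) time, O(1) extra space.
    # best_here is the largest sum of a subarray ending at the
    # current element: either extend the previous subarray, or
    # start a new one at the current element.
    best = best_here = nums[0]
    for x in nums[1:]:
        best_here = max(x, best_here + x)
        best = max(best, best_here)
    return best
```

For example, max_subarray([-2, 1, -3, 4, -1, 2, 1, -5, 4]) returns 6, from the subarray [4, -1, 2, 1].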

jrm