

Conversation

@rianwouters

fixes #101

@sstur
Collaborator

sstur commented Jun 15, 2016

Wow, this is a big PR! Did you mean to put all of those commits in this PR? (Because the commits tell a different story than the title)

I appreciate all the work you've done here, most of it looks great and significantly simplifies the logic.

However, I'm very hesitant to land such sweeping changes without a lot more testing. Take, for instance, the very first commit: that's a complicated (arguably convoluted) piece of code that absolutely needs tests. It would be easy to introduce a regression without realizing it, even if the code is peer reviewed.

I'd suggest you write tests as you go to be sure we're not breaking things as you refactor. But considering you've done a bunch of work already without touching the test directory, I have a different plan:

  • Let's land the fix for the original issue (with a test please).
  • Then let's roll these refactoring pieces up into a series of commits/pull-requests and land those on another branch.
  • Then when we're sure that branch is where we want it to be, we'll cut a release-candidate for a new major version.

I'd love to see continued work on this project (in fact I was working on a major refactor earlier this year also), but it's important we don't break what people are using in production.

I'd like to keep a stable 2.x branch with only critical patches and then a more active 3.x branch with work like yours.

How does that sound?

@rianwouters
Author

Sounds perfect, and I totally agree!
I believe the changes I made don't even solve that issue ;-)
The problem I'm still facing is that after a few days the number of open ports keeps increasing, until the server eventually runs out of ports.
Don't hesitate to be inspired, though :-)
Note that the very first commit (89af567) works around the other problem I submitted: the server crashing when there are more than a few thousand files. It breaks the concurrent stats feature, though. Why do you think it's convoluted?
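For context, one common way to avoid both problems at once (a crash when the directory holds thousands of files, and the loss of concurrency from fully serializing the stats) is to stat entries with a bounded concurrency limit. The sketch below is only an illustration of that general pattern, not the actual nodeftpd code; `statAll` and its parameters are hypothetical names:

```javascript
// Hypothetical sketch: stat a large list of entries with at most `limit`
// operations in flight at once. Neither the call-stack depth nor the number
// of concurrent operations grows with the size of the directory listing.
function statAll(names, statFn, limit, done) {
  const results = new Array(names.length);
  let next = 0;     // index of the next entry to start
  let active = 0;   // stats currently in flight
  let finished = 0; // stats completed so far

  function launch() {
    // Start entries until we hit the concurrency limit or run out.
    while (active < limit && next < names.length) {
      const i = next++;
      active++;
      statFn(names[i], (err, stat) => {
        results[i] = err ? null : stat; // keep going on per-file errors
        active--;
        finished++;
        if (finished === names.length) return done(null, results);
        launch(); // refill the pool with the next pending entry
      });
    }
  }

  if (names.length === 0) return done(null, results);
  launch();
}
```

Used with something like `statAll(fileNames, fs.stat, 16, cb)`, memory and stack use stay flat regardless of how many files are in the directory, while still keeping much of the concurrency benefit.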
BTW, is there any reason the current npm version is not on GitHub?

Greetings,

Rian




Development

Successfully merging this pull request may close these issues.

when the number of files exceeds 3170, the LIST command crashes

2 participants