Make network based tests as concurrent as possible #4271
Conversation
Introduces tz_bots, trading slightly more complexity in a unit test for a ~55% shorter test duration.
Just a question for my understanding: is this intended to make us notice any tests that should instead go to
Instead of splitting test classes for tg.ext, I would rather just disable network access for tg.ext-tests completely and mock every interaction by TG.
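A rough sketch of what such mocking could look like — the `bot` fixture and the stubbed call below are assumptions for illustration, not the project's actual test helpers:

```python
from unittest.mock import AsyncMock

from telegram import User


# Sketch only: "bot" is assumed to be an existing fixture providing a telegram.Bot.
# The network call get_me() is replaced by a stub, so the test never hits the API.
async def test_without_network(monkeypatch, bot):
    monkeypatch.setattr(
        bot, "get_me", AsyncMock(return_value=User(id=1, first_name="stub", is_bot=True))
    )
    me = await bot.get_me()
    assert me.first_name == "stub"
```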
Which problem does this solve?
Sounds like an interesting thing to check out; however, I see great value in testing on the default setups of the different OSes.
Not sure if I see a notable benefit in this, but I have no objection if you can manage to work around test cases like
sounds interesting :) feel free to try it out!
Yes, and also there are currently some tests in there which don't access the network, but whose fixtures do access the network.
Good idea. Ideally one should be able to test locally without a network connection (<1 min with pytest-xdist) and be sure that the ones which do use the network pass.
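One possible way to make that split explicit, sketched here with assumed names (the project may already organize this differently): register a marker for network-dependent tests in `conftest.py` and deselect them locally.

```python
# conftest.py sketch (assumed, not the project's current setup):
# register a "network" marker so tests that talk to the real API can be deselected.
def pytest_configure(config):
    config.addinivalue_line(
        "markers", "network: test requires access to the Telegram API"
    )
```

A fast local run could then look like `pytest -n auto -m "not network"` (with pytest-xdist installed), while CI keeps running everything.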
hm, it is supposed to be faster (since there's no disk I/O) and safer, but we don't do much I/O so probably not worth it.
it is one of the possible reasons for the flakiness, and it is good practice to keep tests isolated from each other
👍
👍
I was at least wondering, since we e.g. already use pytest's tmp_path fixture.
👍
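For reference, the tmp_path fixture mentioned here gives each test its own fresh directory, which already buys a lot of isolation for file-based tests; a minimal example (the file name is arbitrary):

```python
# Plain pytest: tmp_path is a per-test pathlib.Path in a unique temporary directory.
def test_writes_and_reads_back(tmp_path):
    target = tmp_path / "data.bin"
    target.write_bytes(b"payload")
    assert target.read_bytes() == b"payload"
```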
Hey Harshil, I had a look at the changes that are here so far.
What would you think of converting the wishlist in the comments into an issue and addressing the points in multiple separate PRs? That would give us the benefit of the individual points earlier, probably make reviewing easier, and would also make it easier to distribute the open points onto different people :)
@@ -180,6 +180,23 @@ async def tz_bot(timezone, bot_info):
    return default_bot


@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
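For readers of this thread, a sketch of the idea behind the new fixture. The body below is an illustration only, not the diff's actual code; `make_bot` and the `Defaults(tzinfo=...)` usage are assumptions based on the existing tz_bot fixture:

```python
import pytest
from telegram.ext import Defaults


@pytest.fixture(scope="session")
async def tz_bots(tzinfos, bot_info):
    # Build one bot per timezone up front instead of parametrizing the fixture:
    # the bot setup then runs once per session rather than once per timezone
    # parameter, which is where the speed-up comes from.
    # make_bot is assumed to be the test suite's existing bot-construction helper.
    return [make_bot(bot_info, defaults=Defaults(tzinfo=tz)) for tz in tzinfos]
```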
- I recommend adding a comment in this fixture explaining why we use a list here instead of properly parametrizing (speed)
- do we still need the tz_bot fixture for anything else, or can it be removed, then?
- Can we make sure that the tests at least point out the timezone on failure? (see the sketch below)
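A hypothetical illustration of that last point (`some_check` is a placeholder, not code from the PR): zipping the bots with their timezones lets a failure message name the offending timezone without re-parametrizing the test.

```python
async def test_defaults_applied(tz_bots, tzinfos):
    for tz, bot in zip(tzinfos, tz_bots):
        ok = await some_check(bot)  # placeholder for the real per-bot assertion target
        assert ok, f"check failed for timezone {tz}"
```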
do we still need the tz_bot fixture for anything else, or can it be removed, then?
we do still use it in the NonRequest tests, where the time penalty is negligible.
Can we make sure that the tests at least point out the timezone on failure?
that would be hard to do without making it even more complex, but let me have another look
@@ -309,6 +326,18 @@ def tzinfo(request):
    return BasicTimezone(offset=datetime.timedelta(hours=hours_offset), name=request.param)


@pytest.fixture(scope="session")
def tzinfos():
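A sketch of what the fixture body might return, reusing the BasicTimezone helper visible in the surrounding diff; the specific offsets and names here are made up for illustration:

```python
import datetime

import pytest


@pytest.fixture(scope="session")
def tzinfos():
    # One entry per timezone the tz_bots fixture should cover; values are illustrative.
    # BasicTimezone is the helper defined alongside the tzinfo fixture above.
    return [
        BasicTimezone(offset=datetime.timedelta(hours=hours), name=name)
        for hours, name in ((0, "UTC"), (2, "Europe/Berlin"), (-7, "America/Phoenix"))
    ]
```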
what's the benefit of having this as a fixture? If I see correctly, it's used only in tz_bots and could also just be hardcoded there?
mm yeah, a bit of a premature optimization at this stage in the PR, but yes, I'll keep it in mind
Okay, though some of the points might be better done sequentially rather than concurrently, otherwise one would have to rewrite / resolve many conflicts. I constantly have to test and rewrite many times before I get the iteration right. Anyway, it's probably for the best that more than one person works on this.
# Conflicts:
#   tests/test_bot.py
Created a new issue for the other points. I will have to rebase this branch and remove commits which are not relevant to the purpose of this PR.
Fair enough. Sorry I couldn't complete it over several months.
Ideas implemented:
Benchmarks: