add streaming types to ASF scaffold and APIs #6552
Really nice set of changes! This is basically fixing the last real big limitation we had in ASF! 🥳
I only had really small nitpicks and only on some tests.
It's cool that the parser and serializer basically didn't need to be changed.
The newly generated API seems great, and I double-checked that all method signatures that are implemented have also been adjusted.
tested merge with #6583 and seems to work fine
metric changes look good! 😄 thank you!
This PR updates ASF to correctly handle smithy streaming types. This is one of the last big changes to ASF, and will unblock the S3 migration.
The changes include:
- streaming types in the ASF scaffold and the regenerated service APIs
- a small change to the `MetricHandler` (see details below) /cc @steffyP

some details on the API
After several discussions with @alexrashed, we decided not to make the service-level shapes the streaming types, but to only type hint the members of request/response objects.

- request members with the streaming trait are typed as `typing.IO[bytes]`, which will essentially be set to `werkzeug.Request.stream` by the parser. This will require the most changes, as we need to update all implementing services that expect `payload` to be `bytes` to use `payload.read()` instead, since `payload` will now be a stream (see the sketch after this list).
- response members with the streaming trait are typed as `Union[bytes, IO[bytes], Iterable[bytes]]`, making it possible to deal with the three cases outlined in #6527 (implement AWS streaming trait for ASF). These three types are also supported by the `werkzeug.Response` object, so nothing needs to be done in the serializer, and no changes are necessary in services.
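to make the typing concrete, here is a minimal sketch of what shapes look like under this scheme. `PutObjectRequest`, `GetObjectOutput`, and `put_object` are illustrative stand-ins, not the actual generated code:

```python
from typing import IO, Iterable, TypedDict, Union

class PutObjectRequest(TypedDict, total=False):
    Bucket: str
    Key: str
    # streaming trait: the parser sets this member to werkzeug.Request.stream
    Body: IO[bytes]

class GetObjectOutput(TypedDict, total=False):
    # all three variants are accepted by werkzeug.Response as-is
    Body: Union[bytes, IO[bytes], Iterable[bytes]]

def put_object(request: PutObjectRequest) -> None:
    # implementations that used to receive bytes now have to read the stream
    payload: bytes = request["Body"].read()
```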
metric handler change

with the new streaming trait, `ServiceRequest` dictionaries can contain file-like or other IO objects that cannot be serialized. in `record_parsed_request` we were copying the entire request using a deep copy, and that was now raising errors on certain requests. it turns out we weren't really using the entire request, but just collecting the names of the parameters that were in the request, so i simplified the logic to collect only the parameter names rather than the entire request, which is now a harmless call to `list(request.keys())` (see the sketch below). if we at some point need the entire request in the metric collection, we'll need to revisit this change.
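a rough before/after sketch of that change; names and surrounding code are simplified, the actual metric handler code differs in detail:

```python
import copy
from typing import Any, Dict

ServiceRequest = Dict[str, Any]

def record_parsed_request_before(request: ServiceRequest) -> None:
    # deep-copying breaks once members can be file-like IO objects,
    # e.g. TypeError: cannot pickle '_io.BufferedReader' object
    recorded = copy.deepcopy(request)

def record_parsed_request_after(request: ServiceRequest) -> None:
    # we only ever needed the parameter names, which is always safe to collect
    parameter_names = list(request.keys())
```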
limitations

http request IO is still a problem. when you consume an incoming request payload, e.g. in s3 `PutObject` using `body.read()`, you are consuming the stream underlying the http request, which is shared with werkzeug's `Request` object. that means, if you at a later point want to call `request.data` on the request object, you'll get an empty byte buffer back. although this is expected and correct behavior, it's also very inconvenient, and i can see a lot of hours going into debugging weird "why is my request empty" issues.

since quite a bit of the code base still builds on the assumption of always having access to the raw http request data, we may need to make some compromises with respect to performance and memory usage. ideally, we never read from the stream until we need it, because there's always a chance that we are going to proxy large payloads to some backend (like invoking a lambda), in which case you don't want to load everything into memory, keep it there, and then flush it to the outgoing socket again. but if you want reliable access to `request.data` even though we've previously run `body.read()`, then we're going to have to store the stream data somewhere and make it seekable, e.g. through `SpooledTemporaryFile`s. that would solve the problem of loading large payloads into memory (if they exceed a certain size, the file is rolled over to disk, otherwise it is kept in memory), but obviously still has the overhead when proxying. 🤷
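a minimal sketch of that idea, with `make_replayable` as a hypothetical helper that is not part of this PR:

```python
import tempfile
from typing import IO

def make_replayable(stream: IO[bytes], max_in_memory: int = 512 * 1024) -> IO[bytes]:
    # buffer the request stream in a spooled file: it lives in memory until it
    # grows past max_in_memory bytes, then it is rolled over to a file on disk
    spooled = tempfile.SpooledTemporaryFile(max_size=max_in_memory)
    for chunk in iter(lambda: stream.read(65536), b""):
        spooled.write(chunk)
    spooled.seek(0)  # the payload can now be re-read after another seek(0)
    return spooled
```

fixes #6527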