Description
Environment details
- OS: linux / centos7
- Java version: 1.9
- google-cloud-java version(s):
  - com.google.cloud:google-cloud-storage:1.52.0
  - com.google.cloud:google-cloud-core-http:1.52.0
  - org.apache.httpcomponents:httpclient:4.5.5
Steps to reproduce
At a high level we're trying to upload hundreds of objects to GCS quickly. We want to do this using an Apache HTTP client because we need to use a custom resolver. That said, I think this is broken with both Apache and other HTTP clients; it just may not manifest itself the same way.
- Use a custom transport factory builder in order to use Apache HTTP Client
- Allocate a PoolingHttpClientConnectionManager in your transport, set the pool size to 1
- Instantiate the Storage service using this transport factory
- Allocate a write channel
- Write to it
- Close it
- Allocate another write channel
- Try to write to it.
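The steps above can be sketched roughly as follows. This is a hedged repro sketch, not the exact code we run: the bucket/object names are placeholders, and it assumes application-default credentials are available. The transport wiring uses `ApacheHttpTransport` from google-http-client and `HttpTransportOptions` from google-cloud-core-http.

```java
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.apache.ApacheHttpTransport;
import com.google.cloud.WriteChannel;
import com.google.cloud.http.HttpTransportOptions;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class PoolExhaustionRepro {
  public static void main(String[] args) throws Exception {
    // Pool of exactly one connection so the leak surfaces immediately.
    PoolingHttpClientConnectionManager pool = new PoolingHttpClientConnectionManager();
    pool.setMaxTotal(1);
    pool.setDefaultMaxPerRoute(1);

    CloseableHttpClient client = HttpClients.custom().setConnectionManager(pool).build();
    HttpTransport transport = new ApacheHttpTransport(client);

    Storage storage = StorageOptions.newBuilder()
        .setTransportOptions(HttpTransportOptions.newBuilder()
            .setHttpTransportFactory(() -> transport)
            .build())
        .build()
        .getService();

    ByteBuffer data = ByteBuffer.wrap("hello".getBytes(StandardCharsets.UTF_8));

    // First upload succeeds ("my-bucket" is a placeholder)...
    try (WriteChannel ch = storage.writer(BlobInfo.newBuilder("my-bucket", "obj-1").build())) {
      ch.write(data.duplicate());
    }

    // ...but the second write stalls leasing a connection from the pool:
    // the final response of the first upload was never read or closed, so
    // its connection was never returned to the pool.
    try (WriteChannel ch = storage.writer(BlobInfo.newBuilder("my-bucket", "obj-2").build())) {
      ch.write(data.duplicate());
    }
  }
}
```

With a larger pool the same leak still happens; it just takes that many uploads before the pool is exhausted, which is why the size-1 pool makes the cleanest repro.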
Under the hood the client will try to find an available connection from the pool and fail. It thinks the one we used earlier is still in use.
If you run ss or netstat on the machine at this point you can see an established connection to GCS with ~165 bytes sitting in the socket's receive buffer.
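For example, something like the following (port and byte count will vary; 443 assumes HTTPS to GCS) shows the leaked connection, with the unread response bytes in the Recv-Q column:

```shell
# Established connections to port 443; Recv-Q is the number of bytes
# sitting unread in the socket's receive buffer (~165 in our case).
ss -tn state established '( dport = :443 )'

# Equivalent with netstat (second column is Recv-Q):
netstat -tn | grep ':443'
```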
The reason is that we never read the body of the response. See here:
In both the success and exception cases, the response is discarded without ever being read or closed.
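For reference, the usual pattern with google-http-client is to consume or disconnect the response on every path, so the underlying connection is returned to the pool. A minimal sketch (the URL is a placeholder; the point is the `ignore()`/`disconnect()` handling, not the request itself):

```java
import com.google.api.client.http.GenericUrl;
import com.google.api.client.http.HttpRequest;
import com.google.api.client.http.HttpResponse;
import com.google.api.client.http.HttpTransport;
import com.google.api.client.http.javanet.NetHttpTransport;

import java.io.IOException;

public class ConsumeResponse {
  public static void main(String[] args) throws IOException {
    HttpTransport transport = new NetHttpTransport();
    HttpRequest request = transport.createRequestFactory()
        .buildGetRequest(new GenericUrl("https://example.com/"));

    HttpResponse response = request.execute();
    try {
      // Read and discard the body so the connection can be reused.
      response.ignore();
    } finally {
      // disconnect() releases the underlying connection even if the
      // body was not fully consumed (e.g. on an exception path).
      response.disconnect();
    }
  }
}
```

If the client library did the equivalent of this around the upload's final response, the pooled connection would be released and the second write channel could proceed.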
The javadoc for HttpRequest's execute() method pretty clearly states that we need to do something with the response: