[HttpClient] CurlHttpClient not closing file descriptors #60513

Closed
@digilist

Symfony version(s) affected

at least 6.4.x, 7.2.x

Description

When sending requests with the CurlHttpClient to different host names, the underlying file descriptors are never released, and eventually further requests can fail once the open file descriptor limit (ulimit -n) is hit.

Consider this example:

<?php

use Symfony\Component\HttpClient\CurlHttpClient;

require 'vendor/autoload.php';

$requests = [
    'https://symfony.com/',
    'https://github.com/',
    'https://google.com/',
    'https://stackoverflow.com/',
    'https://spotify.com/',
];

$client = new CurlHttpClient();
foreach ($requests as $url) {
    $response = $client->request('GET', $url);
    dump($url . ' ' . $response->getStatusCode());

    $fdCount = count(scandir('/proc/self/fd'));
    dump("Open file descriptors: {$fdCount}");
}

Executing this script gives me the following output:

"https://symfony.com/ 200"
"Open file descriptors: 12"
"https://github.com/ 200"
"Open file descriptors: 13"
"https://google.com/ 200"
"Open file descriptors: 15"
"https://stackoverflow.com/ 200"
"Open file descriptors: 16"
"https://spotify.com/ 200"
"Open file descriptors: 19"

As you can see, the number of open file descriptors keeps increasing, even though the response objects can be garbage collected.

Extending the requests array with further URLs on the same hosts does not increase the number of file descriptors, but adding URLs on new hosts does.

Once the file descriptor limit is reached, it is not possible to send further requests and I get the following error:

PHP Fatal error: Uncaught Symfony\Component\HttpClient\Exception\TransportException: Could not resolve host: {hostname}

It took me a while to trace this DNS resolution error back to the exhausted file descriptors, so this is a very subtle bug.

A workaround that solved the issue for me was creating a new HTTP client every x requests to stay below my file descriptor limit, but that is not really a good solution and requires being aware of this issue in the first place.
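Roughly, that workaround looks like this ($urls and the $maxRequestsPerClient threshold are placeholders, not values from this report):

<?php

use Symfony\Component\HttpClient\CurlHttpClient;

require 'vendor/autoload.php';

// Placeholder: any list of URLs spread across many different hosts.
$urls = ['https://symfony.com/', 'https://github.com/' /* , ... */];

// Placeholder threshold; pick something safely below the ulimit -n value.
$maxRequestsPerClient = 50;

$client = new CurlHttpClient();
$requestsSent = 0;

foreach ($urls as $url) {
    if ($requestsSent >= $maxRequestsPerClient) {
        // Replace the client so the old one (and the file descriptors it
        // holds) can be garbage collected before the limit is hit.
        $client = new CurlHttpClient();
        $requestsSent = 0;
    }

    $response = $client->request('GET', $url);
    $response->getStatusCode();
    ++$requestsSent;
}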

How to reproduce

You can lower the file descriptor limit for the current shell with e.g. ulimit -n 15. Afterwards, when you run the script above, you should see the error described above.

Instead of reducing the limit, you could also increase the number of hosts, but this might require a larger list depending on the current limit.

Possible Solution

No response

Additional Context

No response
