[ElasticsearchLogstashHandler] There seems to be an issue with the version 5.3.11 of http-client ? #44334


Closed
adaniloff opened this issue Nov 29, 2021 · 12 comments

Comments

@adaniloff

adaniloff commented Nov 29, 2021

Symfony version(s) affected

5.3.11

Description

Hello there; we recently upgraded from 5.2.12 to 5.3.11.

I'm not really sure what's happening, but logging to ES with curl + ElasticsearchLogstashHandler is now causing some trouble.

Our usecase

In our case, while running doctrine:fixtures:load --append, some info-level logs are generated. However, we now get an error from the http-client package because of those logs.

Our investigation

The issue seems to happen when the stack tries to push logs through ElasticsearchLogstashHandler (monolog-bridge) on top of the curl-based http-client.

The error was introduced by symfony/http-client@cf34137, but I doubt this commit is the real issue.

Sometimes, we even get the following error:

Could not push logs to Elasticsearch:
Symfony\Component\HttpClient\Exception\TimeoutException: Idle timeout reached for "http://elastic:changeme@elasticsearch:9200/_bulk". in /srv/api/vendor/symfony/http-client/Chunk/ErrorChunk.php:65
Stack trace:
#0 /srv/api/vendor/symfony/monolog-bridge/Handler/ElasticsearchLogstashHandler.php(157): Symfony\Component\HttpClient\Chunk\ErrorChunk->isFirst()
#1 /srv/api/vendor/symfony/monolog-bridge/Handler/ElasticsearchLogstashHandler.php(147): Symfony\Bridge\Monolog\Handler\ElasticsearchLogstashHandler->wait(true)
#2 [internal function]: Symfony\Bridge\Monolog\Handler\ElasticsearchLogstashHandler->__destruct()
#3 {main}

I made a repository https://github.com/adaniloff/symfony-http-client-issue/blob/main/README.md with a Docker stack to reproduce the issue, since the case is not easy to set up (it requires a specific version of SF5 + ES + POSTing logs to ES).

How to reproduce

Reproducing the issue requires:

  • SF http-client 5.3.11 + monolog-bridge ^3.0
  • an elasticsearch client
  • a monolog handler which forwards info-level logs to the elasticsearch client (a minimal sketch of such a setup is shown below)

See https://github.com/adaniloff/symfony-http-client-issue/blob/main/README.md for an easy setup (requires docker/docker-compose).
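
For illustration, here is a minimal sketch of that kind of setup (assumed wiring, not the reproducer's exact code; the constructor arguments follow the 5.3 signature of ElasticsearchLogstashHandler):

use Monolog\Logger;
use Symfony\Bridge\Monolog\Handler\ElasticsearchLogstashHandler;

// Forward info-level (and above) logs to the ES endpoint seen in the stack trace.
$handler = new ElasticsearchLogstashHandler(
    'http://elastic:changeme@elasticsearch:9200', // endpoint
    'monolog',                                    // index name
    null,                                         // let the handler create its own HttpClient (curl-based when ext-curl is available)
    Logger::INFO                                  // minimum level to forward
);

$logger = new Logger('app');
$logger->pushHandler($handler);
$logger->info('some info-level log'); // buffered, then flushed by wait() (called from __destruct())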

Possible Solution

No response

Additional Context

No response

@nicolas-grekas
Member

Thanks for the reproducer!
I got a could not find driver error when I followed your instructions :(
But now I read that the reproducer could be simplified.
Can you do it please? The smaller the better.

@adaniloff
Author

adaniloff commented Nov 29, 2021

Thanks for the reproducer! I got a could not find driver error when I followed your instructions :( But now I read that the reproducer could be simplified. Can you do it please? The smaller the better.

Well, I'm on it. I made an update (not pushed to my repo yet), but the issue does not happen anymore (I ran composer remove on the Doctrine fixtures bundle)!

This is really weird ... I will try to push my tests further...

Regarding your error, do you see those messages (especially the "Idle timeout" one), as in this screenshot:

[Screenshot from 2021-11-29: console output showing the "Idle timeout" messages]

If yes, the issue occurs for you too, even though the setup is not working as expected: since the driver is not found, a warning/error message is triggered and pushed to the ES stack.

edit: have you run make up && make init && make install? If not, that may be why the stack is not working well.

@adaniloff
Author

adaniloff commented Nov 29, 2021

I reproduced the issue without the database.

You can clone https://github.com/adaniloff/symfony-http-client-issue/blob/feat/only-the-cmd/README.md (another branch of the same repo).

First: clean your containers with make down.

Then do the following:

  • checkout on the branch specified above
  • make init && make up && make install
  • docker-compose exec php bin/console cache:clear
  • docker-compose exec php bin/console app:orange-gnome --level=warning

The last command may need to be run multiple times before the issue happens. If the issue does not occur, clear the cache and re-run the command.

edit: I've updated the README.md on the repo itself with further explanations

edit2: the issue seems to appear 'randomly'. It clearly seems related to the time taken to push the logs into the ES stack. In the ElasticsearchLogstashHandler class, the following seems to be the issue:

// in the "wait" method
$this->client->stream($this->responses, $blocking ? null : 0.0) // the value "0.0" seems to be causing the issue
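
For context, the surrounding wait() method looks roughly like this (paraphrased and simplified from monolog-bridge 5.3, not the exact source; frame #0 of the stack trace above points at the isFirst() call):

// ExceptionInterface is Symfony\Contracts\HttpClient\Exception\ExceptionInterface
private function wait(bool $blocking): void
{
    // A 0.0 timeout means "do not block": stream() immediately yields
    // timeout chunks for responses that are not finished yet.
    foreach ($this->client->stream($this->responses, $blocking ? null : 0.0) as $response => $chunk) {
        try {
            if ($chunk->isTimeout() && !$blocking) {
                continue; // not done yet; retried on the next wait() call
            }
            if (!$chunk->isFirst() && !$chunk->isLast()) {
                continue;
            }
            if ($chunk->isLast()) {
                unset($this->responses[$response]);
            }
        } catch (ExceptionInterface $e) {
            // An ErrorChunk rethrows its stored exception from isFirst(),
            // which produces the "Could not push logs to Elasticsearch" message.
            unset($this->responses[$response]);
            error_log(sprintf("Could not push logs to Elasticsearch:\n%s", (string) $e));
        }
    }
}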

@adaniloff
Author

Hi, any news on this issue?
I'd be happy to contribute, but I have no clue what's really going on.

@m-wisniewski

Hi! I'd like to note that the same problem appears in my project, but with PHP 8.0.13, Symfony 5.4.0, and http-client 5.4.0. I'm using Docker as well, with Symfony, Caddy, RabbitMQ, Elasticsearch, and Kibana each in a separate container.
I've noticed the very same thing: sometimes the problem does not occur, but there is no discernible pattern.
Neither the Elasticsearch nor the PHP container seems to log the problem; the error message shows up in the console during any action that ends up logging debug info (a simple cache clear or composer require {package}). However, when I purposely log a critical error, it is sent to Elasticsearch correctly: it's handled and shows up in Kibana. Only the bulk actions are the problem.

@adaniloff
Author

@m-wisniewski you can try composer require symfony/http-client:5.2.* to temporarily work around the issue. This is not ideal, but it works 🤷‍♂️.

@m-wisniewski

@adaniloff Actually, I've disabled sending debug-level logs to the Elasticsearch handler. I still have those saved to a file and don't really need them for now, since I'm working on the project locally and have Xdebug. Nonetheless, thanks a lot for your advice!

@nicolas-grekas
Member

Thanks for the reproducer; it allowed me to focus on finding a fix (which took me a few hours, and will take a few more to try to remove a circular loop I found in the meantime).

See #44601

Dropping https://github.com/sponsors/nicolas-grekas/ here just in case ;)

nicolas-grekas added a commit that referenced this issue on Dec 13, 2021:
[HttpClient] Fix closing curl-multi handle too early on destruct (nicolas-grekas)

This PR was merged into the 4.4 branch.

Discussion
----------

[HttpClient] Fix closing curl-multi handle too early on destruct

| Q             | A
| ------------- | ---
| Branch?       | 4.4
| Bug fix?      | yes
| New feature?  | no
| Deprecations? | no
| Tickets       | Fix #44334
| License       | MIT
| Doc PR        | -

For some reason, the garbage collector can decide to destruct the `CurlClientState` before the responses that reference it.
When this happens, the curl-multi handle is closed and responses end up in a broken state.
This fixes it by not closing the multi-handle on destruct/reset.

This also fixes configuring the multi-handle on reset.
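
A hypothetical sketch of the hazard (illustrative class names, not Symfony's actual code):

class SharedCurlState
{
    public \CurlMultiHandle $multi;

    public function __construct()
    {
        $this->multi = curl_multi_init();
    }

    public function __destruct()
    {
        // Pre-fix: closing the multi handle here was the bug. The GC gives no
        // ordering guarantee inside a reference cycle, so this destructor can
        // run while Response objects still stream through $this->multi.
        // curl_multi_close($this->multi);
        // Post-fix: leave the handle open; PHP frees the CurlMultiHandle
        // object itself once the last reference to it disappears.
    }
}

class Response
{
    public function __construct(private SharedCurlState $state)
    {
    }

    public function __destruct()
    {
        // May run *after* SharedCurlState::__destruct() in a GC cycle; if the
        // multi handle had been closed there, finishing this response would
        // fail, e.g. with the "Idle timeout reached" error quoted above.
    }
}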

Commits
-------

c0602fd [HttpClient] Fix closing curl-multi handle too early on destruct
@adaniloff
Author

Thanks @nicolas-grekas for the fix. By the way, great job at Symfony Live 👍

@adaniloff
Author

Sorry to repost, but we're still having the issue.
We tried composer update to the latest versions of Symfony (both 5.3.x and 5.4.x), but we still get the error.

Has the fix been released? Could we have more visibility on that (since this ticket is now closed, I don't really know where to get the info)?

I'm not trying to push you or anything; I would just like to know if I misunderstood something, since the only merge request I can see targets 4.4?

Thanks a lot for your time

@derrabus
Member

The bugfix has been merged into all maintained branches but has not shipped in a stable release yet. Does that answer your question?

@adaniloff
Author

The bugfix has been merged into all maintained branches but has not shipped in a stable release yet. Does that answer your question?

Yes, it does; thanks to both of you for your patience. Happy end-of-year holidays 🎅 🎁!
