It seems to me we might be better off if we queued up queries or begin calls and handed out connections to run them. If no connections are free, let the queue grow as large as it needs to, with a setting like connectionTimeoutMillis to time entries out. If a query waits that long for an open connection, the pool throws a timeout error.
I am new to this project but would love to contribute and add this — would the team be open to a change this big? At this point I am going to switch back to pg-promise because I really need this functionality. We need to run far more requests than available SQL connections when we get spiky traffic. If I stayed with this library I would have to put my own pool in front of it and manually connect and release. Or maybe I am missing something? I love how much faster and nicer this library is, but this one thing is breaking the project I am working on now.
So what I am saying is: if we get this error, we don't fail but instead wait for another connection to open up to run the query, maybe just logging a warning. If another query is started and we haven't hit our connection limit, we try to connect again; but likewise, we queue up the query if no connections are available or the connect fails for any reason.
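To make the proposal concrete, here is a minimal sketch of the queueing behavior described above. This is not this library's actual API — the `Pool`, `acquire`, and `release` names are hypothetical, and a real implementation would manage actual sockets — it only illustrates the semantics: waiters queue unbounded for a free connection and are rejected after `connectionTimeoutMillis`.

```javascript
// Hypothetical sketch of the proposed behavior: callers wait in a FIFO
// queue for a free connection instead of failing immediately, and each
// waiter times out after connectionTimeoutMillis.
class Pool {
  constructor({ max, connectionTimeoutMillis }) {
    this.max = max;                          // hard connection limit
    this.timeout = connectionTimeoutMillis;  // max time to wait in the queue
    this.inUse = 0;
    this.queue = [];                         // waiters: { resolve, reject, timer }
  }

  acquire() {
    if (this.inUse < this.max) {
      this.inUse++;
      return Promise.resolve();
    }
    // No free connection: queue the request; the queue may grow unbounded.
    return new Promise((resolve, reject) => {
      const waiter = { resolve, reject };
      waiter.timer = setTimeout(() => {
        this.queue.splice(this.queue.indexOf(waiter), 1);
        reject(new Error('timeout exceeded while waiting for a connection'));
      }, this.timeout);
      this.queue.push(waiter);
    });
  }

  release() {
    const waiter = this.queue.shift();
    if (waiter) {
      clearTimeout(waiter.timer);
      waiter.resolve(); // hand the freed connection straight to the next waiter
    } else {
      this.inUse--;
    }
  }
}
```

A caller would then wrap each query in `acquire()`/`release()`, and spiky traffic simply lengthens the queue instead of producing immediate connection errors.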