Fix shmem allocation size. #42


Merged: 1 commit merged into postgrespro:master on Mar 31, 2022

Conversation

@rjuju (Contributor) commented Mar 25, 2022

MaxBackends is still 0 when _PG_init() is called, which means that we
don't request enough memory in RequestAddinShmemSpace(), while the rest of
the code sees (and allocates) a correct value.

In practice this is usually not a problem, as postgres adds an extra 100kB of
shared memory to cover small unaccounted usage, but it's better not to rely on
that slack too much.

Note that the value is still not guaranteed to be exact, as other modules'
_PG_init() could later change the underlying GUCs, but there is no API
available to handle that case accurately.
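
For illustration, a minimal sketch of the pattern this PR fixes (not the module's actual code; assume a module storing one uint64 queryId per backend):

```c
/* Minimal sketch, not pg_wait_sampling's actual code. */
#include "postgres.h"

#include "fmgr.h"
#include "miscadmin.h"			/* MaxBackends -- still 0 during _PG_init() */
#include "storage/ipc.h"		/* RequestAddinShmemSpace() */
#include "storage/shmem.h"		/* mul_size() */

PG_MODULE_MAGIC;

void		_PG_init(void);

void
_PG_init(void)
{
	/*
	 * Problem: MaxBackends is computed only after every library listed in
	 * shared_preload_libraries has run its _PG_init(), so it is still 0
	 * here and this request asks for (almost) no memory, while the
	 * shmem_startup_hook later sees the real value and allocates more.
	 * The allocation then only succeeds thanks to the ~100kB of slack
	 * postgres reserves for unaccounted shared memory usage.
	 */
	RequestAddinShmemSpace(mul_size(sizeof(uint64), MaxBackends));
}
```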

@maksm90 (Collaborator) commented Mar 25, 2022

@rjuju thanks for such a swift fix of #41. I'll review it shortly.

@maksm90 (Collaborator) commented Mar 25, 2022

Some notes on how this kind of estimation of the maximum number of processes applies to the pg_wait_sampling module.

  • This value is used to allocate shared memory only for storing queryId values (in the proc_queryids array), as in the pg_stat_kcache module. But unlike pg_stat_kcache, here we assign one queryId slot per PGPROC slot, so we have to use the formula for the size of the ProcGlobal->allProcs array. The last term of that sum, max_prepared_xacts, could be dropped, since we access the proc_queryids array from regular backends and iterate over it only up to ProcGlobal->allProcCount, which does not include that term (see the sketch after this list).
    In general it would be enough to allocate a MaxBackends-sized array and access the specific slot via a backendId-1 index, as is done in pg_stat_kcache.
  • Starting from PG14 we could take the queryId value from the PgBackendStatus entry in the shared BackendStatusArray and not allocate our own shared memory for this purpose. @rjuju what do you think about rewriting the storing and extraction of queryId that way in the future?
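
A rough sketch of the GUC-based estimate being discussed (the function name is made up for illustration; the exact set of terms depends on how the array is indexed and on the PostgreSQL major version):

```c
/* Illustrative sketch only; not the patch's exact code. */
#include "postgres.h"

#include "miscadmin.h"				/* MaxConnections, max_worker_processes */
#include "postmaster/autovacuum.h"	/* autovacuum_max_workers */
#include "replication/walsender.h"	/* max_wal_senders */
#include "storage/proc.h"			/* NUM_AUXILIARY_PROCS */

/*
 * Estimate how many per-process queryId slots are needed.
 *
 * MaxBackends is still 0 when _PG_init() runs, so recompute an equivalent
 * value from the underlying GUCs.  The sum has to be kept in sync with the
 * way InitProcGlobal() computes ProcGlobal->allProcCount: MaxBackends plus
 * the auxiliary processes, but not the prepared-transaction slots.
 */
static int
estimate_max_procs_count(void)
{
	int			count = 0;

	count += MaxConnections;
	count += autovacuum_max_workers + 1;	/* workers plus the launcher */
	count += max_worker_processes;
	count += max_wal_senders;				/* part of MaxBackends since PG 12 */
	count += NUM_AUXILIARY_PROCS;

	return count;
}
```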

@rjuju (Contributor, Author) commented Mar 26, 2022

The last term of that sum, max_prepared_xacts, could be dropped, since we access the proc_queryids array from regular backends and iterate over it only up to ProcGlobal->allProcCount, which does not include that term.

Agreed, and the code should have a comment saying that the value has to be kept in sync with the ProcGlobal->allProcCount initialization in InitProcGlobal(), so that anyone looking at the code understands why this value is used.

Starting from PG14 we could take the queryId value from the shared BackendStatusArray and not allocate our own shared memory for queryId values. @rjuju what do you think about rewriting the storing and extraction of queryId that way in the future?

Unfortunately it's not possible :( The community insisted on reporting only the top-level queryid there, for consistency with e.g. pg_stat_activity.query, even though I initially suggested reporting the current query. So we, extension owners, still have to allocate and handle our own queryid array. Unfortunately there isn't even a simple way to do it only once rather than once per extension.
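
To make the last point concrete, here is a rough sketch (hypothetical names, not pg_wait_sampling's actual hooks) of what "our own queryid array" looks like when indexed by backendId - 1, as mentioned above for pg_stat_kcache:

```c
/* Hypothetical sketch of a per-backend queryId slot, not the module's code. */
#include "postgres.h"

#include "executor/executor.h"	/* ExecutorStart_hook, standard_ExecutorStart */
#include "storage/backendid.h"	/* MyBackendId, InvalidBackendId */

/* Shared array with one slot per backend, allocated in shmem_startup_hook. */
static uint64 *proc_queryids = NULL;
static ExecutorStart_hook_type prev_ExecutorStart = NULL;

static void
sketch_ExecutorStart(QueryDesc *queryDesc, int eflags)
{
	/* Publish this backend's current queryId in its own slot. */
	if (proc_queryids != NULL && MyBackendId != InvalidBackendId)
		proc_queryids[MyBackendId - 1] = queryDesc->plannedstmt->queryId;

	if (prev_ExecutorStart)
		prev_ExecutorStart(queryDesc, eflags);
	else
		standard_ExecutorStart(queryDesc, eflags);
}
```

A real implementation would also clear the slot when execution ends and decide how concurrent readers should handle a slot that is being updated.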

@maksm90 (Collaborator) commented Mar 26, 2022

the code should have a comment saying that the value has to be kept in sync with the ProcGlobal->allProcCount initialization in InitProcGlobal(), so that anyone looking at the code understands why this value is used.

Yeah, it's a good idea to document the formula in that way.

Unfortunately it's not possible :( The community insisted on reporting only the top-level queryid there, for consistency with e.g. pg_stat_activity.query, even though I initially suggested reporting the current query. So we, extension owners, still have to allocate and handle our own queryid array.

It's a pity.

Unfortunately there isn't even a simple way to do it only once rather than once per extension.

Yes, in essence multiple extensions each have to maintain their own storage for the queryIds of currently executing queries in order to share these values between processes. And clearly, this is redundant.

@rjuju force-pushed the fix_maxbackends branch from 28f6e20 to 2b93374 on March 27, 2022 13:00
@rjuju (Contributor, Author) commented Mar 27, 2022

I just force-pushed the modifications we discussed. I tried to describe the problem with all its consequences, so that anyone who reads or reuses the code for something else will have all the information needed to know whether that approach will work for them too.

@rjuju force-pushed the fix_maxbackends branch from 2b93374 to 277a4e5 on March 27, 2022 13:03
@maksm90 (Collaborator) left a comment

I just force-pushed the modifications we discussed. I tried to describe the problem with all its consequences, so that anyone who reads or reuses the code for something else will have all the information needed to know whether that approach will work for them too.

Excellent! Thanks a lot. Approved.

@keremet merged commit 680f8db into postgrespro:master on Mar 31, 2022
@rjuju (Contributor, Author) commented Mar 31, 2022

Thanks!
