Ideas for improving ptrack's performance #24

Open
vegebird opened this issue Sep 1, 2022 · 0 comments

@vegebird (Contributor) commented Sep 1, 2022

Hello everyone,

I have some ideas to share that may benefit ptrack. I'd like to get your feedback before starting any coding or opening a PR, or to learn whether these are already in ptrack's development plans.

1. Compress the ptrack.map file.

ptrack.map.mmap is no longer used since the switch from the mmap() system call to PostgreSQL shared memory, which already saves the time needed to copy the ptrack.map file at startup.

I have to set a larger value for ptrack.map_size when $PGDATA is larger, e.g. 1024 MB for 1 TB or 40960 MB for 40 TB, in order to track the changed blocks and reduce hash collisions. In that case, a lot of I/O on the ptrack.map file is required whenever PostgreSQL restarts or runs a checkpoint, and it takes a long time. Furthermore, this may affect switchover/failover of the PostgreSQL cluster due to timeouts.

How about compressing ptrack.map in shared memory before writing it to the physical file at checkpoint time, and decompressing it back into shared memory when the file is loaded at startup?
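
To make the idea concrete, here is a minimal sketch of what the checkpoint/startup path could look like. This is not ptrack's actual code: the function names are made up, and LZ4 is just one possible codec (zlib or PostgreSQL's pglz could be used the same way).

```c
/*
 * Hypothetical sketch: compress the in-memory map before it is written
 * out at checkpoint, and decompress it again at startup.  LZ4 is used
 * as an example codec; compress_map()/decompress_map() are illustrative
 * names, not existing ptrack functions.
 */
#include <lz4.h>
#include <stdbool.h>
#include <stdlib.h>

/* Compress `map` (map_size bytes) into a freshly allocated buffer.
 * Stores the compressed length in *out_len; returns NULL on failure. */
static char *
compress_map(const char *map, size_t map_size, int *out_len)
{
    int   bound = LZ4_compressBound((int) map_size);
    char *buf   = malloc(bound);

    if (buf == NULL)
        return NULL;

    *out_len = LZ4_compress_default(map, buf, (int) map_size, bound);
    if (*out_len <= 0)
    {
        free(buf);
        return NULL;        /* compression failed, fall back to a raw write */
    }
    return buf;
}

/* Decompress a buffer read from disk back into the shared-memory map. */
static bool
decompress_map(const char *compressed, int comp_len,
               char *map, size_t map_size)
{
    int decoded = LZ4_decompress_safe(compressed, map,
                                      comp_len, (int) map_size);

    return decoded == (int) map_size;   /* must restore the full map */
}
```

The compressed length would also have to be recorded in the file header so that the startup path knows how many bytes to read and can detect a truncated file.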

2. Use multiple threads in ptrack_get_pagemapset() to scan the files under $PGDATA concurrently.

When $PGDATA is large, a single process scanning the files sequentially looks slow. I want to start multiple worker threads on the first call of ptrack_get_pagemapset(); the workers would scan and hash the data files and push the resulting tuples into a shared-memory queue, from which each subsequent call of ptrack_get_pagemapset() would take them, protected by a proper mutex and condition variable.
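
A minimal sketch of the producer/consumer queue I have in mind is below. The type and function names are purely illustrative, and a real implementation would need to account for PostgreSQL's restrictions on using threads inside a backend (background workers plus a shm_mq might end up being the more appropriate mechanism), but it shows the mutex/condition-variable pattern.

```c
/*
 * Hypothetical sketch of the queue that worker threads fill and
 * ptrack_get_pagemapset() drains.  All names are illustrative.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct PagemapItem
{
    char        relpath[1024];   /* relative path of the scanned data file */
    void       *pagemap;         /* bitmap of changed blocks for that file */
    struct PagemapItem *next;
} PagemapItem;

typedef struct PagemapQueue
{
    PagemapItem    *head;
    PagemapItem    *tail;
    int             active_workers;  /* set to the thread count at startup */
    pthread_mutex_t lock;
    pthread_cond_t  nonempty;
} PagemapQueue;

/* Producer side: called by a worker after scanning and hashing one file. */
static void
queue_push(PagemapQueue *q, PagemapItem *item)
{
    pthread_mutex_lock(&q->lock);
    item->next = NULL;
    if (q->tail)
        q->tail->next = item;
    else
        q->head = item;
    q->tail = item;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Producer side: called by a worker when it has no more files to scan. */
static void
queue_worker_done(PagemapQueue *q)
{
    pthread_mutex_lock(&q->lock);
    q->active_workers--;
    pthread_cond_broadcast(&q->nonempty);  /* wake a consumer waiting on an empty queue */
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: called on each ptrack_get_pagemapset() invocation; blocks
 * until an item is available, returns NULL once all workers are done. */
static PagemapItem *
queue_pop(PagemapQueue *q)
{
    PagemapItem *item;

    pthread_mutex_lock(&q->lock);
    while (q->head == NULL && q->active_workers > 0)
        pthread_cond_wait(&q->nonempty, &q->lock);

    item = q->head;
    if (item)
    {
        q->head = item->next;
        if (q->head == NULL)
            q->tail = NULL;
    }
    pthread_mutex_unlock(&q->lock);
    return item;
}
```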

Those are my thoughts; I'm looking forward to your comments.

Thanks,
vegebird
