Backups in the couch_server message queue are often due to frequent database opens and closes, especially when the db handle LRU is full and there are not enough idle handles available to evict.
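One quick way to confirm this signature is to look for processes with large mailboxes from a remsh into the node. This is a sketch assuming the `recon` library is on the code path (it ships with CouchDB 3.x):

```erlang
%% From a remsh into the CouchDB node.
%% List the top 10 processes by mailbox size; a couch_server process
%% near the top of this list matches the backup described above.
recon:proc_count(message_queue_len, 10).
```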
Disable `idle_check_timeout` (this setting was later removed altogether): `[couchdb] idle_check_timeout = infinity`.
Try toggling `update_lru_on_read`: `[couchdb] update_lru_on_read = false`. If it is currently set to `true`, try setting it to `false`, and vice versa. The better setting depends on your traffic pattern.
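Assuming a stock 3.x layout, both knobs live in the `[couchdb]` section of `local.ini`; a minimal sketch:

```ini
[couchdb]
; Removed in later releases; on 3.3.x, infinity disables the idle-handle check.
idle_check_timeout = infinity
; Flip this relative to its current value and compare; the best
; setting is workload-dependent.
update_lru_on_read = false
```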
Increase the number of CPUs (schedulers) available if possible. couch_server processes are sharded across the available schedulers, so having 32 schedulers instead of 16 spreads the open calls and the LRU across 32 couch_server processes.
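The scheduler count defaults to the host's core count but can be pinned with the Erlang VM's `+S` flag, e.g. in `vm.args` (sketch; match the value to your core count):

```
# Run 32 schedulers (and 32 online), which also shards
# couch_server across 32 processes.
+S 32:32
```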
Inspect your logs to see if there is anything timing out or crashing constantly.
We were able to `halt/1` one of the couch VMs, so we'll poke through the crash dump and see if anything jumps out.
I'm also working on a test tool to load couch the way we were seeing it and try to reproduce this reliably. I'll take your points above and incorporate them into our configs.
We are seeing ever-worsening performance in CouchDB 3.3.3
Description
Over time, queries to couch take longer and eventually start returning 500s, and performance continues to degrade.
We've found a process with a growing mailbox:
Looking at the linked processes, we see a lot of db updaters that appear to be stuck in do_call:
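For the record, this is the kind of remsh inspection we used on a suspect pid (a sketch; the pid below is a placeholder, and `process_info/2` with an item list is standard Erlang):

```erlang
%% Pid of a suspect process, e.g. taken from recon or from the
%% linked-process list above (placeholder value).
Pid = pid(0, 1234, 0),
%% Shows what the process is executing and how many messages
%% (blocked callers among them) are queued behind it.
erlang:process_info(Pid, [registered_name,
                          message_queue_len,
                          current_function,
                          current_stacktrace]).
```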
Steps to Reproduce
This develops over time but appears correlated with a number of tasks we run at the beginning of the month.
Expected Behaviour
Don't lock up.
Your Environment
Additional Context
It's a 3-node cluster and we see this on all three nodes.