We're using core-dump-handler chart version v9.0.0 and running it on AWS EKS v1.31. Although we have increased request_mem and limit_mem to 256Mi, the core-dump-handler pods are still getting OOM-killed. Is this related to the new EKS version or the new core-dump-handler version? Thanks!
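For reference, this is roughly how we raised the values (a sketch only; the release name, repo alias, and namespace are placeholders for our setup, and the exact value paths should be checked against the chart's values.yaml):

```sh
# Sketch only: raise the agent's memory request/limit at upgrade time.
# "core-dump-handler/core-dump-handler" and "observe" are placeholders for
# our repo alias and namespace; the value paths below assume the keys named
# above sit under the daemonset section. Verify against the chart's values.yaml.
helm upgrade --install core-dump-handler core-dump-handler/core-dump-handler \
  --namespace observe \
  --set daemonset.request_mem=256Mi \
  --set daemonset.limit_mem=256Mi
```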
@No9 We haven't checked the logs, but the issue was resolved by rolling back to chart version v8.10.0.
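In case it helps anyone else, the rollback was essentially just pinning the chart version (again a sketch; release name, repo alias, and namespace are placeholders for our setup):

```sh
# Sketch: pin the chart back to the previous release until a fix is published.
# Release name, repo alias, and namespace are placeholders.
helm upgrade core-dump-handler core-dump-handler/core-dump-handler \
  --namespace observe \
  --version 8.10.0 \
  --reuse-values
```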
Here's the detailed log from the OOM-killed pod:
Process core-dump-agent (pid: 954575, oom_score: 132118, oom_score_adj: 998) triggered an OOM kill on itself. The process had reached 65536 pages in size.
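For what it's worth, assuming the default 4 KiB page size, 65536 pages works out to exactly the 256Mi limit we set, which suggests the agent is running into its container memory limit rather than node-level memory pressure:

$$
65536 \times 4\,\mathrm{KiB} = 262144\,\mathrm{KiB} = 256\,\mathrm{MiB}
$$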
Thanks very much - looking at the diff between v8.10.0 and v9.0.0: v8.10.0...v9.0.0
The main changes in the agent were to the deployment parameters, a change in the inotify API, and a bump in dependencies.
A candidate here is the tokio-scheduler dependency bump.
I'll downgrade those changes and publish 9.1.