QPM Limit Bypass List apparently not being honored #1181
Thanks for reporting this. The bypass list is being honored, but the limit logs are still being written based on the collected stats without checking the bypass list. Will get this fixed in the next update by adding a check for the bypass list.
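For illustration only, a minimal Python sketch of the kind of check described; the server itself is not written in Python, and every name here is hypothetical:

```python
import ipaddress

def should_log_rate_limit(client_ip: str, bypass_entries: list[str]) -> bool:
    """Hypothetical helper: return False when the client matches a bypass
    entry, so no QPM rate-limit log line is written for it."""
    addr = ipaddress.ip_address(client_ip)
    for entry in bypass_entries:
        # Bypass entries may be single addresses or CIDR subnets.
        if addr in ipaddress.ip_network(entry, strict=False):
            return False
    return True

# A bypassed client should produce no rate-limit log entry.
assert not should_log_rate_limit("203.0.113.7", ["203.0.113.0/24"])
```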
How/where did you get those logs?
Thank you in advance
@bcookatpcsd These logs are in the DNS log file, which can be viewed in the admin panel's Logs section.
🤦 I forgot about those.. ty (I knew that..) but recalling it was a problem.. lol
I believe the default is 6000, and I couldn't understand how the clients were doing 6k rq/m. 600 is totally possible.. certainly within a /24 (defaults) (the drop is NXDOMAIN to 0.0.0.0). But those top hosts were doing 10k until I found the backlog of 100 and raised it.. I think the 6k does not mean 6k, as evidenced in the logs where it says 600.
@bcookatpcsd The QPM limit is different for positive requests and for requests that generate an error response. The error QPM limit is by default set to 600 q/m. It's useful for authoritative DNS servers to rate limit resolvers that are causing too many error responses.
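To make the two limits concrete, here is a hedged Python sketch of two independent per-minute counters, one for positive responses and one for error responses; the 6000 and 600 q/m values are the defaults discussed in this thread, while the counter logic itself is purely illustrative:

```python
from collections import defaultdict

QPM_LIMIT = 6000        # positive responses per minute (default discussed above)
QPM_ERROR_LIMIT = 600   # error responses per minute (default discussed above)

positive_counts = defaultdict(int)  # keyed by (subnet, minute)
error_counts = defaultdict(int)

def record_response(subnet: str, minute: int, is_error: bool) -> bool:
    """Count one response; return True when the subnet should be rate limited.
    A subnet can stay well under the 6000 q/m limit yet still be blocked
    by exceeding the separate 600 q/m error limit."""
    key = (subnet, minute)
    if is_error:
        error_counts[key] += 1
        return error_counts[key] > QPM_ERROR_LIMIT
    positive_counts[key] += 1
    return positive_counts[key] > QPM_LIMIT
```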
OH.. so the errors are what caused the lock, not the queries (assuming NXDOMAIN is treated as an error, which totaled more than 600 and then caused the block?). (If so) that completely makes sense..
I thought I had everything (mostly) worked out and accounted for (yesterday) when I fully implemented it.. I couldn't find those logs to show what those dropped queries were.. (couldn't find 'drop' in the administration screens..) then figured it must be the QPM limits.. (I think that was 500+ qps) (no gaps in the graphing.. nice) Anyway.. thank you for all the details. Sorry for hijacking the thread.. Greatly appreciated.
Thanks, @ShreyasZare! The norm would be to keep this issue open until you roll out the fix, correct?
Only FormatError, ServerFailure, or Refused responses are considered error responses.
Dropped responses are not logged anywhere; they are only counted in stats. This is to avoid filling the log file/db when there is some kind of attack.
Note that these QPM limits are not enforced per client but per subnet, which defaults to /24. So it could be two or more clients in the same subnet causing the limit to be exceeded.
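As an illustration of that per-subnet grouping, a hedged Python sketch assuming the default /24 IPv4 group size (not the server's actual code):

```python
import ipaddress

QPM_LIMIT_IPV4_PREFIX = 24  # default subnet group size discussed above

def subnet_key(client_ip: str) -> str:
    """Map an IPv4 client address to the /24 it is counted under, so
    192.0.2.10 and 192.0.2.200 share one QPM counter."""
    network = ipaddress.ip_network(
        f"{client_ip}/{QPM_LIMIT_IPV4_PREFIX}", strict=False)
    return str(network)

print(subnet_key("192.0.2.10"))   # 192.0.2.0/24
print(subnet_key("192.0.2.200"))  # 192.0.2.0/24 -> same counter
```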
You're welcome.
You're welcome. Yes, keep it open. When the fix is available, I will post here and close the issue so that anyone tracking this issue gets a notification.
Hi @ShreyasZare!
I have upgraded the DNS server on two VPSes I use from v12.2.1.0 to v13.1.1.0 (v13 added a feature to log clients that are being rate limited due to QPM limits).
Because I have set such a limit, for my own DNS resolver usage I have a script which checks whether the ISP has changed the public IPv4 address assigned to my router; if it has changed, the script accesses the DNS server API to update the "QPM Limit Bypass List" field.
This is the API call:
https://${DnsServer}:53443/api/settings/set?token=${Token}&qpmLimitBypassList=${encodedIpv4},${encodedIpv6}
The script has been working as expected. (If I intentionally "mess up" the bypass config by changing entries to something else, the script "corrects" it.)
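For reference, a hedged Python sketch of such an updater, built around the API call shown above; the public-IP lookup service, hostname, and token handling are my own assumptions, not part of the actual script:

```python
import urllib.parse
import requests

DNS_SERVER = "dns.example.net"  # assumption: your DNS server's hostname
API_TOKEN = "REPLACE_ME"        # assumption: an API token with settings access

def update_bypass_list(ipv4: str, ipv6: str) -> None:
    """Set the QPM Limit Bypass List via the /api/settings/set call above."""
    encoded = ",".join(urllib.parse.quote(ip, safe="") for ip in (ipv4, ipv6))
    url = (f"https://{DNS_SERVER}:53443/api/settings/set"
           f"?token={API_TOKEN}&qpmLimitBypassList={encoded}")
    requests.get(url, timeout=10).raise_for_status()

# Fetch the current public IPv4 (the lookup service is an assumption),
# then push it together with a known IPv6 address.
current_ipv4 = requests.get("https://api.ipify.org", timeout=10).text.strip()
update_bypass_list(current_ipv4, "2001:db8::1")
```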
With v13, I noticed that the configured subnet for the public IPv4 address I use on the router has been appearing in the rate-limit logs like so:
I trimmed the logs; there are several other subnets being rate limited too, but it's fine that they are, since there is no bypass configured for them (they're not related to the public IPv4 address I'm using).
The related QPM configuration is set like this:
Would you have any ideas on what could be wrong or suggestions on what to check for further troubleshooting?
Thanks!