
High TCP (Control) Ping but UDP ping Stable #6688

Closed
ravicant opened this issue Jan 9, 2025 · 13 comments
Labels
stale-support, support

Comments

@ravicant

ravicant commented Jan 9, 2025

The issue

My Mumble server is suddenly facing an issue where all users keep disconnecting or continuously reconnecting. I am noticing high TCP ping, but the UDP connection remains stable. This problem affects all users.

Note: I run a FiveM server which uses Mumble for voice chat, and there are almost always around 220 players online, so I am not able to figure out what the issue is, because there is no DDoS.
image

Mumble version

1.4.230

Mumble component

Server

OS

Linux

Additional information

The image has all the info; my client is 1.5.735.
image

@Krzmbrzl
Member

Krzmbrzl commented Jan 9, 2025

In cases where the Mumble TCP connection gets slow, do other TCP connections suffer from the same problem? I.e. is this problem even Mumble specific?

If it is, have you had the server running with this amount of users on the given hardware before? It might be that it is indeed Mumble itself that can't keep up with processing TCP packets. That would normally depend on the compute power of the used hardware.
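
One quick way to check this (just a sketch; it assumes the default Mumble control port 64738, the OpenBSD variant of netcat, and a placeholder hostname your.server) is to time bare TCP handshakes against the Mumble port and against some other TCP service on the same host:

```sh
# Time a plain TCP handshake to the Mumble control port (default 64738)
time nc -vz your.server 64738

# Compare against another TCP service on the same machine, e.g. SSH
time nc -vz your.server 22
```

If both are equally slow while UDP latency stays low, the problem is at the network or host level rather than in Mumble itself.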

@ravicant
Author

ravicant commented Jan 9, 2025

In cases where the Mumble TCP connection gets slow, do other TCP connections suffer from the same problem?

Yes

is this problem even Mumble specific?

Yes

have you had the server running with this amount of users on the given hardware before?

yes

It might be that it is indeed Mumble itself that can't keep up with processing TCP packets. That would normally depend on the compute power of the used hardware.

I have really good hardware where we are hosting this Mumble server in Docker.
Specs:
Intel(R) Xeon(R) E-2386G CPU @ 3.50GHz, 12 cores
RAM: 64 GB
1 Gbps up and down

We have never had this issue before.

  • I think it's happening because when 200 people get disconnected together and try to connect again, Mumble is rate limiting?

@Krzmbrzl
Member

Well, if TCP connectivity in general has a high ping, then this is very unlikely to be a Mumble issue. This sounds more like a general network issue, then.

Mumble does apply rate limiting, but on a per-user basis, not globally across the server, so this does not explain the issue you are seeing.

To me it sounds like for some reason the TCP connectivity to your server suffers, and as a consequence the ping gets so large that clients time out. Meanwhile, the general network connectivity of the server seems to remain good, given that UDP seems to be working.

Side note: at the moment the statistics shown in Mumble are only meaningful if you haven't been connected to the server for too long in a row, because the statistics are an average over the entire connection time.

@ravicant
Author

ravicant commented Jan 10, 2025

Is it my Mumble config? I have this Mumble server behind a proxy, so everyone connects through the proxy. But when this issue happens, even if I try the main IP, it's the same for every new or old user: high ping on TCP.

These are my configs:

  • MUMBLE_CONFIG_BANDWIDTH=128000
  • MUMBLE_CONFIG_USERS=2048
  • MUMBLE_CONFIG_MESSAGEBURST=5000000
  • MUMBLE_CONFIG_MESSAGELIMIT=1000000
  • MUMBLE_CONFIG_OPUSTHRESHOLD=0
  • MUMBLE_CONFIG_CHANNELNAME=(.*)+
  • MUMBLE_CONFIG_USERNAME=(.*)+
  • MUMBLE_CONFIG_AUTOBANATTEMPTS=0
  • MUMBLE_CONFIG_AUTOBANSUCCESSFULCONNECTIONS=false
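
For reference, a minimal sketch of how such MUMBLE_CONFIG_* variables are typically passed to the mumble-docker container (the image name, container name, and port mapping here are assumptions, not taken from this setup):

```sh
# Sketch only: image name and ports are assumptions for a mumble-docker setup
docker run -d --name mumble-server \
  -p 64738:64738/tcp -p 64738:64738/udp \
  -e MUMBLE_CONFIG_USERS=2048 \
  -e MUMBLE_CONFIG_BANDWIDTH=128000 \
  -e MUMBLE_CONFIG_MESSAGEBURST=5000000 \
  -e MUMBLE_CONFIG_MESSAGELIMIT=1000000 \
  mumblevoip/mumble-server:latest
```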

So, might I have to increase something? Because I think this issue only happens when everyone gets disconnected somehow and tries to reconnect. I assume it might be a DDoS on the server, as more than 100 people are trying to connect.

What I’ve tried:
I thought it might be the server, so I turned off the old server, made a new server, and didn’t make changes to the proxy. I checked the ping and everything, and there was no issue. Then I decided to forward all those users to the new server I created. The moment I did that, the whole server started lagging, and the TCP ping went up.

Then, I decided to restart my FiveM server so everyone could connect one by one. After that, I didn’t even have to restart the Mumble server. Once everyone was disconnected and reconnecting slowly, the server was working fine.

@Krzmbrzl
Member

Is it my Mumble config?

I don't think so but to make sure you could try whether using a default config file resolves the issue 🤔

So, might I have to increase something? Because I think this issue only happens when everyone gets disconnected somehow and tries to reconnect. I assume it might be a DDoS on the server, as more than 100 people are trying to connect.

In principle this sounds like a reasonable explanation. However, I would be surprised to see that a Mumble server can be DDoS'd by only 100 clients trying to connect. The only bottleneck that I can think of that might cause this would be the database connection, as every client authentication will require at least one database call, which will block the TCP processing thread until it has completed.
If this is slow enough, TCP processing might indeed be slowed down sufficiently to make clients time out and disconnect (at which point their client will try to automatically re-connect, keeping the DDoS going)...

What database are you using?

Then, I decided to restart my FiveM server so everyone could connect one by one. After that, I didn’t even have to restart the Mumble server. Once everyone was disconnected and reconnecting slowly, the server was working fine.

This would fit the "authenticate-bottleneck" described above 🤔
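
One way to check whether the database is the choke point (a sketch; the container name and database path are assumptions, and the sqlite3 CLI may need to be installed inside the container first) would be to look at the journal mode of the server's SQLite file:

```sh
# Inspect the journal mode of the server database (path/name are assumptions)
docker exec -it mumble-server sqlite3 /data/murmur.sqlite "PRAGMA journal_mode;"
# "delete" (SQLite's default rollback journal) means every write briefly locks
# the whole database; "wal" lets readers proceed while a write is in progress.
```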

github-actions bot commented

As there has been no activity on this issue for a couple of days, we assume that your issue has been fixed in the meantime.
Should this not be the case, please let us know.

If no further activity happens, this issue will be closed within 3 days.

@github-actions github-actions bot added the stale-support label Jan 16, 2025
@ravicant
Author

ravicant commented Jan 16, 2025

What database are you using?

We use Docker (https://github.com/mumble-voip/mumble-docker); whatever DB it uses, we do too, so I am assuming the default.

And I tried fixing the issue because it was creating a lot of problems, so I used this: https://github.com/AvarianKnight/ZUMBLE, which basically eliminated the issue, but sometimes the client will create these issues:

Image

^^^^^^^^^^
These issues I get on Zumble, but I have never gotten the same issue with the normal Mumble Docker image.

@github-actions github-actions bot removed the stale-support label Jan 17, 2025
@Krzmbrzl
Member

We use Docker (https://github.com/mumble-voip/mumble-docker); whatever DB it uses, we do too, so I am assuming the default.

Okay, that means you're using an SQLite database.

These issues I get on Zumble, but I have never gotten the same issue with the normal Mumble Docker image.

Then this sounds like an issue that should be reported to the maintainers of Zumble.

@ravicant
Author

@Krzmbrzl So do you have a recommendation for which database I should use?

@Krzmbrzl
Member

SQLite would have been what I recommend 🤷

@ravicant
Author

I used MUMBLE_CONFIG_SQLITE_WAL=1 and it fixed the issue.
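
For anyone landing here later: this option presumably switches the server's SQLite database from the default rollback journal to write-ahead logging, which is the same effect as setting the journal mode on the database file directly (a sketch; the database path is an assumption, and this should be done with the server stopped):

```sh
# Equivalent effect at the SQLite level (path is an assumption)
sqlite3 /data/murmur.sqlite "PRAGMA journal_mode=WAL;"
# In WAL mode writers append to a separate -wal file instead of locking the
# main database, so authentication reads no longer stall behind writes during
# a mass-reconnect.
```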

@Krzmbrzl
Member

Oh okay, that's interesting. Thanks for reporting back! 👍

github-actions bot commented

As there has been no activity on this issue for a couple of days, we assume that your issue has been fixed in the meantime.
Should this not be the case, please let us know.

If no further activity happens, this issue will be closed within 3 days.

@github-actions github-actions bot added the stale-support label Jan 25, 2025
@github-actions github-actions bot closed this as not planned Jan 28, 2025