RabbitMQ trigger supports priority queues #6500
Hello
Hello @JorTurFer, thank you for assigning this task to me! I will do my best to implement this feature. If I run into anything unclear during the implementation, I hope I can get your assistance. Thank you!
Hello @JorTurFer, I originally planned to build this feature on the existing RabbitMQ HTTP API, similar to the current RabbitMQ scaler implementation. That API already reports the number of messages at each priority in the queue; for example, `curl http://rabbitmq-server/api/queues/%2F` returns:
[
{
"arguments": {
"x-max-priority": 10
},
"auto_delete": false,
"backing_queue_status": {
"avg_ack_egress_rate": 0,
"avg_ack_ingress_rate": 0,
"avg_egress_rate": 0,
"avg_ingress_rate": 0,
"delta": [
"delta",
"todo",
"todo",
"todo",
"todo"
],
"len": 3,
"mode": "default",
"next_deliver_seq_id": 0,
"next_seq_id": 3,
"num_pending_acks": 0,
"num_unconfirmed": 0,
"priority_lengths": {
"0": 2,
"1": 0,
"2": 0,
"3": 0,
"4": 1,
"5": 0,
"6": 0,
"7": 0,
"8": 0,
"9": 0,
"10": 0
},
"q1": 0,
"q2": 0,
"q3": 1,
"q4": 0,
"qs_buffer_size": 0,
"target_ram_count": "infinity",
"version": 1
},
"consumer_capacity": 0,
"consumer_utilisation": 0,
"consumers": 0,
"durable": true,
"effective_policy_definition": {
},
"exclusive": false,
"exclusive_consumer_tag": null,
"garbage_collection": {
"fullsweep_after": 65535,
"max_heap_size": 0,
"min_bin_vheap_size": 46422,
"min_heap_size": 233,
"minor_gcs": 37
},
"head_message_timestamp": null,
"idle_since": "2025-02-15T15:30:50.243+00:00",
"memory": 52448,
"message_bytes": 18,
"message_bytes_paged_out": 0,
"message_bytes_persistent": 18,
"message_bytes_ram": 6,
"message_bytes_ready": 18,
"message_bytes_unacknowledged": 0,
"message_stats": {
"publish": 3,
"publish_details": {
"rate": 0
}
},
"messages": 3,
"messages_details": {
"rate": 0
},
"messages_paged_out": 0,
"messages_persistent": 3,
"messages_ram": 1,
"messages_ready": 3,
"messages_ready_details": {
"rate": 0
},
"messages_ready_ram": 1,
"messages_unacknowledged": 0,
"messages_unacknowledged_details": {
"rate": 0
},
"messages_unacknowledged_ram": 0,
"name": "qqq",
"node": "rabbit@my-rabbit",
"operator_policy": null,
"policy": null,
"recoverable_slaves": null,
"reductions": 88297,
"reductions_details": {
"rate": 0
},
"single_active_consumer_tag": null,
"state": "running",
"type": "classic",
"vhost": "/"
}
]

Unfortunately, I found that starting from version 3.12.7, the RabbitMQ server removed this per-priority data from the HTTP API response. Therefore, I would like to ask the KEDA community for an opinion. Given our current needs, should we continue with the implementation, should we ask the RabbitMQ community whether there is another way to obtain this data, or should we abandon the current approach? Personally, I am inclined to continue with the implementation. What do you think?
Do you need to use the HTTP protocol to get that metric? Then sadly we shouldn't support it :(
@JorTurFer Thank you for the information. To implement this feature I need to get this data through the HTTP API, and so far that is the only API I have found that exposes it. Since this data is no longer available in the HTTP API in RabbitMQ v4, I will check whether the RabbitMQ Prometheus exporter has a metric for it.
Yeah, most probably Prometheus is the way to go in the long term
Proposal
RabbitMQ supports priority queues, and I would like to add a new QueueLength mode for them: for a priority queue, only messages with a priority greater than a specific level would be counted toward the queue length.
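As a sketch of what the user-facing configuration might look like, assuming a hypothetical `minPriority` trigger parameter (this field does not exist in KEDA today; the queue name is taken from the example response above):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rabbitmq-priority-scaledobject
spec:
  scaleTargetRef:
    name: worker-deployment        # assumed target Deployment name
  triggers:
    - type: rabbitmq
      metadata:
        protocol: http             # the per-priority data comes from the HTTP API
        queueName: qqq             # queue from the example response above
        mode: QueueLength
        value: "20"
        minPriority: "4"           # hypothetical: count only messages with priority >= 4
```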
Use-Case
In common scenarios where tasks are consumed from a priority queue, a backlog of high-priority tasks should trigger scaling out to obtain more computing resources, while for low-priority tasks we can tolerate slower consumption to save computing resources.
Is this a feature you are interested in implementing yourself?
Yes
Anything else?
No response