
RabbitMQ trigger supports priority queues #6500

Open
Gallardot opened this issue Jan 27, 2025 · 6 comments
Labels: feature-request, needs-discussion

@Gallardot

Proposal

RabbitMQ supports priority queues, and I would like to add a new priority-aware mode to QueueLength: for priority queues, only messages with a priority greater than a configured level would be counted toward the queue length.

Use-Case

In a typical setup where tasks are consumed from a priority queue, a backlog of high-priority tasks should trigger scaling out to obtain more computing resources, while for low-priority tasks we can tolerate slower consumption to save resources.
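A minimal sketch of the proposed calculation (the function and parameter names here are illustrative, not actual KEDA trigger metadata): given the per-priority message counts that RabbitMQ tracks, count only messages whose priority exceeds a configured threshold.

```python
def priority_queue_length(priority_lengths: dict[str, int], min_priority: int) -> int:
    """Count only messages whose priority is greater than min_priority.

    priority_lengths mirrors RabbitMQ's backing_queue_status.priority_lengths,
    which maps priority (as a string key) to a message count.
    """
    return sum(count for prio, count in priority_lengths.items()
               if int(prio) > min_priority)

# With two messages at priority 0 and one at priority 4, a threshold of 3
# leaves a single message for the scaler to act on.
print(priority_queue_length({"0": 2, "4": 1, "10": 0}, min_priority=3))  # 1
```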

Is this a feature you are interested in implementing yourself?

Yes

Anything else?

No response

Gallardot added the feature-request and needs-discussion labels on Jan 27, 2025
Gallardot changed the title from "RabbitMQ triggersupports priority queues" to "RabbitMQ trigger supports priority queues" on Jan 27, 2025
@JorTurFer
Member

Hello
I think this would be a nice feature! I'll assign the issue to you :) (if you end up not being able to implement it yourself, just let us know)

@JorTurFer JorTurFer moved this from To Triage to In Progress in Roadmap - KEDA Core Feb 13, 2025
@Gallardot
Author

> Hello I think that this will be a nice feature! I assign the issue to you :) (if you finally can't implement it by yourself, just let us know it)

Hello @JorTurFer

Thank you for assigning this task to me! I will do my best to implement this feature. If I run into anything unclear during the implementation, I hope I can count on your assistance.

Thank you!

@Gallardot
Author

Gallardot commented Feb 15, 2025

Hello, @JorTurFer

I originally planned to build this feature on the existing RabbitMQ HTTP API, similar to the current RabbitMQ scaler implementation. That API actually exposes the number of messages at each priority in backing_queue_status.priority_lengths, which is enough for the calculation and comparison we need. You can refer to the backing_queue_status.priority_lengths data sample I provide below; my RabbitMQ server is version 3.8.

curl http://rabbitmq-server/api/queues/%2F

[
  {
    "arguments": {
      "x-max-priority": 10
    },
    "auto_delete": false,
    "backing_queue_status": {
      "avg_ack_egress_rate": 0,
      "avg_ack_ingress_rate": 0,
      "avg_egress_rate": 0,
      "avg_ingress_rate": 0,
      "delta": [
        "delta",
        "todo",
        "todo",
        "todo",
        "todo"
      ],
      "len": 3,
      "mode": "default",
      "next_deliver_seq_id": 0,
      "next_seq_id": 3,
      "num_pending_acks": 0,
      "num_unconfirmed": 0,
      "priority_lengths": {
        "0": 2,
        "1": 0,
        "2": 0,
        "3": 0,
        "4": 1,
        "5": 0,
        "6": 0,
        "7": 0,
        "8": 0,
        "9": 0,
        "10": 0
      },
      "q1": 0,
      "q2": 0,
      "q3": 1,
      "q4": 0,
      "qs_buffer_size": 0,
      "target_ram_count": "infinity",
      "version": 1
    },
    "consumer_capacity": 0,
    "consumer_utilisation": 0,
    "consumers": 0,
    "durable": true,
    "effective_policy_definition": {

    },
    "exclusive": false,
    "exclusive_consumer_tag": null,
    "garbage_collection": {
      "fullsweep_after": 65535,
      "max_heap_size": 0,
      "min_bin_vheap_size": 46422,
      "min_heap_size": 233,
      "minor_gcs": 37
    },
    "head_message_timestamp": null,
    "idle_since": "2025-02-15T15:30:50.243+00:00",
    "memory": 52448,
    "message_bytes": 18,
    "message_bytes_paged_out": 0,
    "message_bytes_persistent": 18,
    "message_bytes_ram": 6,
    "message_bytes_ready": 18,
    "message_bytes_unacknowledged": 0,
    "message_stats": {
      "publish": 3,
      "publish_details": {
        "rate": 0
      }
    },
    "messages": 3,
    "messages_details": {
      "rate": 0
    },
    "messages_paged_out": 0,
    "messages_persistent": 3,
    "messages_ram": 1,
    "messages_ready": 3,
    "messages_ready_details": {
      "rate": 0
    },
    "messages_ready_ram": 1,
    "messages_unacknowledged": 0,
    "messages_unacknowledged_details": {
      "rate": 0
    },
    "messages_unacknowledged_ram": 0,
    "name": "qqq",
    "node": "rabbit@my-rabbit",
    "operator_policy": null,
    "policy": null,
    "recoverable_slaves": null,
    "reductions": 88297,
    "reductions_details": {
      "rate": 0
    },
    "single_active_consumer_tag": null,
    "state": "running",
    "type": "classic",
    "vhost": "/"
  }
]
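For illustration, here is a sketch of how a scaler could pull priority_lengths out of that /api/queues payload, assuming the 3.x response shape shown above (the sample is abridged to the fields the calculation would read):

```python
import json

# Abridged version of the /api/queues/%2F response above; only the fields
# relevant to the proposed priority-aware calculation are kept.
api_response = """
[
  {
    "name": "qqq",
    "messages": 3,
    "backing_queue_status": {
      "len": 3,
      "priority_lengths": {"0": 2, "4": 1, "10": 0}
    }
  }
]
"""

for queue in json.loads(api_response):
    status = queue.get("backing_queue_status") or {}
    lengths = status.get("priority_lengths")
    if lengths is not None:
        # Sanity check: the per-priority counts add up to the queue length.
        assert sum(lengths.values()) == status["len"]
        print(queue["name"], lengths)
```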

Unfortunately, I found that starting with version 3.12.7, RabbitMQ removed backing_queue_status from the HTTP API. You can check the relevant issue and PR discussions in the RabbitMQ community for details: rabbitmq/rabbitmq-server#9437, rabbitmq/rabbitmq-server#9627, and rabbitmq/rabbitmq-server#9578. The RabbitMQ team concluded that the backing_queue_status data impacts the service and is not very useful, so they removed it.

Therefore, I would like to ask the KEDA community's opinion. Given our requirements, should we continue with the implementation, should we ask the RabbitMQ community whether there is another way to obtain this data, or should we abandon the implementation?

Personally, I am inclined to continue, basing the implementation on the backing_queue_status field in the current 3.x HTTP API. If the RabbitMQ community can provide a new way to obtain the data, we can implement a compatible solution. If not, we can degrade to the same calculation rules as the original QueueLength mode when users run a newer RabbitMQ server.
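The degradation path could look roughly like this, as a sketch working from the parsed queue JSON (the helper name and shape are hypothetical, not existing scaler code):

```python
def scalable_queue_length(queue: dict, min_priority: int) -> int:
    """Length used for scaling: priority-aware when the server exposes
    priority_lengths, otherwise the plain QueueLength behaviour."""
    lengths = (queue.get("backing_queue_status") or {}).get("priority_lengths")
    if lengths is None:
        # RabbitMQ 3.12.7+ no longer reports backing_queue_status, so
        # degrade to the total message count, as in the original mode.
        return queue.get("messages", 0)
    return sum(count for prio, count in lengths.items() if int(prio) > min_priority)

old_server = {"messages": 3,
              "backing_queue_status": {"priority_lengths": {"0": 2, "4": 1}}}
new_server = {"messages": 3}  # field absent on 3.12.7+
print(scalable_queue_length(old_server, 3), scalable_queue_length(new_server, 3))  # 1 3
```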

What do you think?

@JorTurFer
Member

Do you need to use the HTTP protocol to get that metric? Then sadly we shouldn't support it :(
The RabbitMQ team deprecated the HTTP protocol and v4 doesn't support it -> #6071

@Gallardot
Author

> Do you need to use the HTTP protocol to get that metric? Then sadly we shouldn't support it :(
> RabbitMQ team deprecated the HTTP protocol and v4 doesn't support it -> #6071

@JorTurFer Thank you for the information. To implement this feature, I need to obtain this data through the HTTP API, and so far that API is the only place I have found it.

Since the HTTP API is no longer supported in RabbitMQ v4, I will check whether the RabbitMQ Prometheus exporter exposes any metrics for this.

@JorTurFer
Member

Yeah, most probably Prometheus is the way to go in the long term
