
Conversation

ghimire commented Sep 26, 2024

This pull request introduces support for storing Flower's persistent state in Redis, as an alternative to the existing file-based storage mechanism. This enhancement aims to improve the reliability, scalability, and performance of state management.

This removes the limitation that prevented running multiple instances of Flower concurrently by centralizing state management in Redis.
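For readers skimming the thread, the core idea can be sketched like this. This is a minimal illustration of the approach described above, not the PR's actual code; the key name and function names are made up:

```python
import pickle

import redis

# Hypothetical sketch: persist the events state object as a single pickled
# blob under one Redis key. STATE_KEY, save_state, and load_state are
# illustrative names, not the PR's identifiers.
STATE_KEY = "flower:events:state"

redis_client = redis.Redis.from_url("redis://localhost:6379/0")


def save_state(state):
    # Serialize the whole in-memory state and store it under one key.
    redis_client.set(STATE_KEY, pickle.dumps(state))


def load_state():
    # Restore the previous state on startup, if any was saved.
    blob = redis_client.get(STATE_KEY)
    return pickle.loads(blob) if blob is not None else None
```

Because the state lives in Redis rather than on a local disk, any number of Flower instances can read the same state, and a container restart no longer wipes it.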

ghimire (Author) commented Sep 26, 2024

cc: @mher

iloveitaly commented

This is a great idea. It makes it much easier to run Flower in a containerized environment.

What needs to be done here to merge this in, @mher?

iloveitaly commented Apr 3, 2025

> This removes the limitation that prevented running multiple instances of Flower concurrently by centralizing state management in Redis.

The other big benefit here is that state is not lost when the container restarts (if you don't have a persistent volume attached to your container).

Created a PR for rediss:// (TLS) support here.
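For context, redis-py already understands the rediss:// URL scheme and enables TLS automatically when it sees it, so client code along these lines works unchanged (host, port, and password below are placeholders):

```python
import redis

# rediss:// is the TLS-enabled Redis URL scheme; redis-py negotiates SSL
# automatically for it. The credentials and host are illustrative.
client = redis.Redis.from_url("rediss://:password@redis.example.com:6380/0")
client.ping()  # verifies the encrypted connection
```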

ghimire (Author) commented Sep 23, 2025

@Nusnus Bump

auvipy requested review from auvipy and Copilot on September 23, 2025, 12:16
Copilot AI left a comment

Pull Request Overview

This PR adds Redis support as an alternative backend for storing Flower's persistent state, alongside the existing file-based storage mechanism. This enhancement enables running multiple instances of Flower concurrently by centralizing state management in Redis.

Key changes:

  • Added Redis client initialization and state management logic
  • Implemented Redis-based state loading and saving with pickle serialization
  • Updated documentation to reflect Redis backend support

Reviewed Changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.

  • flower/events.py: Added Redis client initialization and state loading/saving logic with pickle serialization
  • docs/config.rst: Updated documentation to mention Redis backend support
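As a rough idea of how the documented option might look in a flowerconfig.py, assuming the Redis backend is selected by passing a Redis URL through Flower's existing persistent/db settings (an assumption; the actual option names are defined in the PR's docs/config.rst changes):

```python
# flowerconfig.py -- hypothetical configuration, assuming the Redis backend
# is chosen by supplying a Redis URL where a file path used to go.
persistent = True
db = "redis://localhost:6379/0"  # instead of a file path like "flower.db"
```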


ghimire (Author) commented Oct 9, 2025

@auvipy @Nusnus Bump

auvipy (Collaborator) left a comment

This will also need unit tests to verify the proposed changes.
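For illustration, a test along these lines would exercise the save/load round trip without a live Redis server (the key and the state layout are hypothetical; the PR's actual tests may be structured differently):

```python
import pickle
import unittest
from unittest import mock

# Illustrative shape for a Redis events-state test: the Redis client is
# stubbed with a dict-backed mock, so no server is needed.
class RedisStateRoundTripTest(unittest.TestCase):
    def test_state_round_trips_through_redis(self):
        store = {}
        client = mock.Mock()
        client.set.side_effect = store.__setitem__
        client.get.side_effect = store.get

        state = {"workers": ["celery@host1"], "task_count": 42}
        client.set("flower:events:state", pickle.dumps(state))

        restored = pickle.loads(client.get("flower:events:state"))
        self.assertEqual(restored, state)

if __name__ == "__main__":
    unittest.main()
```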

ghimire (Author) commented Oct 13, 2025

@auvipy Unit tests have been added for Redis events state storage in 0192efc.

ghimire requested a review from auvipy on October 13, 2025, 19:41
ghimire (Author) commented Oct 28, 2025

@auvipy @Nusnus

The lint error `flower/events.py:24:0: C0103: Variable name "PROMETHEUS_METRICS" doesn't conform to snake_case naming style (invalid-name)` is outside the scope of this PR.

mher (Owner) commented Oct 28, 2025

Saving the entire state with a single redis_client.set is not scalable.
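One incremental alternative to the single-key blob (a sketch only, not something proposed in this PR): keep one hash field per entity so each update writes only what changed, instead of re-serializing the whole state into one value. Key names below are illustrative:

```python
import pickle

import redis

# Sketch of an incremental layout: one Redis hash per category, one field
# per entity, so an update touches only the entry that changed.
client = redis.Redis.from_url("redis://localhost:6379/0")


def save_task(task_id, task):
    client.hset("flower:tasks", task_id, pickle.dumps(task))


def load_tasks():
    return {
        field.decode(): pickle.loads(blob)
        for field, blob in client.hgetall("flower:tasks").items()
    }
```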

ghimire (Author) commented Oct 29, 2025

> Saving the entire state with a single redis_client.set is not scalable.

@mher Agreed, but Flower isn't scalable as it is, due to its file-based state. I don't think having this Redis alternative makes it any worse. The Flower database gets corrupted more frequently than one would hope; fixing it in Redis is much easier, and it also means the Flower container doesn't need to be restarted.

Do you have any other plans or thoughts on moving the state out of the filesystem to make it truly scalable?

I do think that this implementation is a start in that direction. Please share any suggestions you might have to improve the current implementation.
