Add architecture documentation and refresh README #6

base: main
Conversation
Pull Request Overview
This PR adds comprehensive documentation to the collaborative SLAM exploration project and refreshes the README with accurate setup and usage information.
- Adds a high-level architecture document with Mermaid diagrams explaining system components and interactions
- Includes a lessons learned playbook capturing the reproducible approach for implementing saga patterns
- Updates README with streamlined setup instructions and better project organization
Reviewed Changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| docs/lessons_learned.md | New playbook documenting the step-by-step approach and insights for reproducing the saga architecture |
| docs/high_level_design.md | New architecture documentation with Mermaid diagrams and component descriptions |
| README.md | Refreshed with accurate setup instructions, clearer project structure, and updated testing guidance |
From `docs/lessons_learned.md` as added in this PR (the excerpt begins partway through section 1):

2. Build a chain with explicit `link_error` callbacks so compensations fire automatically when downstream tasks fail. 【F:app/flows/mission_start_celery/orchestrator.py†L32-L50】
3. Generate saga-scoped correlation IDs to trace every command and reply. 【F:app/flows/mission_start_celery/orchestrator.py†L23-L31】

**Insight:** Treat compensations as first-class Celery tasks. This keeps orchestration declarative and makes failure handling observable through Flower.
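As a concrete illustration of the chain-with-compensations idea above, here is a minimal sketch. The task names, broker URL, and signatures are assumptions for illustration, not the project's actual API; the real canvas lives in `app/flows/mission_start_celery/orchestrator.py`.

```python
# Hedged sketch: hypothetical task names; only the chain/link_error
# mechanics reflect the lesson above.
import uuid

from celery import Celery, chain

app = Celery("missions", broker="redis://localhost:6379/0")

@app.task
def allocate_resources(correlation_id):
    """Forward step: reserve robots and compute budget."""

@app.task
def plan_route(prev_result, correlation_id):
    """Forward step: a chain prepends the previous task's result."""

@app.task
def release_resources(task_id):
    """Compensation: Celery error callbacks receive the failing task's id."""

def start_mission_saga():
    # Saga-scoped correlation ID so every command and reply can be traced.
    correlation_id = f"mission-{uuid.uuid4()}"
    saga = chain(
        allocate_resources.s(correlation_id),
        # If plan_route fails, its compensation fires automatically.
        plan_route.s(correlation_id).on_error(release_resources.s()),
    )
    return saga.apply_async()
```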
## 2. Decouple Work through Redis Streams

1. Publish commands with `request_and_reply` so Celery tasks and async handlers communicate via Redis Streams. 【F:app/flows/mission_start_celery/tasks.py†L18-L78】
2. Use consumer groups per handler to enable horizontal scaling without duplicate processing. 【F:app/commands/listener.py†L64-L92】
3. Embrace timeouts in the request/reply helper to guard against hung handlers. 【F:app/flows/mission_start_celery/tasks.py†L22-L30】

**Insight:** Redis Streams give durable back-pressure and replay semantics, which simplified our recovery strategy compared to transient queues.
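A hedged sketch of such a request/reply helper over Redis Streams follows. The stream-key layout and field names are assumptions; the real helper lives in `app/flows/mission_start_celery/tasks.py`.

```python
# Hedged sketch of a blocking request/reply helper over Redis Streams.
import json
import time
import uuid

import redis

r = redis.Redis(decode_responses=True)

def request_and_reply(command: str, payload: dict, timeout_s: float = 30.0) -> dict:
    correlation_id = str(uuid.uuid4())
    reply_stream = f"replies:{correlation_id}"  # per-request reply channel
    r.xadd(f"commands:{command}", {
        "payload": json.dumps(payload),
        "correlation_id": correlation_id,
        "reply_stream": reply_stream,  # handlers push status updates here
    })
    deadline = time.monotonic() + timeout_s  # guard against hung handlers
    last_id = "0-0"
    while time.monotonic() < deadline:
        for _stream, messages in r.xread({reply_stream: last_id}, block=1000) or []:
            for msg_id, fields in messages:
                last_id = msg_id
                if fields.get("status") in ("completed", "failed"):
                    return fields
    raise TimeoutError(f"no terminal reply for {command} ({correlation_id})")
```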
## 3. Standardize Handler Telemetry

1. Wrap every handler with `multi_stage_reply` to emit start, progress, completed, and failed events. 【F:app/redis_utils/decorators.py†L9-L58】
2. Pass the reply stream name through command payloads so handlers can push status updates to the correct channel. 【F:app/flows/mission_start_celery/tasks.py†L20-L72】
3. Surface fractional progress to unlock richer mission dashboards and automated retries.

**Insight:** A uniform decorator drastically reduced boilerplate and made monitoring symmetrical across services.
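A sketch of what such a decorator could look like, assuming handlers are async generators that yield fractional progress and that the payload carries a `reply_stream` field; the actual decorator is in `app/redis_utils/decorators.py`.

```python
# Hedged sketch: emits start/progress/completed/failed events to the
# reply stream named in the command payload.
import functools
import json

import redis

r = redis.Redis(decode_responses=True)

def multi_stage_reply(handler):
    @functools.wraps(handler)
    async def wrapper(payload: dict):
        reply_stream = payload["reply_stream"]

        def emit(status: str, **extra) -> None:
            fields = {"status": status}
            fields.update({k: json.dumps(v) for k, v in extra.items()})
            r.xadd(reply_stream, fields)

        emit("start")
        try:
            # Assumption: the wrapped handler yields fractional progress.
            async for fraction in handler(payload):
                emit("progress", progress=fraction)
            emit("completed")
        except Exception as exc:
            emit("failed", error=str(exc))
            raise

    return wrapper
```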
## 4. Keep the Listener Lightweight and Idempotent

1. Discover handler modules dynamically to eliminate manual registration drift. 【F:app/commands/listener.py†L18-L39】
2. Create consumer groups on startup but tolerate BUSYGROUP errors so restarts stay idempotent. 【F:app/commands/listener.py†L70-L87】
3. Acknowledge messages only after handlers succeed; log failures to aid replay.

**Insight:** The listener forms the boundary between Celery orchestration and async services. Keeping it stateless lets us scale more listeners when mission load grows.
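The BUSYGROUP-tolerant, ack-after-success loop might look like the following sketch; stream, group, and consumer names are illustrative, and `app/commands/listener.py` holds the real implementation.

```python
# Hedged sketch of an idempotent consumer-group poll loop.
import logging

import redis

r = redis.Redis(decode_responses=True)

def ensure_group(stream: str, group: str) -> None:
    try:
        r.xgroup_create(stream, group, id="0", mkstream=True)
    except redis.ResponseError as exc:
        if "BUSYGROUP" not in str(exc):
            raise  # an already-existing group is fine: restarts stay idempotent

def poll(stream: str, group: str, consumer: str, handle) -> None:
    ensure_group(stream, group)
    while True:
        entries = r.xreadgroup(group, consumer, {stream: ">"}, block=5000)
        for _stream, messages in entries or []:
            for msg_id, fields in messages:
                try:
                    handle(fields)
                    r.xack(stream, group, msg_id)  # ack only after success
                except Exception:
                    # Leave unacked so the pending-entries list enables replay.
                    logging.exception("handler failed for %s", msg_id)
```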
## 5. Provide Alternate Execution Paths for Testing

1. Mirror the Celery saga with a pure-async orchestrator to run deterministic tests without workers. 【F:app/flows/mission_start_async/orchestrator.py†L1-L96】
2. Reuse the same `request_and_reply` contract so both backends exercise identical handlers. 【F:app/flows/mission_start_async/orchestrator.py†L40-L80】
3. Trigger the desired backend through the `mission:start` handler's `backend` parameter for scenario coverage. 【F:app/commands/handlers/start_mission.py†L24-L47】

**Insight:** Offering a Celery and pure-async path de-risks orchestration changes by enabling test suites that avoid worker scheduling variability.
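The pure-async mirror could reduce to a loop like this sketch, reusing the hypothetical `request_and_reply` helper from lesson 2; the step and compensation command names are invented for illustration.

```python
# Hedged sketch of the async saga loop; compensations unwind in LIFO order.
import asyncio

# (command, compensation) pairs -- names are illustrative only.
STEPS = [
    ("resources:allocate", "resources:release"),
    ("route:plan", "route:discard"),
    ("area:explore", None),
    ("map:integrate", None),
]

async def run_mission_saga(mission: dict) -> None:
    completed: list[str] = []
    for command, compensation in STEPS:
        try:
            # request_and_reply is the blocking helper sketched under lesson 2.
            await asyncio.to_thread(request_and_reply, command, mission)
            if compensation:
                completed.append(compensation)
        except Exception:
            for comp in reversed(completed):  # compensate in reverse order
                await asyncio.to_thread(request_and_reply, comp, mission)
            raise
```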
## 6. Containerize the Runtime Early

1. Compose Redis, PostgreSQL, Celery workers, Flower, and the listener in Docker Compose to codify infrastructure. 【F:docker-compose.yml†L1-L74】
2. Gate service startup on health checks to guarantee Redis is ready before Celery workers boot. 【F:docker-compose.yml†L23-L34】
3. Mount the application directory for rapid inner-loop iteration while retaining container parity.

**Insight:** The Compose stack doubles as both development and integration-test environment, ensuring parity and shortening feedback loops.

## 7. Make Observability a First-Class Concern

1. Enable Celery event tracking (`-E`) so Flower captures task lifecycle events. 【F:docker-compose.yml†L23-L34】
2. Emit structured telemetry via Redis replies for mission dashboards and audit trails. 【F:app/redis_utils/decorators.py†L24-L55】
3. Log correlation IDs at every layer to map saga progress across systems. 【F:app/flows/mission_start_celery/orchestrator.py†L23-L31】

**Insight:** Observability requirements shape the contract between orchestrator, tasks, and handlers; designing telemetry up front prevents opaque failure modes.

## 8. Testing Checklist

- Run the async orchestrator in isolation to validate handler logic deterministically. 【F:app/flows/mission_start_async/orchestrator.py†L1-L96】
- Execute Celery-based sagas inside Docker Compose and inspect Flower for task flow regressions. 【F:docker-compose.yml†L23-L34】
- Add integration tests that push commands onto Redis Streams and assert replies to cover the end-to-end contract. 【F:tests/integration/test_orchestrator_trigger.py†L1-L88】
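As an illustration of the last checklist item, an end-to-end test could push a command onto the stream and wait for a terminal reply, roughly like this sketch; the stream keys and fields mirror the hypothetical helper above, and the real test is `tests/integration/test_orchestrator_trigger.py`.

```python
# Hedged sketch of a Redis Streams round-trip integration test.
import json
import uuid

import redis

def test_mission_start_round_trip():
    r = redis.Redis(decode_responses=True)
    correlation_id = str(uuid.uuid4())
    reply_stream = f"replies:{correlation_id}"

    # Push the command exactly as the request/reply helper would.
    r.xadd("commands:mission:start", {
        "payload": json.dumps({"mission_id": "m-1", "backend": "async"}),
        "correlation_id": correlation_id,
        "reply_stream": reply_stream,
    })

    # Collect replies until a terminal status arrives (or the read times out).
    statuses, last_id = [], "0-0"
    while "completed" not in statuses and "failed" not in statuses:
        entries = r.xread({reply_stream: last_id}, block=30000)
        assert entries, "no reply within 30s"
        for _stream, messages in entries:
            for msg_id, fields in messages:
                last_id = msg_id
                statuses.append(fields["status"])

    assert statuses[-1] == "completed"
```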
**Copilot (AI), Sep 25, 2025:**

The reference format '【F:app/flows/mission_start_celery/orchestrator.py†L32-L50】' appears to be a non-standard documentation syntax. Consider using standard markdown links or code references for better readability and tool compatibility.
Suggested change (the same substitution applies to every reference in the file):

2. Build a chain with explicit `link_error` callbacks so compensations fire automatically when downstream tasks fail. [app/flows/mission_start_celery/orchestrator.py:L32-L50](app/flows/mission_start_celery/orchestrator.py#L32-L50)
3. Generate saga-scoped correlation IDs to trace every command and reply. [app/flows/mission_start_celery/orchestrator.py:L23-L31](app/flows/mission_start_celery/orchestrator.py#L23-L31)
From `docs/high_level_design.md` as added in this PR:

### Key Responsibilities

- **Saga Orchestrator** – Builds the Celery canvas that sequences mission tasks and compensations. 【F:app/flows/mission_start_celery/orchestrator.py†L21-L56】
**Copilot (AI), Sep 25, 2025:**

The reference format '【F:app/flows/mission_start_celery/orchestrator.py†L21-L56】' uses non-standard documentation syntax. Consider using standard markdown links or code references for consistency and better tool support.
Suggested change:

- **Saga Orchestrator** – Builds the Celery canvas that sequences mission tasks and compensations. [app/flows/mission_start_celery/orchestrator.py (lines 21–56)](../app/flows/mission_start_celery/orchestrator.py#L21-L56)
**Reply:** Please apply this fix to all of the references provided.
From `README.md` as changed in this PR:

A proof-of-concept that demonstrates how Celery, Redis Streams, and async command handlers can implement the Saga pattern for collaborative robot exploration. The project coordinates mission phases—resource allocation, route planning, exploration, and map integration—while providing compensating actions and rich telemetry.

## Key Features
- **Saga orchestration:** The Celery flow builds a canvas of mission tasks with compensations using `link_error`. 【F:app/flows/mission_start_celery/orchestrator.py†L27-L50】
**Copilot (AI), Sep 25, 2025:**

The reference format '【F:app/flows/mission_start_celery/orchestrator.py†L27-L50】' uses non-standard documentation syntax. Consider using standard markdown links or code references for better readability.

Suggested change:

- **Saga orchestration:** The Celery flow builds a canvas of mission tasks with compensations using `link_error`. [app/flows/mission_start_celery/orchestrator.py#L27-L50](app/flows/mission_start_celery/orchestrator.py#L27-L50)
From `docs/high_level_design.md` as added in this PR:

# High-Level Architecture

This document summarizes the architecture of the Collaborative SLAM Exploration proof-of-concept. The PoC demonstrates how a Celery-driven saga orchestrates Redis-stream-based command handlers to coordinate multi-robot exploration workflows.
**Reply:** It isn't a Celery-driven saga architecture -- we also have an option based on an async/await Python flow, i.e. two backend options. It's also worth mentioning that we use Redis for the request/reply pattern. Celery actually plays the least important role here; the more important things are the two patterns, Saga and Request/Reply. For Saga we have two options, Celery and pure async/await Python, and for Request/Reply we have the Redis backend.
## System Context

The solution is triggered when an external operator emits a `mission:start` command. The asynchronous command listener routes the event to the saga orchestrator, which dispatches Celery tasks and compensations through Redis. Handlers emit multi-stage progress updates so that mission control can observe state through Flower and the Redis replies stream.
**Reply:** I'd start from request/reply first -- it just happens that one of the handlers uses the Saga pattern, but that isn't the key point. Please focus on the patterns, and note that we have two backends for Saga and one for request/reply. Flower isn't that important either; we just happen to have it. With the pure-Python implementation we might use a different approach, such as OTEL, to track events.
The PoC is organized into well-defined layers: orchestration flows, task definitions, command handlers, and Redis utilities. The following component diagram highlights the internal structure.

```mermaid
**Reply:** OK, please focus on the event-driven patterns in this schema. We don't need to add much detail about the backends -- you can mention that we have two backend options (async/await Python, Celery) and that Redis serves as the tool for streaming events. Also note that `request_and_reply` and `multi_stage_reply` implement the Request/Reply pattern on the requester and receiver sides respectively.
### Interaction Notes

1. The orchestrator chains mission steps with compensations using Celery's canvas primitives. 【F:app/flows/mission_start_celery/orchestrator.py†L27-L50】
**Reply:** I think we can have a separate section in the document that explains the details of the Celery implementation, rather than spreading those details across the document.
The diagram below shows how a successful mission start flows through the system. Failure paths trigger compensations by linking Celery error callbacks; this behavior mirrors the same message exchanges with compensation tasks.

```mermaid
**Reply:** We need two diagrams:

- Request/Reply pattern -- just describe one command; all commands are supposed to behave the same way: start, progress, completed, failed.
- Saga -- ignore the Celery implementation and focus on the idea, which looks simpler in the async/await Python implementation. Here you can show all the commands, but cover the Request/Reply pattern only in general terms, without its details.

And remember that we happen to invoke the saga pattern in one of the Request/Reply handlers (`start_mission`), but we could add similar sagas for other handlers later as well -- a saga is just a way of splitting the handling of one command into a series of asynchronous commands.
**Insight:** Observability requirements shape the contract between orchestrator, tasks, and handlers; designing telemetry up front prevents opaque failure modes.

## 8. Testing Checklist
**Reply:** Testing is a strong side of this app, and I'd highlight it in detail. We have good coverage in unit tests; almost all functionality is covered there. No less important are the integration tests -- we have them as well. So please stress this, and add more specific details about how it was done.
# Collaborative SLAM Exploration for Robots

A proof-of-concept implementation of the Saga pattern for orchestrating multi-step, distributed workflows using Celery, Redis, PostgreSQL, and Flower. The scenario simulates collaborative SLAM (Simultaneous Localization and Mapping) exploration by multiple robots, with robust rollback (compensation) logic for failures.
A proof-of-concept that demonstrates how Celery, Redis Streams, and async command handlers can implement the Saga pattern for collaborative robot exploration. The project coordinates mission phases—resource allocation, route planning, exploration, and map integration—while providing compensating actions and rich telemetry.
**Reply:** We need to fix this -- the main point is the event-driven development patterns, Request/Reply and Saga, and the fact that we use them to split a complex mission into async stages.
## Key Features
- **Saga orchestration:** The Celery flow builds a canvas of mission tasks with compensations using `link_error`. 【F:app/flows/mission_start_celery/orchestrator.py†L27-L50】
**Reply:** Celery/Flower aren't key features -- we have two backends, each bringing its own tools, but the key features are implemented with the two different backends.
- **Integration tests:** Interact with Redis Streams to validate end-to-end orchestration. 【F:tests/integration/test_orchestrator_trigger.py†L1-L88】

1. Using the helper script:
Run the suites locally using the development compose file:
**Reply:** We have the `/scripts/unit-tests.sh` script to run the unit tests.
This project uses [Ruff](https://docs.astral.sh/ruff/) for linting and formatting, managed via pre-commit hooks and containerized workflows.

### Linting and Unit Tests (Containerized)
**Reply:** We still need the linting instructions.
ref: #1
https://chatgpt.com/codex/tasks/task_e_68d4539323008323b319a4cc4164812b