SysSage is a privacy-first, on-device assistant for Windows that uses local LLM adapters and OS tooling to answer questions about your system, run diagnostics, automate workflows, and assist with file/process/hardware queries — all while keeping data on your machine.
- Why SysSage
- Proposed solution
- Key features
- Quickstart (Windows / PowerShell)
- Project layout
- Architecture (high level)
- Security & Privacy
- Extending SysSage
- Development notes
- Troubleshooting & common notes
- Contributing & roadmap
- License
- Contact
## Why SysSage

- System troubleshooting, inventory, and personal assistance often require multiple tools, admin rights, or cloud services that expose sensitive data.
- Developers, power users, and administrators need fast, contextual, and private help with processes, running services, hardware, and files without sending data to the cloud.
## Proposed solution

SysSage provides a modular agent framework and a Streamlit UI that connects a suite of OS-level tools (process, file, hardware, browser artifacts) with local LLMs (via `llm/ollama_wrapper.py` or other adapters). It focuses on local-first intelligence, scripted automation, and extensibility.
## Key features

- Privacy-first local LLM integration — use Ollama (or swap in another local/secure model) so prompts and context can stay on-device.
- System inspection tools — process, hardware, file, and browser helpers under `tools/` (built on, e.g., `psutil` and `pywin32`).
- Agent-based automation — small agents in `agents/` can monitor, execute, and report on tasks (executor, monitor).
- Crew runner / orchestration — `crew_runner.py` coordinates multiple agents for multi-step workflows.
- Simple Streamlit UI — `ui/streamlit_ui.py` plus an intent dispatcher (`ui/intent_dispatcher.py`) for quick interactions and saved histories.
- Scriptable experiments — `assistant.py` and `simple_approach.py` provide examples and starting points for new features.
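The inspection helpers mentioned above can be thin, deterministic functions over OS APIs. As an illustrative sketch (not the actual `tools/` code — the real helpers lean on `psutil`, while this stays standard-library only):

```python
import os
import platform
import shutil

def system_summary(path: str = ".") -> dict:
    """Hypothetical inspection helper: gather basic host facts that an
    agent could hand to the LLM adapter as deterministic context."""
    usage = shutil.disk_usage(path)  # named tuple: total/used/free in bytes
    return {
        "os": platform.system(),        # e.g. "Windows"
        "release": platform.release(),
        "machine": platform.machine(),  # e.g. "AMD64"
        "cpu_count": os.cpu_count(),
        "disk_free_gb": round(usage.free / 1024**3, 1),
    }

if __name__ == "__main__":
    print(system_summary())
```

Because the output is a plain dict, it is easy to test, cache, or serialize into a prompt without the model ever touching raw system state.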
## Quickstart (Windows / PowerShell)

- Create and activate a virtual environment:

  ```powershell
  python -m venv .venv
  .\.venv\Scripts\Activate.ps1
  ```

- Install dependencies:

  ```powershell
  pip install -r requirements.txt
  ```

- (Optional) Configure local LLMs — if you plan to use Ollama or another local LLM, install and configure it separately. See `llm/ollama_wrapper.py` for integration points.

- Run the Streamlit UI:

  ```powershell
  streamlit run .\ui\streamlit_ui.py
  ```

- Run the agent crew example:

  ```powershell
  python .\crew_runner.py
  ```

## Project layout

- `agents/` — agent implementations (e.g., `executor_agent.py`, `monitor_agent.py`).
- `llm/` — local LLM adapter(s) and wrappers.
- `tasks/` — task orchestration helpers.
- `tools/` — OS and utility tooling (process, file, hardware, browsing helpers).
- `ui/` — Streamlit UI, intent dispatcher, and front-end wiring.
- `crew_runner.py` — example orchestrator that launches agent crews.
- `assistant.py`, `simple_approach.py` — experimental entrypoints and scripts.
- `requirements.txt` — pinned Python dependencies.
## Architecture (high level)

- UI (Streamlit)
  - Collects intents and displays agent outputs. Sends structured requests to the intent dispatcher.
- Intent Dispatcher (`ui/intent_dispatcher.py`)
  - Maps UI intents to agent/task workflows and routes them to the crew runner or direct tools.
- Agents & Crew Runner
  - Agents encapsulate capabilities (monitoring, execution, file analysis); `crew_runner.py` composes agents to run multi-step flows.
- Tools
  - Deterministic helpers that perform system actions or data retrieval (`psutil` wrappers, process scanning, file search, browser artifact parsing).
- LLM Adapter
  - `llm/ollama_wrapper.py` shows how to build contextual prompts and combine deterministic outputs with model reasoning while retaining privacy.
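The dispatch-to-crew flow above can be sketched in a few lines. The class and intent names here are hypothetical placeholders, not the actual `ui/intent_dispatcher.py` or `crew_runner.py` code:

```python
from typing import Dict, List

class Agent:
    """Minimal agent interface: one capability per agent."""
    def handle(self, context: dict) -> dict:
        raise NotImplementedError

class DiskCheckAgent(Agent):
    def handle(self, context: dict) -> dict:
        context["disk_checked"] = True  # stand-in for a real psutil call
        return context

class ReportAgent(Agent):
    def handle(self, context: dict) -> dict:
        context["report"] = f"checked={context.get('disk_checked', False)}"
        return context

def run_crew(agents: List[Agent], context: dict) -> dict:
    """Crew runner: pipe a shared context through the agents in order."""
    for agent in agents:
        context = agent.handle(context)
    return context

# Intent dispatcher: map intent names to agent crews.
CREWS: Dict[str, List[Agent]] = {
    "disk_health": [DiskCheckAgent(), ReportAgent()],
}

def dispatch(intent: str) -> dict:
    if intent not in CREWS:
        raise ValueError(f"unknown intent: {intent}")
    return run_crew(CREWS[intent], {"intent": intent})
```

The design keeps routing declarative (a dict of intent names to crews), so adding a workflow means registering a new list of agents rather than editing control flow.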
## Security & Privacy

- Local-first operation: LLM prompts and context can stay local when using on-device models.
- Minimal telemetry: the default behavior avoids external logging of sensitive outputs; add an explicit opt-in if remote logging is required.
- Elevated actions: some tools require admin rights — the UI surfaces warnings and requests the appropriate permissions.
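A minimal sketch of what local-first operation looks like in code, assuming an Ollama server on its default localhost port; the model name and prompt template are placeholders, not the actual `llm/ollama_wrapper.py` implementation:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # local server only

def build_prompt(question: str, facts: dict) -> str:
    """Combine deterministic tool output with the user's question so the
    model reasons over locally gathered context only."""
    fact_lines = "\n".join(f"- {k}: {v}" for k, v in facts.items())
    return f"System facts:\n{fact_lines}\n\nQuestion: {question}\nAnswer:"

def ask_local(question: str, facts: dict, model: str = "llama3") -> str:
    """Send the prompt to the local Ollama server; nothing leaves the machine."""
    payload = json.dumps({
        "model": model,
        "prompt": build_prompt(question, facts),
        "stream": False,
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Because the endpoint is loopback-only and the context is assembled from deterministic tool output, no prompt or system data crosses the network boundary.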
## Extending SysSage

- Add a new tool: create a helper in `tools/`, write a small wrapper, and register it with an agent.
- Add a new agent: follow the pattern in `agents/` and expose a simple interface (start/stop/handle_intent).
- Swap LLM provider: implement a new adapter under `llm/` that matches the wrapper interface.
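A hypothetical skeleton for a new agent following the start/stop/handle_intent convention above; the concrete base class in `agents/` may look different:

```python
class TempFileAgent:
    """Example agent that reports temp-file usage when asked."""

    def __init__(self) -> None:
        self.running = False

    def start(self) -> None:
        self.running = True  # acquire resources, spawn watchers, etc.

    def stop(self) -> None:
        self.running = False  # release resources cleanly

    def handle_intent(self, intent: str, payload: dict) -> dict:
        if not self.running:
            return {"error": "agent not started"}
        if intent == "temp_usage":
            # A real implementation would scan the temp directory; stubbed here.
            return {"intent": intent, "status": "ok", "files_scanned": 0}
        return {"error": f"unsupported intent: {intent}"}
```

Keeping the interface to three methods means the crew runner can treat every agent uniformly, and unit tests can drive agents without any UI or LLM in the loop.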
## Development notes

- Recommended Python: 3.10+.
- Tests: add unit tests for tool logic (avoid calling LLMs in unit tests; mock the adapters).
- CI: run a small lint and test step (GitHub Actions recommended).
## Troubleshooting & common notes

- Streamlit launching issues: ensure your venv is active and `streamlit` is installed.
- Missing OS-level permissions: run PowerShell as Administrator for tasks that require system-level access.
- LLM adapter errors: confirm the local model server is running and the wrapper configuration matches your installation.
## Contributing & roadmap

- Open issues to suggest features or report bugs.
- Short-term roadmap: improve agent orchestration, add richer browser artifact parsing, and include more robust offline prompt templates.
## License

- Add a `LICENSE` file to the repository if you plan to open-source. MIT or Apache-2.0 are common choices.
## Contact

- Use the repository issue tracker for collaboration and questions.