Frontend & ML Engineer focusing on interactive AI experiences, multimodal interfaces, and real-time ML inference in the browser.
I combine advanced frontend engineering with practical machine learning to build applications where AI feels instant, visual, and intuitive, from WebGPU-powered inference to multimodal creative tools.
- Browser ML Inference: WebGPU, WebAssembly, transformers.js
- AI-Powered Interfaces: Real-time UI, generative design tools
- Multimodal UX: Text, image, audio, camera input, drawing
- Frontend Architecture: Next.js apps with streaming, RSC, and WebSockets
Core:
- TypeScript • React • Next.js 14 (App Router)
- TailwindCSS • Framer Motion
- Zustand • TanStack Query
Graphics & Visualization:
- Canvas API
- WebGL • Three.js (see the sketch after this list)
- WebGPU (experimental)
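For illustration, a minimal sketch of the kind of visual tooling this enables: rendering pre-projected embeddings as a Three.js point cloud. The `ProjectedPoint` type, the projection step, and the `renderEmbeddingCloud` helper are assumed names for this sketch, not an existing API.

```ts
import * as THREE from 'three';

// Hypothetical input: embeddings already reduced to 3D (e.g. PCA/UMAP done elsewhere).
type ProjectedPoint = { x: number; y: number; z: number };

export function renderEmbeddingCloud(canvas: HTMLCanvasElement, points: ProjectedPoint[]) {
  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, canvas.clientWidth / canvas.clientHeight, 0.1, 100);
  camera.position.z = 5;

  const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
  renderer.setSize(canvas.clientWidth, canvas.clientHeight, false);

  // Pack the projected coordinates into a flat Float32Array for BufferGeometry.
  const positions = new Float32Array(points.length * 3);
  points.forEach((p, i) => positions.set([p.x, p.y, p.z], i * 3));

  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
  const cloud = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.05 }));
  scene.add(cloud);

  // Slow rotation so the cluster structure stays readable.
  renderer.setAnimationLoop(() => {
    cloud.rotation.y += 0.002;
    renderer.render(scene, camera);
  });
}
```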
Real-Time UX:
- WebSockets
- Server-Sent Events
- Streaming UI (React Server Components)
Browser ML:
- transformers.js (LLMs & vision models)
- ONNX Runtime Web
- WebGPU / WASM inference (see the sketch after this list)
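As a concrete example of the browser-first approach, a minimal sketch that loads a transformers.js embedding pipeline on WebGPU when the browser exposes it and falls back to WASM otherwise. This assumes the `@huggingface/transformers` v3 `device` option; the model name is only an example.

```ts
import { pipeline } from '@huggingface/transformers';

// Pick WebGPU when the browser exposes it, otherwise fall back to the WASM backend.
const device = typeof navigator !== 'undefined' && 'gpu' in navigator ? ('webgpu' as const) : ('wasm' as const);

// Example small embedding model; swap in whatever the app actually ships.
const embedderPromise = pipeline('feature-extraction', 'Xenova/all-MiniLM-L6-v2', { device });

export async function embed(text: string): Promise<number[]> {
  const embedder = await embedderPromise;
  // Mean-pool token embeddings and normalize so the output is ready for cosine similarity.
  const output = await embedder(text, { pooling: 'mean', normalize: true });
  return Array.from(output.data as Float32Array);
}
```

Keeping the pipeline in a module-level promise means the model is downloaded and compiled once, then reused across calls.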
Python ML:
- PyTorch • Diffusers
- OpenCV • torchvision
- HuggingFace pipelines
AI Applications:
- Embeddings • Vector search
- RAG • LLM agents
- Multimodal processing (text, image, audio)
Backend:
- Python • FastAPI
- Node.js • Express
Databases:
- PostgreSQL • Redis
- Vector DBs: pgvector, ChromaDB (see the search sketch after this list)
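For the vector-search side, a minimal sketch using node-postgres with pgvector's cosine-distance operator. The `documents` table, its `embedding vector(384)` column, and `DATABASE_URL` are assumptions for illustration.

```ts
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Assumed schema: documents(id serial, content text, embedding vector(384)).
// pgvector's <=> operator is cosine distance, so ascending order = most similar first.
export async function semanticSearch(queryEmbedding: number[], limit = 5) {
  const vectorLiteral = `[${queryEmbedding.join(',')}]`;
  const { rows } = await pool.query(
    `SELECT id, content, embedding <=> $1::vector AS distance
       FROM documents
      ORDER BY embedding <=> $1::vector
      LIMIT $2`,
    [vectorLiteral, limit],
  );
  return rows;
}
```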
DevOps:
- Docker • GitHub Actions
- Metrics & monitoring basics

- Building client-side ML applications (WebGPU / transformers.js)
- Designing real-time multimodal UI
- Dynamic visual interfaces powered by embeddings/LLMs
- Browser-first inference, offline/edge ML workflows
- Next.js apps with responsive, high-performance UI
- Real-time interactions (WebSocket/SSE streaming; see the sketch after this list)
- Canvas/WebGL for interactive graphics
- UI for ML tasks: visualization, prompting, data inspection
- FastAPI APIs for ML inference
- Python pipelines for preprocessing and embeddings
- Vector search for semantic applications
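To make the streaming point concrete, a minimal sketch of a React hook that renders tokens as they arrive over Server-Sent Events. The `/api/generate` route and its `[DONE]` sentinel are assumptions about the backend contract, not a fixed API.

```ts
import { useEffect, useState } from 'react';

// Assumed contract: GET /api/generate?prompt=... emits one SSE message per token
// and a final "[DONE]" sentinel when generation finishes.
export function useStreamedCompletion(prompt: string): string {
  const [text, setText] = useState('');

  useEffect(() => {
    if (!prompt) return;
    setText('');
    const source = new EventSource(`/api/generate?prompt=${encodeURIComponent(prompt)}`);

    source.onmessage = (event) => {
      if (event.data === '[DONE]') {
        source.close();
        return;
      }
      setText((prev) => prev + event.data);
    };
    source.onerror = () => source.close();

    // Close the stream when the prompt changes or the component unmounts.
    return () => source.close();
  }, [prompt]);

  return text;
}
```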
Technical Stack:
Frontend:
- Next.js 14 • TypeScript
- Canvas & Three.js for visual tools
- WebSocket-based streaming
- Framer Motion animations
Browser ML:
- transformers.js for on-device LLM/Vision
- WebGPU acceleration
- ONNX Runtime Web for fallback
Backend:
- FastAPI • PyTorch
- Custom accelerated diffusion pipeline
- Redis cache • PostgreSQL storage
Key Features:
- Instant AI feedback with browser-based inference
- Drawing canvas + text + image → combined AI output
- Real-time visual previews with progressive rendering
- Hybrid engine: browser ML + Python API for heavy tasks (see the sketch after this list)
- Collaborative mode via WebSockets
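A minimal sketch of the hybrid idea: keep light tasks on-device when WebGPU is available and hand heavy ones to the Python backend. The `/api/infer` route, the task names, and the light/heavy split are assumptions; the on-device runner is passed in so this stays a pure routing sketch.

```ts
type InferenceRequest = { task: 'embed' | 'caption' | 'diffusion'; payload: unknown };

// Assumed split: diffusion is too heavy for the browser, everything else can run on-device.
const HEAVY_TASKS = new Set<InferenceRequest['task']>(['diffusion']);

export async function runInference(
  req: InferenceRequest,
  runOnDevice: (req: InferenceRequest) => Promise<unknown>, // e.g. a cached transformers.js pipeline
): Promise<unknown> {
  const hasWebGPU = typeof navigator !== 'undefined' && 'gpu' in navigator;

  // Light task + capable browser: stay fully client-side.
  if (hasWebGPU && !HEAVY_TASKS.has(req.task)) {
    return runOnDevice(req);
  }

  // Otherwise delegate to the server-side API (assumed FastAPI route).
  const res = await fetch('/api/infer', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Inference API failed: ${res.status}`);
  return res.json();
}
```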
Roadmap:
- WebGPU acceleration for LLM inference
- Add on-device multimodal pipelines (image/text/audio)
- WASM runtime for diffusion-lite model
- Fully streaming UI for AI apps (RSC)
- Advanced visualization tools (Canvas + WebGL)
- AI-driven UI personalization
- RAG chat with visual memory
- Smart prompting helpers inside UI
- Audio-to-visual creative mode
- CI/CD + automated preview builds
- Load testing for streaming features
- Vector database improvements


