
Project case study

Qumra

Published Feb 8, 2026


Implementation notes

Why I built it

I used Loom for async communication, but the premium plan was not worth its price for my workflow. I built Qumra as a full end-to-end alternative focused on recording, sharing, and editing video quickly.

Scope and ownership

I built the entire product myself:

  • Branding, landing page, product design, and implementation.
  • macOS recording desktop app in Electron.
  • Web dashboard and video editor in Next.js.
  • Backend and database in Convex.
  • Auth with Better Auth (email OTP and OAuth with Google/Apple).
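For reference, a Better Auth setup combining email OTP with Google/Apple OAuth looks roughly like the sketch below; the environment variable names and the sendEmail helper are placeholders, not the project's actual configuration.

```ts
import { betterAuth } from "better-auth";
import { emailOTP } from "better-auth/plugins";

export const auth = betterAuth({
  socialProviders: {
    google: {
      clientId: process.env.GOOGLE_CLIENT_ID!,
      clientSecret: process.env.GOOGLE_CLIENT_SECRET!,
    },
    apple: {
      clientId: process.env.APPLE_CLIENT_ID!,
      clientSecret: process.env.APPLE_CLIENT_SECRET!,
    },
  },
  plugins: [
    emailOTP({
      // Deliver the one-time code through whatever email provider is configured.
      async sendVerificationOTP({ email, otp }) {
        await sendEmail(email, `Your Qumra sign-in code: ${otp}`);
      },
    }),
  ],
});

// Placeholder for the actual email delivery integration.
async function sendEmail(to: string, body: string): Promise<void> {
  console.log(`(stub) email to ${to}: ${body}`);
}
```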

Core product flows

  • macOS desktop download and sign-in.
  • Recording modes: screen, window, screen + camera, and selected screen region with aspect options.
  • Upload and distribution through Mux.
  • Video sharing and post-record editing in a web editor.
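For the Mux upload step above, a backend endpoint typically mints a direct upload URL that the desktop app PUTs recorded media to. A minimal sketch using the official @mux/mux-node SDK (the settings shown are illustrative defaults, not the actual configuration):

```ts
import Mux from "@mux/mux-node";

// Reads MUX_TOKEN_ID / MUX_TOKEN_SECRET from the environment.
const mux = new Mux();

export async function createUploadUrl() {
  const upload = await mux.video.uploads.create({
    cors_origin: "*",
    new_asset_settings: { playback_policy: ["public"] },
  });
  // The desktop app uploads recorded media to upload.url; Mux then creates the asset.
  return { uploadId: upload.id, url: upload.url };
}
```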

Hard technical problems and architecture decisions

1) System audio capture on macOS

Capturing system audio was one of the hardest parts, so I dropped down to native code: a Rust binary built on ScreenCaptureKit, invoked from Electron to integrate system-audio recording into the app workflow.
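A minimal sketch of how such a helper can be driven from the Electron main process; the binary name and CLI flags are hypothetical, not the actual ones:

```ts
// Runs in Electron's main process; the Rust helper owns the ScreenCaptureKit session.
import { spawn } from "node:child_process";
import path from "node:path";

export function startSystemAudioCapture(outputPath: string) {
  // Hypothetical packaged binary name.
  const binary = path.join(process.resourcesPath, "qumra-audio-capture");

  const child = spawn(binary, ["--output", outputPath], {
    stdio: ["ignore", "pipe", "pipe"],
  });
  child.stderr.on("data", (chunk) => console.error(`[audio-capture] ${chunk}`));

  return {
    // SIGINT lets the helper flush buffers and finalize the file before exiting.
    stop: () => child.kill("SIGINT"),
    exited: new Promise<number | null>((resolve) => child.once("exit", resolve)),
  };
}
```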

2) Recording performance and low FPS

I hit low-FPS issues in some recordings. To fix this, I split responsibilities:

  • Recording process focuses only on capture.
  • A separate invoker/worker pipeline handles chunking and uploads to Mux.

Decoupling those workloads removed the recording-performance issues I was seeing.
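A minimal sketch of that split, assuming the upload side lives in an Electron utility process; the message shape and file names are illustrative:

```ts
// Main process: the capture path only records and hands off finalized chunk paths;
// a dedicated utility process owns chunking/upload work.
import { utilityProcess } from "electron";
import path from "node:path";

// One upload worker per recording session.
const uploader = utilityProcess.fork(path.join(__dirname, "upload-worker.js"));

// Called by the recording pipeline whenever a chunk is finalized on disk.
// Fire-and-forget, so capture never blocks on network or encoding work.
export function enqueueChunk(chunkPath: string, uploadUrl: string) {
  uploader.postMessage({ type: "upload-chunk", chunkPath, uploadUrl });
}

// upload-worker.js runs outside the recording process and does the slow work:
//
//   process.parentPort.on("message", async ({ data }) => {
//     if (data.type === "upload-chunk") {
//       await putChunkToMux(data.chunkPath, data.uploadUrl); // e.g. PUT to a Mux direct upload URL
//     }
//   });
```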

Video editor implementation

I built a focused editor designed for practical cleanup rather than full studio editing:

  • Timeline with audio/video operations (split, cut, mute ranges).
  • Transcript-synced navigation (click a word to jump to exact timing; sketched after this list).
  • Selection-based editing from transcript or timeline ranges.
  • Keyboard shortcuts for fast editing.
  • Remotion player and Remotion Lambda rendering pipeline.
  • Save flow produces a newly rendered video plus updated transcript.
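As an illustration of the transcript-synced navigation item, here is a minimal click-to-seek sketch against the Remotion player, assuming word-level timestamps from the transcript; the TranscriptWord shape and fixed FPS are illustrative:

```ts
import type { RefObject } from "react";
import type { PlayerRef } from "@remotion/player";

interface TranscriptWord {
  text: string;
  startMs: number; // word start time taken from the transcript
}

const FPS = 30;

// Clicking a word converts its timestamp to a frame and seeks the Remotion player.
export function jumpToWord(player: RefObject<PlayerRef>, word: TranscriptWord) {
  const frame = Math.round((word.startMs / 1000) * FPS);
  player.current?.seekTo(frame);
}
```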

AI-assisted features

After transcript generation, I run AI workflows (invoked through Convex) for:

  • Title suggestions.
  • Captions.
  • Chapters.
  • Summary.
  • Command-based edits (for example removing filler words or muting selected phrases).

I stream AI edit results back into the UI as changes are applied. I also evaluate different models and track quality outcomes with PostHog.
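A minimal sketch of what one of these Convex-invoked workflows can look like: an internal action calls the model and writes results back through a mutation, which is also how partial results surface reactively in the dashboard. The function and table names ("videos", "suggestTitle", "saveTitle") are hypothetical:

```ts
// convex/ai.ts — a sketch, not the actual Qumra code.
import { v } from "convex/values";
import { internalAction, internalMutation } from "./_generated/server";
import { internal } from "./_generated/api";

export const suggestTitle = internalAction({
  args: { videoId: v.id("videos"), transcript: v.string() },
  handler: async (ctx, { videoId, transcript }) => {
    // Call whichever model provider is configured; model choice is evaluated separately.
    const title = await callModel(`Suggest a short title for this recording:\n${transcript}`);
    // Persist the result; dashboard queries pick it up reactively as it lands.
    await ctx.runMutation(internal.ai.saveTitle, { videoId, title });
  },
});

export const saveTitle = internalMutation({
  args: { videoId: v.id("videos"), title: v.string() },
  handler: async (ctx, { videoId, title }) => {
    await ctx.db.patch(videoId, { suggestedTitle: title });
  },
});

// Placeholder for the actual provider call.
async function callModel(prompt: string): Promise<string> {
  return `Untitled recording (${prompt.length} chars of context)`;
}
```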

Data and state behavior

  • Edit history is persisted through Convex with optimistic updates.
  • Temporary edit state is kept during active editing and cleared after final render.
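A minimal sketch of the optimistic-update side with Convex React; api.edits.* and the edit shape are hypothetical, not the actual schema:

```ts
import { useMutation } from "convex/react";
import { api } from "../convex/_generated/api";
import type { Id } from "../convex/_generated/dataModel";

export function useApplyEdit() {
  return useMutation(api.edits.apply).withOptimisticUpdate((localStore, args) => {
    const existing = localStore.getQuery(api.edits.list, { videoId: args.videoId });
    if (existing === undefined) return;
    // Show the edit in the timeline immediately with a temporary id; Convex swaps
    // in the confirmed server state (or rolls back) once the mutation settles.
    localStore.setQuery(api.edits.list, { videoId: args.videoId }, [
      ...existing,
      { ...args.edit, _id: crypto.randomUUID() as Id<"edits">, pending: true },
    ]);
  });
}
```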

Quality and testing approach

I introduced Playwright and Vitest tests for logic that is harder to validate through regular manual usage (for example low-frequency or environment-dependent flows), while keeping day-to-day iteration speed high.
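A minimal Vitest sketch of the kind of logic that benefits from this, for example normalizing overlapping mute ranges on the timeline; mergeRanges here is an illustrative helper, not the actual code:

```ts
import { describe, expect, it } from "vitest";

interface Range { startMs: number; endMs: number }

// Merge overlapping mute ranges into a normalized, sorted list.
function mergeRanges(ranges: Range[]): Range[] {
  const sorted = [...ranges].sort((a, b) => a.startMs - b.startMs);
  const merged: Range[] = [];
  for (const r of sorted) {
    const last = merged[merged.length - 1];
    if (last && r.startMs <= last.endMs) {
      last.endMs = Math.max(last.endMs, r.endMs);
    } else {
      merged.push({ ...r });
    }
  }
  return merged;
}

describe("mergeRanges", () => {
  it("collapses overlapping mute ranges into one", () => {
    expect(
      mergeRanges([
        { startMs: 2500, endMs: 4000 },
        { startMs: 1000, endMs: 3000 },
      ])
    ).toEqual([{ startMs: 1000, endMs: 4000 }]);
  });
});
```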

What I learned

  • Product speed is not only about coding fast; it is about isolating heavy workloads so core user actions stay smooth.
  • In solo product work, a constrained editor with strong UX can beat a feature-heavy editor that is hard to maintain.
  • Sustainable pace matters. Building end-to-end taught me to optimize for long-term execution, not just short bursts.