mvp · active · 2026-04 → 2026-04

Parlay

macOS menu bar app for real-time language learning — speak, get instant AI feedback

Swift · SwiftUI · AppKit · WhisperKit · OpenAI · macOS

Press Cmd+Shift+L, speak, and Parlay figures out what you need: a translation, a correction of what you just said, or a pronunciation breakdown. The result floats over whatever app you’re in, speaks itself aloud, and disappears.
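The global-hotkey part of that flow could be sketched with an `NSEvent` monitor. This is an illustrative reconstruction, not Parlay's actual code — `HotkeyListener` and the `onTrigger` callback are hypothetical names, and a shipping app might use Carbon's `RegisterEventHotKey` instead, since global monitors require the Accessibility permission:

```swift
import AppKit

// Sketch of a global Cmd+Shift+L listener using NSEvent monitoring.
// Note: global monitors only see events delivered to *other* apps,
// which is exactly what a system-wide hotkey needs.
final class HotkeyListener {
    private var monitor: Any?

    func start(onTrigger: @escaping () -> Void) {
        monitor = NSEvent.addGlobalMonitorForEvents(matching: .keyDown) { event in
            let wanted: NSEvent.ModifierFlags = [.command, .shift]
            // keyCode 37 is the "L" key on ANSI keyboards.
            if event.modifierFlags.intersection(.deviceIndependentFlagsMask) == wanted,
               event.keyCode == 37 {
                onTrigger()  // e.g. start recording audio for transcription
            }
        }
    }

    func stop() {
        if let monitor { NSEvent.removeMonitor(monitor) }
        self.monitor = nil
    }
}
```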

Modes

Parlay picks the mode from the utterance itself — no toggling.

  • Translation — say a sentence in your native language, get it in the target language with examples.
  • Correction — practice a sentence in the target language; Parlay flags mistakes and explains them in your native language.
  • Pronunciation — ask “how do I say X?” or just say one word; get IPA phonetics and a sound-by-sound breakdown.
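The no-toggle routing described above could start with a cheap heuristic before (or alongside) asking the LLM to classify. Everything in this sketch — the `Mode` enum, the cue phrases, and the stubbed language check — is a hypothetical reconstruction, not Parlay's actual logic:

```swift
import Foundation

// Hypothetical sketch of utterance-based mode routing.
enum Mode { case translation, correction, pronunciation }

func detectMode(utterance: String, targetLanguage: String) -> Mode {
    let text = utterance.lowercased()

    // Explicit pronunciation asks ("how do I say X?") or single words.
    let pronunciationCues = ["how do i say", "how do you say", "pronounce"]
    if pronunciationCues.contains(where: text.contains) || !text.contains(" ") {
        return .pronunciation
    }

    // If the utterance is already in the target language, treat it as
    // practice to be corrected; otherwise translate it.
    return isLikely(text, language: targetLanguage) ? .correction : .translation
}

// Stub language check so the sketch stays self-contained — a real app
// would use NLLanguageRecognizer from the NaturalLanguage framework.
func isLikely(_ text: String, language: String) -> Bool {
    let frenchMarkers = ["je ", "le ", "la ", "est-ce", "ne "]
    return language == "fr" && frenchMarkers.contains(where: text.contains)
}
```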

Overlay + history

Responses render in a floating HUD on top of any app, with a play button for TTS and color-coded mode badges. Hidden from screen sharing by default. Every query is logged to a searchable history window grouped by date.
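The floating and screen-sharing behavior comes down to a handful of window properties. A minimal sketch of the likely setup, assuming an `NSPanel`-based HUD (the function name and styling choices here are illustrative):

```swift
import AppKit

// Sketch: a borderless panel that floats above other apps and is
// excluded from screen sharing / screen capture.
func makeOverlayPanel(contentRect: NSRect) -> NSPanel {
    let panel = NSPanel(
        contentRect: contentRect,
        styleMask: [.borderless, .nonactivatingPanel],
        backing: .buffered,
        defer: false
    )
    panel.level = .floating        // stay above normal windows
    panel.sharingType = .none      // hidden from screen sharing by default
    panel.collectionBehavior = [.canJoinAllSpaces, .fullScreenAuxiliary]
    panel.isOpaque = false
    panel.backgroundColor = .clear
    panel.hidesOnDeactivate = false
    return panel
}
```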

Stack

Swift + SwiftUI + AppKit for the menu bar and overlay chrome, WhisperKit for on-device speech-to-text, OpenAI for the language model calls, AVFoundation for TTS playback. Distributed as a DMG.
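The capture → transcribe → LLM hop might look roughly like this. The WhisperKit call follows the shape shown in its README (signatures vary between versions), the chat-completions request is the standard OpenAI HTTP API, and the model name and system prompt are placeholders, not Parlay's actual values:

```swift
import Foundation
import WhisperKit  // https://github.com/argmaxinc/WhisperKit

// Sketch of the pipeline: on-device speech-to-text, then a text-only
// call to the OpenAI API — the audio itself never leaves the machine.
func handleUtterance(audioPath: String, apiKey: String) async throws -> String {
    // 1. Transcribe locally with WhisperKit.
    let whisper = try await WhisperKit()
    let results = try await whisper.transcribe(audioPath: audioPath)
    let transcript = results.map(\.text).joined(separator: " ")

    // 2. Send only the transcribed text to the chat completions endpoint.
    var request = URLRequest(url: URL(string: "https://api.openai.com/v1/chat/completions")!)
    request.httpMethod = "POST"
    request.setValue("Bearer \(apiKey)", forHTTPHeaderField: "Authorization")
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONSerialization.data(withJSONObject: [
        "model": "gpt-4o-mini",  // placeholder model name
        "messages": [
            ["role": "system",
             "content": "Detect intent (translate / correct / pronounce) and respond accordingly."],
            ["role": "user", "content": transcript],
        ],
    ])
    let (data, _) = try await URLSession.shared.data(for: request)
    // Response parsing elided; the HUD would render the assistant message
    // and hand it to AVFoundation for TTS playback.
    return String(decoding: data, as: UTF8.self)
}
```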

Learnings

  • Auto-detecting user intent (translate vs correct vs pronounce) from the utterance itself beats forcing a mode toggle — one hotkey does everything
  • A Dynamic Island-style floating HUD with a persistent header feels more native on macOS than a traditional popover attached to the menu bar
  • Hiding the overlay from screen sharing via NSWindow sharingType is a one-line fix that makes the app genuinely usable during meetings
  • Running WhisperKit on-device keeps audio off the cloud entirely — only the transcribed text ever hits the OpenAI API