AI-Powered Communication

Every voice deserves to be heard.

AACAI builds free, open-source AI tools for Augmentative & Alternative Communication. Not for profit. For the people who depend on it.

Explore Projects · Learn More

Free, browser-based tools. No installs, no accounts.

Our Vision

Until every voice is fully heard, we're not done.

Millions rely on AAC to express themselves. Too often, the tools are slow, expensive, or impersonal. We believe AI can change that. AACAI builds free, working prototypes that prove it.

Working Prototypes

Real tools you can use today, not concepts. Each project ships as a free, browser-based application anyone can open.

Built in the Open

Every project is developed publicly and improved through feedback from AAC users, clinicians, and developers.

AI-First Design

Each project explores how AI can make communication faster, more natural, and more personal for AAC users.

Projects

What we're building.

Each project tackles a real challenge in AAC — shipping as a working tool, not a whitepaper.

Live Prototype

Eye Gaze AAC Board

A full communication board with vocabulary categories, spell mode, text-to-speech, and adaptive word prediction — all navigable by eye tracking with customizable dwell time and visual feedback.

Designed for people with ALS, cerebral palsy, locked-in syndrome, and other motor impairments. Also for speech-language pathologists (SLPs), caregivers, and researchers evaluating AI-enhanced AAC.

Open Prototype

How It Works

From gaze to speech in five stages.

Each eye movement passes through a real-time pipeline — calibrated, smoothed, snapped, confirmed, and spoken.

01

Gaze Capture & Calibration

A 9-point calibration maps raw tracker coordinates to screen positions using least-squares regression.
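A least-squares calibration of this kind can be sketched as an affine fit from raw tracker coordinates to screen coordinates, solved through the normal equations. This is an illustrative sketch, not the project's actual implementation; the function names and the affine model are assumptions.

```typescript
type Pt = { x: number; y: number };

// Solve a 3x3 linear system A*w = b by Gauss-Jordan elimination with
// partial pivoting. Small and dependency-free for a calibration this size.
function solve3(A: number[][], b: number[]): number[] {
  const M = A.map((row, i) => [...row, b[i]]);
  for (let col = 0; col < 3; col++) {
    let piv = col;
    for (let r = col + 1; r < 3; r++) {
      if (Math.abs(M[r][col]) > Math.abs(M[piv][col])) piv = r;
    }
    [M[col], M[piv]] = [M[piv], M[col]];
    for (let r = 0; r < 3; r++) {
      if (r === col) continue;
      const f = M[r][col] / M[col][col];
      for (let c = col; c < 4; c++) M[r][c] -= f * M[col][c];
    }
  }
  return [M[0][3] / M[0][0], M[1][3] / M[1][1], M[2][3] / M[2][2]];
}

// Fit screenX = a*x + b*y + c (and likewise for Y) over the nine
// calibration pairs by minimizing squared error: X^T X w = X^T t,
// where each row of X is [rawX, rawY, 1].
function fitAffine(raw: Pt[], screen: Pt[]): (p: Pt) => Pt {
  const XtX = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  const Xtx = [0, 0, 0];
  const Xty = [0, 0, 0];
  for (let i = 0; i < raw.length; i++) {
    const row = [raw[i].x, raw[i].y, 1];
    for (let a = 0; a < 3; a++) {
      for (let b = 0; b < 3; b++) XtX[a][b] += row[a] * row[b];
      Xtx[a] += row[a] * screen[i].x;
      Xty[a] += row[a] * screen[i].y;
    }
  }
  const wx = solve3(XtX, Xtx);
  const wy = solve3(XtX, Xty);
  return (p: Pt): Pt => ({
    x: wx[0] * p.x + wx[1] * p.y + wx[2],
    y: wy[0] * p.x + wy[1] * p.y + wy[2],
  });
}
```

A real tracker also carries noise and nonlinearity near screen edges, which is why calibration uses nine points rather than the three an affine map strictly needs.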

02

Adaptive Smoothing

A velocity-adaptive filter smooths jitter when still but stays responsive during fast saccades.
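The idea behind a velocity-adaptive filter can be sketched as an exponential smoother whose blend factor grows with gaze velocity (similar in spirit to the One Euro filter). The constants and names here are illustrative assumptions, not the project's tuned values.

```typescript
// Velocity-adaptive low-pass filter: a still gaze gets heavy smoothing
// (small alpha), a fast saccade passes through almost unfiltered
// (alpha near maxAlpha). One instance per axis.
function makeAdaptiveFilter(minAlpha = 0.05, maxAlpha = 0.9, gain = 2.0) {
  let prev: number | null = null;
  return (sample: number): number => {
    if (prev === null) {
      prev = sample;           // first sample passes through unchanged
      return sample;
    }
    const velocity = Math.abs(sample - prev);
    // Blend factor rises with velocity, capped at maxAlpha.
    const alpha = Math.min(maxAlpha, minAlpha + gain * velocity);
    prev = prev + alpha * (sample - prev);
    return prev;
  };
}
```

This is the trade-off the stage describes: jitter while fixating is averaged away, but a saccade across the board is not smeared into a slow glide.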

03

Snap-to-Grid

The cursor magnetically snaps to the nearest target. Hysteresis prevents flickering between buttons.
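Snapping with hysteresis can be sketched as nearest-target selection that refuses to switch targets until a new button is clearly closer than the current one. The 1.2 ratio and the shape of the API are assumptions for illustration.

```typescript
type Target = { id: string; x: number; y: number };

// Snap the gaze point to the nearest button center. The hysteresis
// ratio keeps the current target "sticky": near the midpoint between
// two buttons, the snap does not flicker back and forth.
function makeSnapper(targets: Target[], hysteresis = 1.2) {
  let current: Target | null = null;
  return (gx: number, gy: number): Target => {
    const dist = (t: Target) => Math.hypot(gx - t.x, gy - t.y);
    let nearest = targets[0];
    for (const t of targets) if (dist(t) < dist(nearest)) nearest = t;
    // Stay on the current target unless the nearest one wins by margin.
    if (current && nearest.id !== current.id &&
        dist(current) <= dist(nearest) * hysteresis) {
      return current;
    }
    current = nearest;
    return current;
  };
}
```

With a ratio of 1.2, the gaze must land noticeably inside a neighboring button's territory before the highlight moves, which is what prevents flicker at cell boundaries.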

04

Dwell Selection

Stability detection confirms intentional holds. Adaptive timing fires faster when gaze is perfectly still.
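Dwell selection with adaptive timing can be sketched as a timer that restarts when gaze moves to a new target and fires sooner when the gaze barely drifts. The dwell durations and stability radius below are illustrative assumptions, not the app's defaults.

```typescript
// Dwell-to-select: returns the target id once the gaze has held on it
// long enough, or null otherwise. A gaze that stays within
// stableRadius of its anchor point earns the shorter dwell time.
function makeDwellDetector(baseDwellMs = 800, fastDwellMs = 500, stableRadius = 5) {
  let targetId: string | null = null;
  let startMs = 0;
  let maxDrift = 0;
  let anchor = { x: 0, y: 0 };
  return (id: string, x: number, y: number, nowMs: number): string | null => {
    if (id !== targetId) {
      // New target: restart the dwell timer and the stability window.
      targetId = id;
      startMs = nowMs;
      maxDrift = 0;
      anchor = { x, y };
      return null;
    }
    maxDrift = Math.max(maxDrift, Math.hypot(x - anchor.x, y - anchor.y));
    const required = maxDrift <= stableRadius ? fastDwellMs : baseDwellMs;
    if (nowMs - startMs >= required) {
      targetId = null;  // reset so the selection fires exactly once
      return id;
    }
    return null;
  };
}
```

The point of the adaptive threshold is ergonomic: a deliberate, rock-steady fixation should not have to wait out the full dwell time tuned for noisier users.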

05

AI Prediction & Speech

Claude-powered word prediction suggests next words. TTS generates natural speech with local fallbacks.
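The "local fallbacks" part of this stage amounts to trying providers in priority order and degrading gracefully when one fails. This is a generic sketch of that pattern; the provider names are hypothetical, and the real pipeline would use async calls to its cloud and on-device backends.

```typescript
// Try each provider in order; return the first successful result.
// Kept synchronous for clarity; real TTS/prediction providers would
// be async and awaited the same way.
function withFallback<T>(providers: Array<() => T>): T {
  let lastErr: unknown = new Error("no providers configured");
  for (const p of providers) {
    try {
      return p();
    } catch (e) {
      lastErr = e;  // remember the failure, keep trying the next provider
    }
  }
  throw lastErr;
}

// Hypothetical usage: prefer the cloud model, fall back to a local one.
// withFallback([cloudPredictNextWords, localNgramPredict]);
```

The same chain covers speech output: a cloud voice first, then the device's built-in synthesizer, so the board still speaks when offline.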

Communication Modalities

Symbol-Based Systems · Gesture Recognition · Speech-Generating Devices · Eye-Gaze Tracking · Brain-Computer Interfaces · Text-to-Speech · Predictive Language Models

Get Involved

Stay in the loop.

Get updates on new projects, prototypes, and ways to contribute. No spam, ever.


We respect your privacy. Unsubscribe anytime.