$ cat speakeasy.json
title: SpeakEasy
category: research
client: Master of Design in Experience Design, SJSU
date: Oct 14, 2024
stack: Voice-Driven AI, XR Accessibility, Inclusive Design

Project Overview
SpeakEasy is an immersive research and design project that reimagines Extended Reality (XR) for a more inclusive future. By integrating a voice-driven AI interface into XR environments, SpeakEasy addresses the interaction barriers faced by users with low muscle tone and other motor impairments. The project is dedicated to making XR more accessible, intuitive, and engaging.
Situation
• Overview
Extended Reality holds transformative promise, but it often remains accessible only to the tech-savvy. Current XR systems typically require precise physical inputs and complex navigation, excluding users with disabilities, particularly those with limited motor control.
• Challenge
Users with low muscle tone (hypotonia) struggle with conventional XR interfaces that depend on manual interaction and intricate gesture controls.
• Context
Despite rapid advancements in XR hardware and software, many experiences lack the simplicity and naturalness required for universal accessibility.
Task
• Overview
The core objective was to design an XR experience that minimizes physical strain while keeping interaction natural and intuitive. I set out to develop a voice-driven AI system, SpeakEasy, that would:
• Reduce physical interaction
by shifting control to natural language commands.
• Personalize the user experience
using adaptive AI to respond to individual needs (a sketch of such a profile follows this list).
• Facilitate inclusive design
that benefits users across a wide spectrum of abilities.
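To make the personalization goal concrete, the sketch below outlines what a per-user interaction profile could look like. It is a hypothetical illustration in C# (the language of the later Unity3D prototype); the field names, defaults, and synonym pairs are assumptions, not SpeakEasy's actual data model.

```csharp
using System.Collections.Generic;

// Hypothetical per-user profile for adaptive voice interaction.
// Field names and defaults are illustrative assumptions, not
// SpeakEasy's actual data model.
public class InteractionProfile
{
    // Minimum recognizer confidence before a command executes;
    // lowering it trades precision for less repetition by the user.
    public float ConfidenceThreshold = 0.6f;

    // How long the system waits for the user to finish a phrase.
    public float EndOfSpeechTimeoutSeconds = 1.5f;

    // Pair spoken responses with haptic and visual cues so feedback
    // never depends on a single sense.
    public bool HapticFeedback = true;
    public bool CaptionedResponses = true;

    // Map each user's preferred phrasing onto the canonical command
    // set, so no one is forced to memorize exact wording.
    public Dictionary<string, string> Synonyms = new Dictionary<string, string>
    {
        { "grab that", "select" },
        { "take me home", "open menu" }
    };
}
```

An adaptive layer would tune these values over time, for example raising the end-of-speech timeout for users who speak slowly or pause mid-phrase.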
Action
1. Research & Literature Review
I conducted a comprehensive review of XR accessibility challenges, integrating insights from academic sources, industry trends, and competitive analyses.
2. Co-Design Workshops
Engaging with target users and experts, I organized iterative workshops that enabled real-time feedback and guided the evolution of the interface design.
3. Prototype Development
Beginning with early sketches and wireframes in ShapesXR, I moved through rapid iterative cycles, transitioning to a Unity3D prototype that incorporates voice recognition and multimodal feedback (see the sketch after this list).
4. Feedback & Iteration
Continuous testing, supported by participant recruitment and structured field protocols, allowed for refinements in interaction logic and overall system adaptability.
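To ground the prototype description, here is a minimal sketch of the voice-command loop using Unity's built-in KeywordRecognizer (UnityEngine.Windows.Speech, available on Windows builds). The phrases and handlers are illustrative placeholders rather than the prototype's actual command set.

```csharp
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.Windows.Speech;

// Minimal voice-command controller: maps a small, fixed phrase set to
// XR actions. Phrases and handlers are illustrative placeholders.
public class VoiceCommandController : MonoBehaviour
{
    private Dictionary<string, System.Action> commands;
    private KeywordRecognizer recognizer;

    void Start()
    {
        commands = new Dictionary<string, System.Action>
        {
            { "select",    () => Debug.Log("Select the focused object") },
            { "go back",   () => Debug.Log("Return to the previous view") },
            { "open menu", () => Debug.Log("Open the main menu") }
        };

        // A small, fixed grammar keeps recognition robust and avoids
        // free-form dictation, which is much harder to recognize reliably.
        recognizer = new KeywordRecognizer(new List<string>(commands.Keys).ToArray());
        recognizer.OnPhraseRecognized += OnPhraseRecognized;
        recognizer.Start();
    }

    private void OnPhraseRecognized(PhraseRecognizedEventArgs args)
    {
        // Run the action bound to the recognized phrase; the real
        // prototype would follow up with audio, haptic, and visual feedback.
        if (commands.TryGetValue(args.text, out var action))
            action();
    }

    void OnDestroy()
    {
        if (recognizer != null)
        {
            if (recognizer.IsRunning) recognizer.Stop();
            recognizer.Dispose();
        }
    }
}
```

Restricting recognition to a short, predictable command set is a deliberate accessibility choice: brief phrases are easier to speak and more reliable to recognize than open dictation. On non-Windows XR targets, the same pattern applies with a platform-specific speech backend.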
Result
• Overview
The iterative process has yielded a promising framework that demonstrates:
• Enhanced Accessibility
Early user tests indicate a marked reduction in physical interaction requirements, allowing individuals with low muscle tone to navigate XR environments more comfortably.
• Improved User Engagement
Participants reported a more natural and engaging experience, with voice commands enabling quicker, more intuitive interactions compared to traditional input methods.
• A Scalable Framework
The success of SpeakEasy's initial iterations lays the groundwork for broader applications in XR, supporting further research into complementary modalities like gesture recognition.
Exhibition & Future Directions
• Interactive Stations
Live demonstrations of the prototype will let visitors experience the voice-driven interface firsthand.
• User Testimonials
Video recordings and written feedback from participants will highlight the impact of the design on accessibility and usability.
• Future Concepts
Insights into planned enhancements, including advanced gesture recognition and deeper personalization features.