Human-AI Interaction
Accessibility
Tools for Thought

I create AI-powered interactive systems that solve real-world problems.
Most of my work is related to accessibility, ensuring AI empowers everyone.
My recent work at Google DeepMind focuses on tools for thought, building AI that aids human cognition.


Selected Projects
A full list of publications is available in my CV.
Bespoke Interfaces
Imagine LLMs crafting user interfaces tailored to your exact needs. Our team is exploring this frontier by teaching LLMs to understand user intent, design intuitive interface structures, and build Flutter user interfaces. This capability could reshape how we interact with digital information; for example, the model can generate interfaces adapted to people with different abilities.
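To make the idea concrete, here is a minimal sketch of the intent-to-interface flow, assuming a hypothetical `call_llm` client and an illustrative JSON widget schema (neither reflects the actual system or its prompts):

```python
# Minimal sketch: ask an LLM for a structured interface spec that a Flutter
# client could render. `call_llm` is a hypothetical stand-in for an LLM client;
# the JSON schema below is illustrative only.
import json

UI_PROMPT = (
    "You are a UI designer. Given a user's request and their accessibility "
    "needs, return a JSON object with a 'widgets' list; each widget has "
    "'type', 'label', 'font_scale', and 'high_contrast' fields."
)

def generate_interface_spec(intent: str, needs: str, call_llm) -> dict:
    """Ask the model for a structured spec tailored to the user's intent and needs."""
    response = call_llm(f"{UI_PROMPT}\n\nRequest: {intent}\nNeeds: {needs}")
    return json.loads(response)

# Example usage (with any LLM client that returns a JSON string):
# spec = generate_interface_spec(
#     "Show today's bus schedule", "low vision, prefers large text", my_llm)
# for widget in spec["widgets"]:
#     print(widget["type"], widget["label"], widget["font_scale"])
```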
Inkeraction: An Interaction Modality Powered by Ink Recognition and Synthesis
We introduce a new interaction modality that understands and manipulates ink objects like handwriting and sketches. By recognizing patterns and relationships, Inkeraction enables features such as smart editing, proofreading, and automated transcription. It also opens up creative possibilities by allowing users to generate new ink content based on existing strokes.
LaMPost: LLM-Powered Writing Assistant for Adults with Dyslexia
People with dyslexia often face significant challenges with writing. While traditional tools have helped, the advent of LLMs offers new possibilities. We developed LaMPost, an AI-powered email-writing assistant designed specifically for adults with dyslexia, with features such as outlining, subject line generation, and text rewriting.
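As a rough illustration of that feature set, the sketch below uses placeholder prompt templates and a hypothetical `call_llm` client; LaMPost's actual prompts and model are not reproduced here:

```python
# Illustrative prompt templates for the three writing-support features.
# `call_llm` is a hypothetical LLM client; the templates are placeholders.
PROMPTS = {
    "outline": "List the main points the writer wants to make in this email draft:\n{draft}",
    "subject": "Suggest a short, clear subject line for this email draft:\n{draft}",
    "rewrite": "Rewrite this email draft so it is clear and well organized, "
               "keeping the writer's meaning and voice:\n{draft}",
}

def assist(feature: str, draft: str, call_llm) -> str:
    """Run one of the writing-support features ('outline', 'subject', 'rewrite')."""
    return call_llm(PROMPTS[feature].format(draft=draft))
```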
Action Blocks: Mobile Shortcuts for People with Cognitive Disabilities
Mobile devices are essential tools in our modern world, yet they can be challenging for people with cognitive disabilities. To address this, we created Action Blocks, an Android app that provides one-touch access to frequently used functions. Users can customize Action Blocks with personalized commands and images, making them easy to use. Action Blocks launched in 2020.
Molder: Enabling People with Visual Impairments to Design Maps
Tactile maps are essential for people with visual impairments, but creating them is challenging due to the specialized skills required. To empower more people to design these vital tools, we developed Molder, a design tool that allows users to design tactile maps using tangible input techniques, auditory feedback, and high-contrast visual feedback.
Designing Interactive 3D Printed Models with Teachers of the Visually Impaired
We collaborated with Teachers of the Visually Impaired (TVIs) to explore the design guidelines for interactive 3D models. By conducting workshops and iterative design processes, we identified that successful interactive 3D models should (1) have effective tactile features, (2) contain both auditory and visual content, and (3) consider pedagogical methods (e.g., overview before details).
Accessible Video Calling: Enabling Nonvisual Perception of Visual Conversation Cues
We developed a system to empower people with visual impairments to fully participate in video conversations. The system detects visual cues such as Attention, Agreement, Disagreement, Happiness, Thinking, and Surprise, and translates them into intuitive audio signals. We designed the audio signals in partnership with people who are blind or low-vision. To evaluate the system, we conducted a user study with 16 participants.
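The gist of the cue-to-audio mapping can be sketched as a simple lookup; the tone parameters below are made up for illustration and are not the signals designed with our blind and low-vision partners:

```python
# Toy sketch of the cue-to-audio mapping: each detected visual cue is turned
# into a short, distinct earcon. The cue detector and audio player are assumed
# interfaces, and the tone choices are illustrative placeholders.
CUE_TO_EARCON = {
    "Attention":    {"pitch_hz": 880,  "duration_s": 0.15},
    "Agreement":    {"pitch_hz": 660,  "duration_s": 0.15},
    "Disagreement": {"pitch_hz": 330,  "duration_s": 0.30},
    "Happiness":    {"pitch_hz": 990,  "duration_s": 0.10},
    "Thinking":     {"pitch_hz": 440,  "duration_s": 0.40},
    "Surprise":     {"pitch_hz": 1320, "duration_s": 0.10},
}

def sonify(detected_cues, play_tone):
    """Translate each detected cue into a short tone via a caller-supplied player."""
    for cue in detected_cues:
        params = CUE_TO_EARCON.get(cue)
        if params:
            play_tone(**params)
```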
Sensing Knocking on Everyday Objects
Knocking is a natural way of interacting with everyday objects. We developed an algorithm that identifies the sound signature of knocking on an object. Users can associate different actions with different objects; for example, knocking on a coffee machine could place an order for coffee beans. Our user studies with 12 participants showed that our smartwatch-based implementation could accurately classify eight different objects using a user-independent classifier.
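As a rough sketch of how such a classifier could be built (not our exact algorithm), one common approach is MFCC features with an SVM trained on knocks from many users:

```python
# Sketch of a knock-sound classifier: MFCC features plus an SVM, trained on
# recordings from multiple users so the model is user-independent. This
# illustrates the general approach, not the published algorithm.
import numpy as np
import librosa
from sklearn.svm import SVC

def knock_features(signal: np.ndarray, sr: int) -> np.ndarray:
    """Summarize a short knock recording as its mean MFCC vector."""
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_knock_classifier(recordings, object_labels, sr=16000):
    """Fit an SVM that maps knock recordings to the object that was knocked on."""
    X = np.stack([knock_features(r, sr) for r in recordings])
    clf = SVC(kernel="rbf")
    clf.fit(X, object_labels)
    return clf
```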
Tickers and Talker: Making 3D Printed Models Interactive
We present a labeling toolkit that enables users to add audio labels to 3D printed models and access them. The toolkit includes Tickers, small 3D printed percussion instruments attached to the models, and Talker, a signal processing application that detects and classifies Ticker sounds. We evaluated Tickers and Talker with three models in a study with nine blind participants. Our toolkit achieved an accuracy of 93% across all participants and models.
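A simplified sketch of the detect-then-classify structure (not the actual Talker pipeline) might look like this, assuming per-Ticker spectral templates computed ahead of time:

```python
# Simplified detect-then-classify sketch: find a percussive strike in the
# audio stream, then match its magnitude spectrum against precomputed
# per-Ticker templates. Thresholds and frame sizes are illustrative.
import numpy as np

def detect_strike(signal: np.ndarray, frame: int = 512, threshold: float = 0.2):
    """Return the start index of the first frame whose energy exceeds the threshold."""
    energies = [np.mean(signal[i:i + frame] ** 2)
                for i in range(0, len(signal) - frame, frame)]
    if not energies:
        return None
    peak = max(energies) or 1.0
    for i, e in enumerate(energies):
        if e / peak >= threshold:
            return i * frame
    return None

def classify_strike(signal, start, templates, frame=2048):
    """Match the strike's normalized spectrum to the closest Ticker template."""
    spectrum = np.abs(np.fft.rfft(signal[start:start + frame], n=frame))
    spectrum /= np.linalg.norm(spectrum) or 1.0
    return min(templates, key=lambda name: np.linalg.norm(spectrum - templates[name]))
```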