Goal: "Better word processing on the go experience”
Background: HCI research conducted in affiliation with the NUS-HCI Lab.
I worked as a research intern on this project, which developed an effective way to do word processing eyes-free and hands-free. The project ran from initial literature review to final paper writing, and the resulting paper is currently under review at CHI 2018.
Problem: Word processing while walking is hard.
Paul, a graduate student, wants to revise his meeting draft while walking to his supervisor's office. He finds it difficult to type on his phone while walking, and wonders if there is a way to do word processing without his eyes and hands being occupied by the phone, so he can watch where he is going.
Field Studies
TECHNOLOGY PROBE: Patterns of Speech Input for Eyes-Free Word Processing
We conducted an interview study in a Wizard of Oz style to explore user scenarios for eyes-free word processing on mobile devices. This was followed by a technology probe deployment to elicit and understand user requirements in such scenarios.
STUDY: Limitations of Existing Dictation Applications for Eyes-Free Scenarios
After eliciting user requirements, we evaluated how well existing applications could meet them, using "Dragon Professional Individual for Mac 6" as the point of comparison.
Implementation
Four Core Functions
1. Speech-to-Text and Text-to-Speech Integration.
2. Sentence-Level Operation.
3. Concise Command Structure & System Feedback.
4. Track Changes.
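The four functions above can be pictured as a single interaction loop: spoken input is parsed against a concise command vocabulary, applied at the sentence level, logged for track changes, and confirmed through spoken feedback. The sketch below is a minimal, hypothetical illustration of that loop; the class name, command set, and the print/typed-command stand-ins for text-to-speech and speech-to-text are all assumptions for illustration, not the actual EDITalk implementation.

```python
# Hypothetical sketch of an eyes-free editing loop in the spirit of EDITalk.
# All names and the command vocabulary are illustrative assumptions.

class EyesFreeEditor:
    """Holds the document as a list of sentences plus a cursor and a change log."""

    def __init__(self, text):
        # Naive sentence segmentation; a real system would use a proper tokenizer.
        self.sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
        self.cursor = 0    # index of the current sentence
        self.changes = []  # "track changes" log of (index, old, new)

    def speak(self, message):
        # Stand-in for text-to-speech feedback; prints instead of speaking.
        print(f"[TTS] {message}")

    def handle(self, command):
        # Concise command structure: one keyword plus an optional argument.
        verb, _, arg = command.partition(" ")
        if verb == "next":
            self.cursor = min(self.cursor + 1, len(self.sentences) - 1)
            self.speak(self.sentences[self.cursor])
        elif verb == "previous":
            self.cursor = max(self.cursor - 1, 0)
            self.speak(self.sentences[self.cursor])
        elif verb == "read":
            self.speak(self.sentences[self.cursor])
        elif verb == "replace":  # sentence-level operation
            old = self.sentences[self.cursor]
            self.sentences[self.cursor] = arg
            self.changes.append((self.cursor, old, arg))  # track changes
            self.speak(f"Replaced with: {arg}")
        elif verb == "delete":
            old = self.sentences.pop(self.cursor)
            self.changes.append((self.cursor, old, ""))
            self.cursor = max(0, min(self.cursor, len(self.sentences) - 1))
            self.speak("Sentence deleted.")
        else:
            self.speak(f"Unknown command: {verb}")


if __name__ == "__main__":
    editor = EyesFreeEditor("This is the draft. It needs work. The end.")
    # Stand-in for speech-to-text: commands arrive as typed strings here.
    for cmd in ["read", "next", "replace It reads much better now.", "read"]:
        editor.handle(cmd)
```

Because every operation both mutates the sentence list and appends to the change log, the same loop supports reviewing edits afterwards, which is the role track changes plays when there is no screen to glance at.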
Demo of Use
Research Contribution
EDITalk is designed to bridge the gap between users' need for eyes-free interaction with text in mobile scenarios and the limitations of existing dictation systems, which cannot perform those operations without visual feedback. Results show that EDITalk enables users to achieve eyes-free word processing with high accuracy and precision. The prototype garnered positive feedback from all of our participants and demonstrated promising potential for further research to improve the system's usability and intuitiveness. We hope that our work can lead to new ways of speech-based interaction with text, another step toward an exciting future in the conversational paradigm of user interaction.