Writing computer code by voice

Talk by Michael Arntzenius (he/him)

Saturday, 11:10–11:40 AM, Stage C

What does a computer programmer do if they can’t type code? Speak it, of course! Like many programmers and heavy computer users, I suffer from chronic hand pain aggravated by keyboard use, often called repetitive strain injury (RSI). After years of mild symptoms, in 2019 my hand pain worsened from irritating to unignorable. Eventually I got desperate enough to try learning to code by voice. To my surprise, it worked.

Speech recognition technology has advanced dramatically in the past decade, but most people’s ideas about voice-driven UI are based on highly limited voice assistants like Siri or Alexa, powerful but imprecise tools like LLMs, or special-purpose dictation tools. What does a voice interface for precise control by expert users look like? A small but growing community of programmers, many affected by RSI, has been developing tools that explore this question. I use one such tool, called Talon, to control my computer, write and edit code, and even write my dissertation.

In this talk, I will live-demo how I edit code by voice, and tackle the following questions:

1. What is voice coding like? What unique challenges does it present?
2. How do voice control systems like Talon work under the hood?
3. Could voice coding ever be *better* than typing?
4. What can we learn from all this about designing voice interfaces for expert users?
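For readers curious what voice commands in Talon actually look like, here is a rough sketch of a `.talon` command file. Each rule maps a spoken phrase (left of the colon) to actions; `key()` and `insert()` are built-in Talon actions, but the specific phrases below are illustrative inventions, not part of any standard setup:

```talon
# Rules apply everywhere since no context header precedes the "-".
-
# Saying "say hello" types the given text.
say hello: insert("Hello, world!")

# Saying "press enter" taps the Enter key.
press enter: key(enter)

# <number> captures a spoken number; "{number}" interpolates its value.
# Saying "go to line 42" opens the go-to-line prompt and jumps there.
go to line <number>:
    key(ctrl-g)
    insert("{number}")
    key(enter)
```

Real user configurations (for example, the community-maintained rule sets) are far larger, covering alphabets for spelling, editor navigation, and per-application commands.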
