DictationBridge: a very personal project

For me, DictationBridge is a very personal project. I have been using computers since the early '80s, and for that entire time I've had to use the keyboard not only to type, but also to control the computer's output. I am totally blind and rely on speech output from my computer to read what is on the screen. This means I am unable to use a mouse to point and click on objects or to do things such as highlight and edit text. Instead, I use a type of software called a screen reader.

After about 15 years of typing on various keyboards, my hands started to feel the stress. After long days of arrowing up and down through pages and doing all kinds of hand gymnastics on the keyboard, my hands were sore and swollen. In the early 2000s I desperately tried various dictation products, to no avail. I bought several copies of IBM ViaVoice and Dragon NaturallySpeaking, hoping that each new version would play better with my screen reader than the last one had.

Eventually, about 10 years ago, a product came out that allowed me to use Dragon NaturallySpeaking with JAWS for Windows, a very expensive screen reader. All of a sudden my world opened up: I was able to control my computer with my voice and dictate my documents. I am a technology specialist and have worked with people with a variety of disabilities throughout my career. I've always had to stay on the leading edge of technology to keep ahead of my clients. As cloud-based applications became the norm, the product I was using for dictation no longer worked for me. I found that NVDA, an open-source screen reader, worked more effectively with the applications I had to use. Unfortunately, I was no longer able to dictate with it. Since I had to get my work done and stay on the leading edge of technology, I gave up the ability to dictate.

A few months ago, a few of us started talking about how we could add dictation support to NVDA, and DictationBridge was born. Because we wanted this product to work with NVDA, a free screen reader, we thought it was critical that the product itself be free to everyone. So we turned to crowdfunding. I want to ask you to help us ensure that all blind users have the ability to dictate and control their computers by voice. In today's schools, offices, homes, and retirement communities, computers are the norm. Why shouldn't blind and visually impaired people have the same ability to use these computers in whatever way they want?

Let me describe a little of the difference DictationBridge will make. Currently, if you use a screen reader with either Microsoft speech recognition or Dragon NaturallySpeaking, the screen reader is unable to tell you what text the product is typing. This means that a blind person must manually go back through all the text that was created, review it for errors, and then fix them. This is problematic in many ways. First of all, you may be dictating precisely because you need to reduce the amount of typing you do due to hand pain, as I am. Also, both Microsoft speech recognition and Dragon NaturallySpeaking learn from the corrections you make to the recognized text. If you go back later and fix recognition problems manually, the speech recognition will not learn and may actually get worse at recognizing what you're saying.

One of the other critical features we will be adding to DictationBridge is the ability to know whether the microphone is on or off. A sighted person can see the lovely little microphone icon change position and color to indicate whether speech recognition is live. A screen reader user, on the other hand, has to leave the document they are working in and hunt for the icon with a succession of keystrokes, breaking their train of thought. Then, once they've confirmed whether the microphone is live, they have to find their place in the document again. DictationBridge will give them an easy way to determine whether the microphone is listening. This way, when someone is creating a document and the phone rings, they can answer it and be sure that their phone conversation doesn't get transcribed into their document.

These features are only the beginning of what we want DictationBridge to do. With your funding support we can allow a screen reader user to interact with the dictated text, and also give them the ability to issue some of their traditional keyboard commands by voice to control their screen reader. When DictationBridge is released, it will be free to everyone. By using NVDA, Microsoft speech recognition, and DictationBridge, the only cost to the blind user will be the price of their computer. This means that many more blind people will be able to turn to dictation than ever before. Thank you for your contributions to the DictationBridge project.