DictationBridge Update

In late June, the DictationBridge crowdfunding campaign came to a successful conclusion, reaching its full $20,000 goal with only 8 hours to spare. In the time since, the DB team has been working diligently to write and document the software, train the technical support team, and fix bugs as soon as they are reported by the people testing the software.

This piece describes the current state of DictationBridge: what we’ve accomplished thus far and what we still need to do.

NVDA and Windows Speech Recognition

As per the schedule we published when we launched the DB crowdfunding project, we did the work to get the combination of NVDA and Windows Speech Recognition (WSR) running first. We are happy to report that this task is feature complete, and we will gladly share the software with anyone who requests a copy. There may still be bugs, which we will fix if and when they are reported to us, but, as of this writing, NVDA with WSR is considered complete.

We will be starting the work to support WSR in Window-Eyes and JAWS relatively soon.

NVDA With Dragon Products

At this stage in our development, we are approaching feature completeness on the software bridging NVDA and the Dragon line of speech recognition products. Most of the Dragon UI has been scripted, echo back is working properly, and a number of other features are now accessible, but we probably have another week or two of effort ahead before we can call this component complete.

Dragon Pro Scripts

In addition to scripting the Dragon user interface, the DB team is currently in the process of creating scripts for Dragon Pro to permit users to issue screen reader commands by voice when using DictationBridge. We are working on a unified vocabulary so that the same commands will do the same things with all three of the screen readers we’re supporting, and progress on this task has been relatively swift. The DB team released the first set of Dragon Pro scripts to the beta team this past week, and we’re eagerly awaiting feedback.
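
To make the idea of a unified vocabulary a little more concrete, here is a minimal sketch in Python of one way such a mapping could be organized. The spoken phrases and keystrokes shown are illustrative placeholders only, not DictationBridge’s actual command set; the point is simply that a single spoken phrase is tied to the equivalent action in each supported screen reader.

# Hypothetical illustration of a unified voice-command vocabulary.
# Neither the phrases nor the keystrokes are DictationBridge's real
# command set; they only show the shape of the mapping.
UNIFIED_VOCABULARY = {
    "read current line": {
        "NVDA": "NVDA+upArrow",
        "JAWS": "Insert+upArrow",
        "Window-Eyes": "Control+Shift+R",  # placeholder keystroke
    },
    "read window title": {
        "NVDA": "NVDA+t",
        "JAWS": "Insert+t",
        "Window-Eyes": "Control+Shift+T",  # placeholder keystroke
    },
}

def keystroke_for(phrase, screen_reader):
    # Look up the keystroke to send for a recognized phrase, so the same
    # spoken command behaves the same way regardless of screen reader.
    return UNIFIED_VOCABULARY[phrase][screen_reader]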

Window-Eyes, ZoomText Fusion and JAWS

As the NVDA scripting effort nears completion, we will soon start the scripting to support the other screen readers. We elected to do NVDA first so that we would have a functioning prototype on which to model the experience users will enjoy with DB and the other screen access utilities. Once the NVDA version of DB is complete, we expect the scripts for Window-Eyes, ZTF and JAWS to come along rapidly.

One Major Feature

We do have one feature on which we have not yet started work: it will either play a sound or provide spoken feedback when the user issues a Dragon or WSR command such as “scratch that.” The current beta provides no feedback when a user issues such a command, and the software will be nicer to use once this feature is completed and included in DictationBridge.
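
To give a rough sense of what this feature might look like, here is a minimal sketch in the style of an NVDA add-on, written in Python. The on_command_recognized hook is hypothetical and simply stands in for whatever integration point DictationBridge ultimately uses to detect that a Dragon or WSR command was spoken; the tones and ui modules are part of NVDA’s scripting environment.

# Minimal sketch only; on_command_recognized is a hypothetical hook,
# not part of DictationBridge or NVDA. It stands in for the point at
# which a spoken Dragon/WSR command is detected.
import tones  # NVDA module for generating beeps
import ui     # NVDA module for speaking short messages

def on_command_recognized(command_text):
    # Play a short, high-pitched beep so the user knows the utterance
    # (for example "scratch that") was handled as a command, not dictation.
    tones.beep(880, 60)
    # Alternatively, echo the command itself through the screen reader.
    ui.message("Command: " + command_text)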

Errata

Prior to writing this update, we took a look at the last few articles published on this site. Unfortunately, we discovered that at least one of the older articles uses the term “ZoomText” to describe one of our target access technologies. In fact, we are not supporting ZoomText itself; rather, we will be supporting ZoomText Fusion, a package that combines Window-Eyes with the popular ZoomText magnifier. We are sorry for any confusion this may have caused our readers.

Conclusions

The DictationBridge team is working very hard to deliver this important piece of software to the public. We are always looking for more help testing the software, so if you would like to be added to our beta team, please send us an email at this address.