Dictation Bridge Public Beta Now Available

The DictationBridge team is proud to announce the release of our first public beta. Over the past several months, our team of excellent engineers, private beta testers, and technical writers (the people responsible for creating the documentation that guides you) has been working hard to bring you the world’s first NVDA add-on to give screen reader users access to leading-edge speech recognition software.

Our team is still working hard on JAWS and Window-Eyes support and will be rolling that out shortly. In the meantime, please feel free to download the beta and try it with your favorite speech recognition package. This beta works with Windows Speech Recognition and Dragon NaturallySpeaking. Please be sure to let us know what you think, and stay tuned for our next public beta and the final product.

Please see below for your own copy of the beta and our product release notes. Thank you once again for your patience and for your support of our crowdfunding campaign and the development of this free software.

Release notes for dictationBridge beta 1

Currently working features

  • Echo back of dictated text in Dragon and Windows Speech Recognition (WSR).
  • Speech only support of the WSR correction box.
  • Support to control NVDA from Dragon and WSR. At this time, only Dragon commands have been written.
  • A verbal notification of the microphone status when using Dragon. WSR has this feature built in, so no additional support needs to be created.
  • Command NVDA by voice from Dragon.

Feature Descriptions

Echo back of dictated text in Dragon and Windows Speech Recognition (WSR)

When you dictate into any application with either Dragon or WSR running, dictationBridge will speak the text recognized by either engine if the NVDA screen reader is active. No additional settings are required; the functionality works out of the box. If you want to disable echo back, disable the dictationBridge add-on.

Speech only support of the WSR correction box

When you need to correct misrecognized text in WSR, do the following.
Position the cursor on the word you want to correct. Once your cursor is positioned, say “spell”. A floating window will come up on the screen. Although this window is invisible to NVDA and other screen readers on their own, dictationBridge enables NVDA to read out the choices. You then say “choose” followed by the number of your choice; for example, to select the fourth choice, say “choose 4”.

Support to control NVDA from Dragon and WSR

It is possible to execute any script in NVDA. This is done by sending the script name to dictationBridge, which in turn sends it to NVDA. So, you can say “speak focus” to have NVDA say the contents of the keyboard focus. Over 70 commands have been added for Dragon; commands for WSR have not yet been written. In the background, these commands execute NVDA scripts that have keyboard shortcuts assigned to them. These scripts are present in NVDA 2016.3, and the script names should be the same in earlier versions of NVDA. To get this feature to work, you need to add the dictationBridge folder to the system path. Instructions for doing this are included in the installation section.
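Conceptually, the flow above maps a spoken phrase to the name of an NVDA script and forwards that name to NVDA. The following is a minimal illustrative sketch of that idea; the phrases and script names in it are assumptions chosen for the example, not dictationBridge’s actual command tables:

```python
# Minimal sketch of a voice-command bridge: map a spoken phrase to the
# name of an NVDA script. All phrases and script names below are
# hypothetical examples, not dictationBridge's real tables.
SPOKEN_TO_SCRIPT = {
    "speak focus": "reportCurrentFocus",   # hypothetical mapping
    "read title": "title",                 # hypothetical mapping
    "say line": "review_currentLine",      # hypothetical mapping
}

def resolve_command(phrase):
    """Return the NVDA script name for a spoken phrase, or None."""
    return SPOKEN_TO_SCRIPT.get(phrase.strip().lower())

if __name__ == "__main__":
    # The bridge would forward the resolved name to NVDA for execution.
    print(resolve_command("Speak Focus"))   # a known phrase resolves
    print(resolve_command("open menu"))     # an unknown phrase yields None
```

In the real add-on, a resolved name corresponds to an NVDA script that already has a keyboard shortcut assigned, which is why the feature inherits NVDA’s existing behavior.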

A verbal notification of the microphone status when using Dragon

It is important to know whether the microphone is active when using speech recognition; this avoids transcribing unwanted text into your document. DictationBridge will now speak the status of the microphone as it changes when using Dragon. WSR already indicates this status, with distinct sounds mapped by default to microphone status changes.

Command NVDA by voice from Dragon

It is possible to control NVDA by speaking voice commands from Dragon. The commands should have been imported into your user profile; if this has not happened, please see the section on importing them. These commands map to the majority of NVDA features, giving you complete hands-free control of the program. You can then use native Dragon commands as well to control other programs such as the Windows desktop, Notepad, and Microsoft Word.

Installing dictationBridge

This version of dictationBridge, like the ones before it, is an NVDA add-on. To install it, do the following.

  • Launch NVDA. Ensure that you are running an installed copy of NVDA; this add-on has not been tested in a portable version of NVDA.
  • Download the dictationBridge add-on.
  • Once the add-on has been downloaded, highlight the file and press enter on it.
  • Follow the prompts and allow NVDA to restart.
    The dictationBridge add-on is now loaded.

Importing commands automatically

Once dictationBridge is installed, you will have an item in the Tools menu called “Add Commands.” Activate this menu item and follow the prompts. It is crucial that you have already exported the commands you want to import, and they must be exported as XML files for this feature to work. If you have exported them as .dat files, use the manual import feature in the Dragon command browser instead; manual import also works for XML files.

Importing commands manually

  • Launch Dragon and ensure that your user profile is loaded.
  • Go to the Dragon bar and then to the Tools menu.
  • Launch the command browser.
  • Hit alt+m to get to the Mode menu and arrow down to “Manage” and activate that option.
  • Hit ctrl+i, set the file type to XML and navigate to where you have saved the dictationBridgeCommands.xml file and highlight it.
  • Tab to the button labeled “Open” and activate it.
  • In Dragon version 15, you will be asked if you want to validate the commands. Agree to do this to ensure that the XML file is valid.
  • Tab through the resulting dialogue until you reach a button called “Import” and activate it.
    The commands should be imported.


After this procedure, it is crucial to have the following dynamic link libraries (DLLs) in the system path.
  • DictationBridgeClient32.dll
  • DictationBridgeClient64.dll
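As a quick sanity check, a short script can confirm that both DLLs are reachable from a directory on the system path. This is an illustrative helper written for this article, not part of dictationBridge itself:

```python
import os

def find_on_path(filename, path_env=None):
    """Return the first PATH directory containing filename, or None."""
    if path_env is None:
        path_env = os.environ.get("PATH", "")
    for directory in path_env.split(os.pathsep):
        if directory and os.path.isfile(os.path.join(directory, filename)):
            return directory
    return None

# Check both DLLs that dictationBridge expects on the system path.
for dll in ("DictationBridgeClient32.dll", "DictationBridgeClient64.dll"):
    location = find_on_path(dll)
    print(dll, "->", location if location else "NOT on PATH")
```

If either DLL reports “NOT on PATH”, add the dictationBridge folder to the system path as described in the installation section and try again.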

Making Dragon easier to use

There are a few options you need to configure in Dragon so that it works optimally when using a screen reader. You should make these changes immediately after creating your user profile.
Go to the Tools menu in the Dragon bar, then to the Options menu item, and activate it. This is a large multi-page dialogue, and we need to change a few settings here. In Dragon 15, you will be unable to tab to the names of the property sheets in the dialogue. Use ctrl+tab to move forward to the next property sheet and ctrl+shift+tab to move backward to the previous one.

On the Corrections tab, enable the options “Correct commands bring up Spelling Window” and “Spell commands bring up Spelling Window”.

Navigate to the Appearance tab and set the Dragon bar to tray-only mode.

Navigate to the Miscellaneous tab and ensure that the option titled “Use menus that are compatible with screen readers” is checked.

Press “OK” to close the dialogue.

Known issues

The Dragon vocabulary editor is not usable as the list of words is not read by NVDA. We are waiting for the NVDA team to fix certain items before we can support this feature.

The training dialogue in Dragon is not fully supported. We are working on supporting this dialogue and hope to bring out this feature soon.

The word “New paragraph” may be heard in various situations. Please report when this occurs as we are actively tracking this issue.

Certain features, such as invoking the WSR corrections dialogue, may not work on versions of Windows set to a language other than English. This is because we have had to use English-specific names to interact with portions of Dragon. We are working on enabling translations and hope to allow translators for your language to fix this problem soon.

Submitting bug reports

You should submit all bugs using the dictationBridge issue tracker. DictationBridge is split into three components:
the NVDA add-on, the JAWS scripts, and the dictationBridge core.

In order to file a bug report, you will first need a GitHub account. When you navigate to one of the issue trackers above, GitHub will ask you to either sign in or create an account. If you have not done so already, create a GitHub account. Once this is done, you can sign in and file your bug report on the appropriate tracker.

One question that may arise is how to know where a bug belongs.
There is no accurate way to determine this without testing and reading the code, but here are a few general rules.
1. If you are using JAWS and have a problem, file bugs in the JAWS section.
2. If you are using NVDA, file bugs in the NVDA section.
3. If you are a developer and fancy cracking open a debugger, join the party and file bugs in the core.

Getting support and meeting other users

Please join the discussion list for dictationBridge, called dictation-discuss, and talk to us.
list address:
subscription address:
Send a message with the word “subscribe” without the quotes in the subject line to the subscription address.

DictationBridge Update

In late June, the DictationBridge crowdfunding campaign came to a successful conclusion, reaching its entire $20,000 goal with only 8 hours to spare. In the time since, the DB team has been working diligently to write and document the software, train the technical support team, and fix bugs as soon as they are reported by the people testing the software.

This piece intends to describe the current state of DictationBridge, what we’ve accomplished thus far and what we still need to do.

NVDA and Windows Speech Recognition

As per the schedule we published when we launched the DB crowdfunding project, we did the work to get the combination of NVDA with Windows Speech Recognition (WSR) running first. We are happy to report that this task is feature complete, and we will gladly share the software with anyone who requests a copy. There may be some bugs, which we will fix if and when they are reported to us, but, as of this writing, NVDA with WSR is considered complete.

We will be starting the work to support WSR in Window-Eyes and JAWS relatively soon.

NVDA With Dragon Products

At this stage in our development, we are approaching feature complete on the software bridging NVDA and the Dragon line of speech recognition products. Most of the Dragon UI has been scripted, echo back is working properly, and a number of other features are now accessible, but we probably have another week or two of effort before we can call this component complete.

Dragon Pro Scripts

In addition to scripting the Dragon user interface, the DB team is currently in the process of creating scripts for Dragon Pro to permit users to issue screen reader commands by voice when using DictationBridge. We are working on a unified vocabulary, so the same commands will do the same things with all three of the screen readers we’re supporting, and progress on this task has been relatively swift. The DB team released the first set of Dragon Pro scripts to the beta team this past week, and we’re eagerly awaiting feedback.

Window-Eyes, ZoomText Fusion and JAWS

As the NVDA scripting effort nears completion, we will soon start scripting support for the other screen readers. We elected to do NVDA first so that we would have a functioning prototype on which to model the experience users will enjoy with DB under the other screen access utilities. Once the NVDA version of DB is completed, we expect the scripts for Window-Eyes, ZTF, and JAWS to come along rapidly.

One Major Feature

We do have one feature on which we’ve not started working yet. This will either play a sound or provide some speech feedback when the user issues a Dragon or WSR command like “scratch that.” The current beta provides no feedback when a user issues such a command and the software will be nicer to use when this feature is completed and included in DictationBridge.


Prior to writing this update, we took a look at the last few articles published on this site. Unfortunately, we discovered that at least one of the older articles uses the term “ZoomText” to describe one of our target access technologies. In fact, we are not supporting ZoomText itself but, rather, we will be supporting ZoomText Fusion, a package that combines Window-Eyes and the popular ZoomText magnification package. We are sorry for any confusion this may have caused the readers.


The DictationBridge team is working very hard to get this important piece of software delivered to the public. We are always looking for more help testing the software so, if you would like to be added to our beta team, please send us an email at this address.

DictationBridge: Our Gratitude


Last week, the DictationBridge Indiegogo campaign reached its entire $20,000 goal with about 8 hours left in a 60-day fundraising effort.

The DictationBridge team had 12 “official” members but the true story of our success is about the 130 separate contributors and the dozens upon dozens of individuals who helped tell the DictationBridge story on Twitter, Facebook, on blogs and on podcasts. Our entire team is incredibly grateful to all of those who helped us make history with this campaign.

The Establishment Joins Us

The single largest contribution to the DictationBridge campaign came to us from The Lighthouse For The Blind and Visually Impaired of San Francisco. As far as we can tell, this is the first time that a well-established blindness-oriented organization has participated in an effort like this one. Our entire team thanks the San Francisco Lighthouse, Brian Bashin, Josh Miele, Jennison Asuncion, Scott Blanks, and others at the Lighthouse for the Blind and Visually Impaired for their hugely helpful contributions to this campaign.

The DictationBridge publicity effort started at the Lighthouse in San Francisco when Josh invited Lucy Greco, a founding member of the DictationBridge team, to do a presentation about it at their Lighthouse Labs meeting a few weeks before the campaign went live. Throughout our 60 days of fundraising, we enjoyed receiving advice and encouragement from our friends at the SF Lighthouse and, in a big but unofficial manner, we have felt that the San Francisco Lighthouse has been part of the team since the beginning.

Help From Many Individuals

A crowdfunding project is largely an exercise in storytelling and getting the message to as many people as possible. If you look at the @DictationBridge Twitter timeline, you will undoubtedly see the names of a lot of individuals who took it upon themselves to help us promote our Indiegogo campaign.

We can’t list everyone by name but we are grateful to everyone who helped with as little as a single tweet or sent out DictationBridge related information frequently during the fundraising period. We’re grateful to all contributors, whether they donated five bucks or five thousand and we’re all feeling warmed by the outpouring of affection this campaign has received from the community.

We would, however, like to specifically thank Sandra Pilz and Paul Warner for giving us valuable feedback on our messaging and social media tactics. Sandra and Paul took it upon themselves to help coach us in this area and both were critical to our success. We’d also like to thank Jessie Lorenz, executive director of Independent Living Resource Center San Francisco, for her timely input on marketing and storytelling.

We express our deep appreciation to all of those who invited DictationBridge members to appear on their podcasts, to present at their events and to mention the project on their blogs. Without all of this public exposure, we’d have never made the goal and, as a result, those blind people who need dictation support but choose to (or must) use a screen reader other than JAWS or simply cannot afford the expensive current solutions would not be getting a free program that they desperately need.

The History Of The DictationBridge Project

DictationBridge, before it even had a name, began when Pranav Lal, Lucy Greco and Amanda Rush started working together to discuss making a free plug-in for NVDA to support dictation features. For Lucy and Pranav, dictation is not an option but, rather, an essential aspect of their computing experience.

Due to having multiple disabilities, many blind people (including some on the DictationBridge team) cannot type with any sense of efficiency or without experiencing a great deal of pain. The primary goal of the DB effort from day one has been to provide an affordable solution to as many of these people as possible. Thus, the war veteran who returns home with neither the use of hands nor eyes will have a computer they can use affordably, a blind person with bad repetitive strain injuries (RSI) now has a free solution and those blind people without a secondary disability have dictation at no extra cost to themselves.

In July, Chris Hofstader joined the team to take over the executive role on the effort. Together, Pranav, Chris, Lucy and Amanda would build out the amazing team of twelve that’s bringing DB to the world.

In August and September of 2015, Pranav and Chris tried to negotiate a licensing deal with a group in Germany to use their code as the core of DictationBridge. The German group wished to maintain proprietary source code which was a deal breaker for DB, as we were committed from the start to the values of an open source project. Chris then called Mike Calvo and they negotiated an agreement that permitted Serotek to license the dictation code from its SystemAccess screen reader in a manner compatible with our philosophy that a blind or otherwise disabled person should never be forced to pay a penny more than anyone else to use the same technology. The agreement with Serotek made history as it’s the first time a vendor of proprietary closed source assistive technology software has agreed to open up its source in exchange for a very modest licensing fee.

The next bit of history we made happened when the Lighthouse for the Blind and Visually Impaired of San Francisco made an official endorsement of and large monetary contribution to the DictationBridge campaign. Quoting Brian Bashin, CEO of the SF Lighthouse, “The Lighthouse believes it has a moral obligation to support the access needs of blind and visually-impaired people wherever they live.” During the discussions between the DB team and our friends at SF Lighthouse, one of the major goals we set was to ensure that a blind person in an emerging nation could buy a cheap laptop at a flea market and have full access to dictation features built into Windows, a goal we’ve never heard expressed by a large organization in the blindness space before. By collaborating with SF Lighthouse, the DictationBridge team built what we hope to be a long standing bridge between those of us in the free software community and at least one well established advocacy organization.

The nature of the DictationBridge team is also a first of its kind in the blindness business. The team is made up of people from two businesses (3 Mouse Technology and Serotek), a number of independent contributors and a coalition of activists in the blindness and technology world. The team has a number of members for whom dictation is a requirement and not a feature and we’ve some of the strongest engineering and management talent available in the world of accessibility. Assembling an ad hoc team like this one on which everyone works toward a common goal is unprecedented in this field.

In Conclusion

While the DictationBridge fundraising campaign is over, the team is working very hard to write, test, document and publish the software. Those on the private beta team will be seeing the first bits this week and the process of fixing bugs will begin. The DictationBridge project web site will be including a “Donate” button so those who had wanted to help the project financially but couldn’t do so before the campaign ended will have an opportunity to contribute to a fund that will help maintain and improve DB into the future.

Finally, as we wrote above, all of us on the DictationBridge team are sincerely and deeply grateful to the hundreds of people who helped us make this dream into a reality. This community is terrific and we all bow to you in thanks.

DictationBridge: opening the market for speech recognition

There’s been quite a bit of discussion in the world of assistive technology about DictationBridge. As we wind up our fundraising efforts for the project we’d like to take a moment for some clarification. The team and the DictationBridge product has ignited a spark in the assistive technology arena. The team and the product have also taken some criticism.
A few people have taken the view that DictationBridge is about negating other products because the project has mentioned some downsides in other products. DictationBridge is not about bashing Freedom Scientific or any other assistive technology. There are commercial decisions screen reader manufacturers have made which have shaped the adaptive technology ecosystem. And, as end users, we have to live with the consequences of these decisions.
DictationBridge is focused on what it brings to the table and on creating an inclusive environment. Our initial thought was to focus on NVDA. However, DictationBridge welcomes everyone: we are including Window-Eyes, ZoomText, and JAWS users. DictationBridge brings speech recognition support to these environments at no cost for end users. This is something new in these ecosystems.
For that matter, DictationBridge is future-proof. If another screen reader comes along, DictationBridge is architected such that it can be connected to that screen reader via standard Windows data-sharing mechanisms such as COM.
DictationBridge is broadening the user base for speech recognition. The first way it is doing this is by eliminating the price barrier. Instead of having to spend over $1500 on a solution, assuming you buy it from scratch, you spend $0 and have a working solution using Windows Speech Recognition (WSR) and NVDA.
The second barrier DictationBridge eliminates is one of screen reader preference. You may be a Window-Eyes user, a JAWS user, an NVDA user, or a ZoomText user. You may want to try speech recognition with your chosen assistive technology. DictationBridge will give you the ability to do just that. You may decide to continue using the solution or decide to switch. Either way, there are no fees involved, no need to worry about renewing software maintenance agreements, and no need to write funding applications to your state agency.

Let us take some examples.
James is a blind entrepreneur who injures his hand and is unable to type. He knows he has to continue working. He has heard of speech recognition and decides to try it. He has a little bit of vision, so he uses ZoomText for magnification and speech. In the current scenario, he does not have a solution. DictationBridge is going to be a generic solution which will talk to ZoomText and WSR or Dragon. Once James recovers, he may continue to use speech recognition for productivity, or he can resume a keyboard-only way of working.
Take another example: Ram, who has cerebral palsy (CP) and is blind, lives in a developing country. He goes to a job fair and lands his first job at a hotel doing administrative work. Ram knows he has to type quickly, but because of his CP he’s having trouble.
Today, he would have to ask his employer to buy additional software and spend over 100,000 rupees to get him up and running and working productively. With DictationBridge, all he has to do is ask the IT team to install NVDA and DictationBridge.
Ram is a good worker and rises quickly to become general manager. He now has to use the hotel ERP system, which has an add-on that works with JAWS. DictationBridge enabled Ram to begin his career, and as a member of the management team he was able to smoothly transition to using DictationBridge with JAWS and Dragon.
DictationBridge was just the tool Ram needed for a chance to enter the workforce.

When Upgrading Is No Longer An Option

I’m Patrick Kelly, currently living in Shawnee, Kansas. I’m a huge technology nerd and seem to be the go-to guy for family tech support because of this. I am totally blind and have cerebral palsy. I also had a stroke at birth, as I was born at 26 weeks, which has forced me to type one-handed. I could type 35 WPM the last time I tested myself, but I tend to slow down a lot the longer I type.

I don’t currently use any adaptive keyboards or other assistive hardware to access the computer. I shied away from these types of devices since they have to be configured each time I want to use a different system.
I did, however, use Dragon NaturallySpeaking with JAWS for Windows 14 and J-Say 11.2, but when I upgraded to Windows 10, I was forced to go back to typing, since I didn’t have the money to pay for upgrades to all three pieces of software to keep dictation working. I have found that Dragon meets my needs much better than Windows Speech Recognition.
Our state agency for the blind has no money, so I knew that asking for Windows 10 upgrades to Dragon, JAWS, and J-Say wasn’t going to happen. When I saw Chris’s predictions blog that hinted at DictationBridge, I almost screamed with excitement.

With the project funding so far, I’ll be able to finally use Dragon with NVDA, which is my primary screen reader. I hope you’ll help by donating to the campaign, though, so that people in a similar situation to my own will be able to dictate, and also be able to use the screen reader they prefer as well.

San Francisco Lighthouse Endorses, Donates To DictationBridge

The Lighthouse for the Blind and Visually Impaired of San Francisco has announced its endorsement of, and large monetary donation to, the DictationBridge project, an ad hoc group of individuals assembled to develop a piece of software that blind and otherwise disabled people can use to better employ dictation software like Windows Speech Recognition (WSR) and the Dragon products from Nuance Communications.

“The overwhelming majority of blind people worldwide cannot afford expensive and unstable solutions when they need to use dictation and a screen reader,” says Brian Bashin, CEO of the San Francisco based Lighthouse for the Blind and Visually Impaired. He continues, “The Lighthouse believes it has a moral obligation to support the access needs of blind and visually-impaired people wherever they live. We applaud the creativity of the DictationBridge team to address this need and are happy to be part of their success.”

DictationBridge is raising $20,000 to develop software that permits blind people who want to use speech recognition technologies the ability to do so effectively, efficiently and in an incredibly cost effective manner. DictationBridge, if used with the NVDA screen reader and the built-in Windows Speech Recognition, will provide a solution that comes at no cost to the end users who will enjoy it in the future. In brief, DictationBridge sits between the speech recognition technologies from either Microsoft or Nuance and communicates with the user’s screen reader to permit seamless use of both technologies. To learn much more about the DictationBridge project, please visit the project web site.

Dictation: A Requirement For Many

Most computer users, especially when generating a lot of text, do so using a keyboard. For some people, including many blind people, an additional disability can make using a keyboard tremendously painful and, in some cases, entirely impossible. DictationBridge will provide blind users of the three most popular Windows screen readers (NVDA, Window-Eyes, and JAWS) with the ability to perform nearly every task they care to accomplish using voice commands and dictation. Thus, the veteran who came home from the battlefield blind and without hands will be able to use his computer effectively; the blind software engineer suffering from repetitive stress injuries (RSI) will be able to continue doing her job without experiencing terrible pain; and the blind person with cerebral palsy will be able to use his computer to communicate with his online friends around the world. Because DictationBridge uses a free software model, it will be available at no cost to end users forever, so a blind person can buy a $100 laptop in a flea market in an emerging nation and, with NVDA and DictationBridge, use speech recognition at a cost they can easily afford.

DictationBridge and The Lighthouse for the Blind and Visually-Impaired Relationship

A few months before the DictationBridge team launched its crowdfunding campaign, Lucy Greco, DB project spokesperson, met with Josh Miele, who invited her to present on the project at Lighthouse Labs, a tech think-tank operated by the Lighthouse for the Blind since 2011. The Lighthouse Labs audience was receptive and helped the team get the project launched. Thus, the Lighthouse for the Blind and Visually-Impaired of San Francisco has been involved in helping the DB team recruit its membership and in promoting the effort since before the campaign even started. In fact, two important members of the DictationBridge team, Lucy herself and Erin Lauridsen, have been long-time members of Lighthouse Labs and are frequent attendees at the events it puts on.

The Lighthouse for the Blind and Visually-Impaired in San Francisco is the epicenter of the accessibility community in the Bay Area. “We on the DictationBridge team are proud to have the Lighthouse for the Blind and Visually-Impaired on our team,” says Lucy Greco. “We hope this is the first in what will become a series of projects like this moving into the future.”

A Little Disambiguation

The word “lighthouse” is used in the names of a large number of organizations serving people with vision impairment. While this is not obvious to the casual observer, no two of the organizations using “lighthouse” in their name are actually related in any way. Thus, this endorsement comes from The Lighthouse for the Blind and Visually-Impaired of San Francisco and may not reflect the opinions of other organizations serving the blind with “lighthouse” in their names.

DictationBridge: A Graduated Schedule


The DictationBridge crowdfunding campaign was launched in late April and, as of this writing, has raised approximately 26% of its $20,000 goal. The DictationBridge campaign uses the Indiegogo “flexible funding” model, so the team will receive the dollars contributed to the effort even if we do not make our entire target.

The DictationBridge team has received a number of questions from individuals who want to know what we will do if we do not make the entire $20,000 goal.

To answer that question, the sections below describe what we will deliver at each funding level. Note that each level is cumulative, so everything included at a lower level of funding will also be delivered at the levels that require more financing.

$7500: The Basics

The DictationBridge team met and added up the costs to determine our minimum level to be able to produce a useful piece of software and we found that for $7500, we could develop the following:

  • Support for NVDA with Windows Speech Recognition (WSR).
  • An easy to translate package that others can bring to non-English languages.
  • Basic documentation describing how to install and use DictationBridge with WSR.

When we reach the $7500 milestone, the engineers will commence work on the real DictationBridge software (as opposed to the demo) that will be released to the public. Those who joined the beta team by donating $100 or more will soon afterward start seeing incremental builds of what will become the released version of the software.

At this level, the single most challenging task is replacing the proprietary API hooking library used in the DB demo with the free and open source one used by NVDA itself.

$10,000: Dragon Comes To NVDA

At the $10,000 contribution level we will, in addition to the tasks at the previous levels:

  • Add support for Dragon speech recognition from Nuance Communications to the main DB DLL.
  • Write NVDA scripts to support the Dragon user interface elements.
  • Expand the documentation to include instructions for using DictationBridge with the Dragon products.

This level has two relatively complicated tasks. Unlike WSR, the Dragon products use a variety of different APIs to insert text into a document. While the DictationBridge demo already shows off Dragon working in edit and rich edit controls, we need to find which other APIs it uses for text in MS Word, in web controls, and in other areas as well. The APIs used by Dragon aren’t documented publicly, so the DLL-related tasks will require Matt to poke around under the hood in Microsoft Windows to figure this out. The other time-consuming task is writing the scripts to support the Dragon interface, including the screens used to train the Dragon software to better understand your speech patterns.

$12,500: Control NVDA with Speech Commands

At the $12,500 level, in addition to everything described above, we will:

  • Write Dragon Professional scripts to provide a way for DictationBridge users to issue spoken commands to operate their screen reader. We intend to include as many different NVDA commands as possible at this level.
  • Ensure that the Dragon Pro scripts are easy to translate into non-English languages.
  • Write the documentation detailing how a user can employ these commands when using DictationBridge.

At this level, the technical challenges aren’t too great, as Dragon Pro scripts are fairly straightforward to write and provide a number of ways to communicate with the main DB DLL. These tasks are, however, time consuming, and they present a difficult set of test cases for determining whether a bug is in the DB DLL, the Dragon Pro scripts, the NVDA Dragon scripts, or NVDA itself. Because a large number of NVDA commands will be supported, this also creates a time-consuming task for those writing the documentation.

$15,000: DictationBridge Comes To Window-Eyes and ZoomText Fusion

At the $15,000 level, in addition to everything described above, we will:

  • Write scripts for Window-Eyes (WE) and ZoomText Fusion (ZTF). It’s our understanding that this will be a single set of scripts, as the scripting facilities in Window-Eyes and ZTF are identical.
  • Write the Dragon Professional scripts to allow users to control the WE and ZTF software with speech.
  • Ensure that these scripts are as easy to translate into non-English languages as possible.
  • Expand the documentation describing how to use WE and ZTF with DictationBridge.

The biggest challenges at this level come from adding a second and third access technology to the package: writing the scripts for WE and ZTF, expanding the documentation, and vastly expanding the test cases to ensure that DB works properly with NVDA and with the two access technologies from AI Squared.

$17,500: DictationBridge Comes To JAWS

JAWS, according to the best data we have available today, remains the most popular screen reader in the world. But, as one can read in Matt Campbell’s description of the problem entitled, Inside DictationBridge, “the JAWS scripting facility is considerably less powerful than the more modern analogues available for NVDA and WE/ZTF. The biggest issue is that JAWS can query applications for information through any number of different techniques but it cannot be called by an external application so it’s impossible to notify JAWS that something has happened on the screen in an application that does not have focus.”

Both WSR and the Dragon products pop their UI onto the screen in a non-standard manner and leave focus in the application in which the user had been working. The only way to add a “global” script to JAWS is to alter the default scripts shipped with JAWS itself. The DictationBridge team, while it hopes to serve as many users as possible, has elected not to change the JAWS default behavior, because doing so would mean that any changes made by Freedom Scientific or a third party (Doug Lee’s popular Skype scripts, for instance) would require this part of the DB code to be changed as well. Altering the default behavior of JAWS would also cause technical support issues that, depending on any number of different factors, would range from very hard to impossible to solve, as this technique would insert a large number of additional variables into the team’s test cases.

Thus, if we reach the $17,500 point, in addition to everything described in the sections above, we will:

  • Write JAWS scripts to support the Dragon line of products from Nuance.
  • Write Dragon Pro scripts to permit users to issue JAWS commands by voice.
  • Ensure that the scripts are easy to translate.
  • Expand the documentation to include how to use JAWS with DB and describe the limitations it has compared to the other screen readers we’re also supporting.

$20,000: The Gold Plated DictationBridge

If we reach our entire $20,000 goal, along with everything described above, we will:

  • Attempt to support WSR with JAWS.
  • Ensure the documentation is professionally written and that people have tested it for clarity.
  • Perform a higher level of professional testing.
  • Possibly include features suggested by users “in the wild” that we on the DB team hadn’t thought of on our own.
  • Put a few dollars in the bank to pay developers to maintain DB if and when something like a new version of Windows, WSR, or Dragon causes failures.

What If We Fall Short Of The Minimum?

The DictationBridge team is highly confident that we’ll reach the $7500 milestone; as of this writing, we have approximately $5200 in donations with roughly 40 days remaining in the campaign. Still, although we think it highly unlikely, falling short of the minimum remains a possibility. If we do not hit this milestone, we will:

  • Repay the debts we’ve incurred to get the DictationBridge campaign launched. This amounts to approximately $2000, spent mostly to hire and train the technical support staff and to acquire some copies of Dragon, a 90-day trial copy of JAWS, and a few other odds and ends. If we do not reach the minimum, nobody on the team will be compensated for their time, and no additional expenses will be considered.
  • Donate the rest of the money (minus, of course, the Indiegogo fees) to NV Access to be used as Mick Curran and Jamie Teh, the amazing engineers behind NVDA, see fit. Nearly every member of the DictationBridge and 3 Mouse Technology teams is an NVDA user, and we all trust that Mick and Jamie will use these dollars judiciously. NVDA is the only Windows screen reader to have shown market share growth in each of the past six years, and it is a tremendously valuable tool for blind people in nearly every corner of the globe. Obviously, our goal is to raise the entire $20,000 and deliver the gold plated version of DictationBridge, as we believe it will add a powerful set of additional features to the different screen readers we intend to support; but if this proves impossible, we’re proud to help NV Access in any other way we can.

In Summary

The DictationBridge team is committed to making as good a piece of software as the budget will allow. Fiscal realities, like being able to pay the developers, testers, and those writing documentation, have led us to create this collection of milestones so that the donating public can understand what each step will cost and what each milestone will allow us to build. Nearly everyone working on DictationBridge is a screen reader user, so we all have a stake in its outcome. We believe strongly that access technology for blind people should be designed and developed by members of our community, and that blind technology professionals should be paid for efforts that benefit the entire community. This is a “for us, by us” project, and we hope you choose to donate to the DictationBridge crowdfunding campaign before our June 20 deadline.

DictationBridge: a very personal project

For me, DictationBridge is a very personal project. I have been using computers since the early 80s, and for that entire time I’ve had to use the keyboard not only to type but also to control the computer’s output. I am totally blind and rely on speech output from my computer to read what is on the screen. This means I am unable to use a mouse to point and click at objects or do things such as highlight and edit text. Instead, I use a type of software called a screen reader.

After about 15 years of using computers and typing on various keyboards, my hands started to feel the stress. After long days of arrowing up and down through pages and doing all kinds of hand gymnastics on the keyboard, my hands were sore and swollen. In the early 2000s, I desperately tried to use various dictation products, to no avail. I bought several copies of IBM ViaVoice and Dragon NaturallySpeaking, hoping that the next version would play better with my screen reader than the last one had.

Eventually, about 10 years ago, a product came out that allowed me to use Dragon NaturallySpeaking with JAWS for Windows, a very expensive screen reader. All of a sudden my world opened up; I was able to control my computer with my voice and type my documents. I am a technology specialist and have worked with people with a variety of disabilities throughout my entire career. I’ve always had to stay on the leading edge of what is happening in technology to keep ahead of my clients. As cloud-based applications became the norm, the product I was using for dictation no longer worked for me. I found that NVDA, an open source screen reader, worked more effectively with the applications I had to use. Unfortunately, I was no longer able to dictate using this product. As I had to get my work done and stay on the leading edge of technology, I gave up the ability to dictate.

A few months ago a few of us started talking about how we could add the ability to dictate to NVDA, and DictationBridge was born. As we really wanted this product to work with NVDA, a free screen reader, we thought it was critical that the product itself be free to everyone. So we turned to crowdfunding. I want to ask you to help us ensure that all blind users have the ability to dictate and control their computers by voice. In today’s schools, offices, homes, and retirement communities, computers are becoming the norm. Why shouldn’t blind and visually impaired people have the same ability to use these computers in whatever way they want?

Let me describe a little of the difference DictationBridge will make. Currently, if you use a screen reader with either Microsoft speech recognition or Dragon NaturallySpeaking, the screen reader is unable to tell you what text the product is typing. This means a blind person must manually go back through all the text that was dictated, review it for errors, and then fix them. This is problematic in many ways. First, you may be dictating because you need to reduce the amount of typing you do due to hand pain, as I do. Also, both Microsoft speech recognition and Dragon NaturallySpeaking learn from the corrections you make to the recognized text. If you go back later and manually fix any recognition problems, the speech recognition will not learn and may actually get worse at recognizing what you’re saying.

One of the other critical features we will be adding to DictationBridge is the ability to know whether the microphone is on or off. A sighted person can see the lovely little microphone icon in its different positions and colors, indicating whether speech recognition is live. A screen reader user has to leave the document they are working in and hunt for the icon with a succession of keystrokes, breaking their train of thought. Then, once they’ve confirmed whether the microphone is live, they have to find their place in the document again. DictationBridge will give them an easy way to determine whether the microphone is listening. This way, when someone is creating a document and the phone rings, they can answer it and be sure that their phone conversation doesn’t get transcribed into their document.

These features are only the beginning of what we want DictationBridge to do. With your funding support, we can allow a screen reader user to interact with the dictated text and also give them the ability to issue some of their traditional keyboard commands by voice to control their screen reader. When DictationBridge is released, it will be free to anyone. By using NVDA, Microsoft speech recognition, and DictationBridge, the only cost to the blind user will be the price of their computer. This means that many more blind people will be able to turn to dictation than ever before. Thank you for your contributions to the DictationBridge project.

DictationBridge Prototype Now Available!

Welcome to the DictationBridge prototype

This is proof-of-concept code, which means it exists to show that what we claim is possible. Most of this code will not make it into production, or it will be heavily modified.

The showcase is an NVDA add-on that does the following things:

  • Echoes back recognized text when dictating with Windows Speech Recognition or with Dragon from Nuance.
  • Makes it possible for a user to interact, using speech, with the Windows Speech Recognition corrections dialog.

Current Limitations

  • When using Dragon, the echo-back feature only works in rich edit controls; therefore, there is no echo-back in web browsers or in Microsoft Office applications.
  • The Windows Speech Recognition corrections dialog is accessible only via voice.

Bug reports

Please, no bug reports yet! This is not the testing beta; that is coming later.

We appreciate feedback; however, as stated above, this is proof-of-concept code, so we are not looking for bug reports at this time. It would be a waste to fix this code, since so little of it will remain intact in the end product.

Feature requests

We will take all suggestions. However, they may or may not be possible to implement, and this proof-of-concept code does not contain all the features we already have planned for the future. The fastest way to get your suggestion implemented is to contribute to our crowdfunding campaign.

Get the prototype for yourself

You can get a copy of the DictationBridge prototype for yourself by doing the following:

Download the NVDA add-on to your computer. Then, run the add-on and confirm that you want to install it. Once this is done, restart NVDA, and you’ll have access to the DictationBridge functionality.

My students and DictationBridge

I’m Erin, a member of the DictationBridge team working on documentation and social media for the project. When I first heard about DictationBridge, I was eager to get involved for many reasons, but some of the most compelling were the stories of my students during the years I worked as a full-time assistive technology trainer.

If I had a dollar for every time I heard, “But Erin, can’t I just talk to the computer?” I could make a significant contribution to the DictationBridge funding campaign. Sometimes this question came during the frustration of learning to touch type, but often it came from a place of much more profound need. Elders in their eighties and nineties had stories they needed to pass down to their grandchildren and pushed themselves to master typing skills despite failing health, diabetic neuropathy, and hand tremors. Young people with dyslexia and other learning disabilities, some of whom had passed through the K-12 education system without gaining basic literacy skills, excelled at performing most tasks on the computer, but slumped in frustration when it came time to write documents and emails. Individuals who had experienced strokes and other brain injuries faced the complex combination of cognitive and motor skills required to both remember commands and execute them on the keyboard. Busy and fatigued graduate students looked for an opportunity to give their hands a rest while composing long papers.

Assistive technology funding varies vastly throughout the world, and I can’t possibly speak to what support individuals receive in other regions. Only some of the individuals I described above received state funded technology support, and none of them received funding for dictation solutions. Often they used PCs handed down from friends or family members, and the NVDA screen reader allowed access to their computers without financial burden. when it came to dictation software however, I didn’t have a recommendation that was financially attainable. I know there are many blind computer users besides my former students without the right tools to tell their own stories, and I hope you will join me in supporting DictationBridge. The next time someone asks me if they can talk to their computer, I want to be able to say, “Yes!”