
Results - General Trends

Each disability type and assistive technology tested revealed specific issues, which are detailed in the next section, but several general trends emerged that adversely impact multiple disability types:

  • Keyboard focus is not always visible
  • User interface elements are not visible in all circumstances
  • Modal windows allow users to interact with “locked” portions of the application
  • Users need to “explore” the user interface outside the standard interaction methods
  • Over-reliance on shortcut keys
  • Inconsistent implementation across browsers
  • No ability to apply established Web accessibility standards
  • Assistive technology preferences are not saved
  • Not utilizing best practices in how assistive technologies interact with applications

Keyboard focus is not always visible

This is important because several assistive technologies rely on keyboard-only input, or on imitating keyboard input. Navigating with a keyboard requires being able to see where the “focus” is. The focus is like a pointer that shows the user where their keyboard input will be directed, such as typing text or “clicking” a link.
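As a generic illustration (not Google's code), a visible focus indicator takes only a few lines of CSS; the selectors and colors here are arbitrary:

```html
<style>
  /* Keep the focus ring visible; never suppress the browser's default
     outline without providing a replacement like this one. */
  a:focus,
  button:focus {
    outline: 3px solid #1a73e8; /* high-visibility focus ring */
    outline-offset: 2px;        /* keep the ring clear of the element */
  }
</style>
<button>Share</button>
```

With such a rule in place, tabbing to the button draws an unmistakable ring around it, so keyboard users always know where their input will go.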

User interface elements are not visible in all circumstances

“Visible” means different things in different contexts. In high contrast mode, it means certain items become invisible because they do not render correctly in that mode. For a screen reader user, it means the screen reader software was not aware of, or could not detect, the presence of certain user interface elements, and thus could not interact with them.
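Both failures can arise from the same markup choice. As a hypothetical sketch (the class name, handler, and file name are invented for illustration): an icon drawn as a CSS background image may be stripped in high contrast mode, and a clickable span exposes no role or name for a screen reader to announce, whereas a native button with text content survives both situations:

```html
<!-- Problematic: background-image icons can vanish in high contrast
     mode, and a bare span gives screen readers nothing to announce. -->
<span class="icon-print" onclick="printDocument()"></span>

<!-- More robust: a native control with visible text and an image fallback. -->
<button onclick="printDocument()">
  <img src="print.png" alt=""> Print
</button>
```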

Modal windows allow users to interact with “locked” portions of the application

Modal windows are windows that pop up within an application and require the user to provide some type of input before returning to the main application. A file chooser is an example of a modal window. The way Google Apps implements modal windows, it is quite easy for an assistive technology user to start interacting with the part of the page behind the modal window even while the modal window remains open. Once the user is outside of the modal window, it is at times almost impossible to get back into the window to select “OK” or “Cancel” to close it. Meanwhile, the user may believe they are interacting with the window behind the modal window, without realizing that their interactions are being ignored by the application. Often the only way out of this situation is to reload the page and restart the application.
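One standards-based approach to this problem (a sketch, not Google's implementation) is to expose the dialog through ARIA so assistive technologies treat it as modal, and to manage keyboard focus around it:

```html
<!-- aria-modal tells assistive technologies that content outside the
     dialog is inert while it is open; support varies across browsers
     and screen readers. -->
<div role="dialog" aria-modal="true" aria-labelledby="dialog-title">
  <h2 id="dialog-title">Choose a file</h2>
  <!-- file chooser contents -->
  <button>OK</button>
  <button>Cancel</button>
</div>
```

The application must still manage focus in script: move focus into the dialog when it opens, trap the Tab key within it, and return focus to the triggering control when it closes.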

Users need to “explore” the user interface outside the standard interaction methods

Google has defined a large set of shortcut keys to access all of the functionality of Documents and the Document List. Unfortunately, not all functions are tied to these shortcut keys, or the keys are implemented incorrectly. This forces the user to try other methods of accessing those functions. Keyboard-only users press the Tab key to jump from control to control; screen reader users must also rely on the Tab key and other methods to accomplish some functions. Since these workflows are not the officially supported way of interacting with Documents or the Document List, discovering how to access each part of the user interface is often unintuitive, and in some cases the user can get lost and be unable to continue working without reloading the whole page.

Over-reliance on shortcut keys

Assigning shortcut keys to every function is one way to give users access to all parts of an application; however, relying on them too heavily leads to shortcut keys being used where they are not needed. Both Web browsers and desktop applications have well-established ways of navigating within and interacting with their respective applications and content. Users are familiar with these established methods of interaction, and Documents and the Document List should build upon them whenever possible.

Inconsistent implementation across browsers

Google has stated that certain browser and screen reader combinations are officially supported for accessing Documents and the Document List. This is understandable to a degree, given that Google is implementing accessibility through the Accessible Rich Internet Applications suite (ARIA), which is supported at varying levels by different screen readers and browsers. However, some of the assistive technologies tested should work regardless of whether ARIA is used, and thus should not depend on the user’s browser preference.

No ability to apply established Web accessibility standards

The Documents application does not allow authors to create accessible documents. Examples of Web accessibility standards that could not be applied to a document or document element include:

  • Alternative text for images
  • Table headers or other table accessibility information
  • MathML or LaTeX for math equations
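For comparison, in ordinary HTML the first two items are each a one-line fix, and equations can be marked up with MathML's math element, which some screen readers can speak and navigate. The file name and table contents below are invented for illustration:

```html
<!-- Alternative text for an image -->
<img src="chart.png" alt="Bar chart of enrollment by quarter">

<!-- Table headers, so screen readers can announce the column for each cell -->
<table>
  <tr><th scope="col">Quarter</th><th scope="col">Enrollment</th></tr>
  <tr><td>Q1</td><td>1,200</td></tr>
</table>
```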

Assistive technology preferences are not saved

Google requires JAWS users in Firefox to activate a link titled "enable screen reader" in order to enable certain features for screen reader users. If this link is not activated, the user cannot use Documents. This is problematic because it is not the first link the user hears on the page, so the user has to search for it in each document; it takes pressing the Tab key 18 times after the document loads to find and activate this link. Additionally, if certain functionality must be enabled only for screen reader users, this should be a profile-level setting so that it is applied automatically for users who need it.

Not utilizing best practices in how assistive technologies interact with applications

Based on trends observed so far in how Google is making Documents and the Document List more accessible to screen reader users, the direction of its accessibility implementation is troubling. There are well-established methods for assistive technology users to interact with computer applications, Web pages, and even Web-based applications. Instead of building upon these methods, Google is redefining how screen reader users interact with their computers. Google seems to rely too heavily on shortcut keys to accomplish almost every task, requiring the user to memorize large sets of key combinations. Instead, it should make the user interface itself more explorable and navigable.

Additionally, by concentrating on screen-reader-specific solutions rather than best practices in accessible Web design, adding support for other assistive technologies may prove more difficult. It might require a separate solution for each assistive technology rather than one solution for all. Typically, in accessible Web design, the developer builds an application to a standard, independent of any particular assistive technology’s needs. Assistive technology vendors then know how to interact with the application, because they know how to interact with the standard. By defining different ways of interacting with the user interface for different assistive technologies, it becomes necessary to support multiple user interfaces for the myriad of possible assistive technologies. It may also require working one-on-one with assistive technology vendors to implement custom solutions. This becomes very burdensome to maintain, and it often results in certain user groups being left behind, as support for their assistive technologies is still being developed while new features are introduced for everyone else.