When you build an accessible website, do you aim for code that follows standards, or for a site that is actually usable by people with disabilities?
The obvious answer is ‘both’. The reason we follow web standards is largely to make our sites accessible to as many people as possible, and specifically to people with disabilities.
This is great in theory but, as so often on the web, the practical reality is more complicated.
Writing for a better future
Remember ‘To Hell With Bad Browsers’? The Web Standards Project (WaSP) was founded at a time when there was no such thing as a standards-compliant browser. Table-based layouts were used to achieve visual consistency and code forking was required to get consistent behaviour.
The browser upgrade campaign was about using CSS layouts at a time when web designers were afraid of them, because many users still had old browsers. By separating content from presentation in 2001, WaSP wasn’t writing for the web as it was — it was writing for the web that it wanted.
What has this got to do with accessibility? I argue that the state of adaptive technology today can be compared to the state of the 4.0 web browsers. I’ll attempt to demonstrate this with two examples: image replacement and Ajax.
The current stalemate in font distribution on the web led to innovative but non-ideal methods of embedding typography, like image replacement. The concept of image replacement is simple: use semantic markup for headings, but use CSS to hide the browser-generated text and show an image of fine typography instead. In this way devices which don’t understand CSS, for example search engines, can still access the content, while CSS-aware graphical browsers get proper typography.
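A minimal sketch of one common image-replacement pattern (the selector names and image path here are illustrative, and the styles would typically be delivered in a screen-media stylesheet):

```html
<!-- Semantic heading; the span wraps the text that CSS will hide -->
<h1 id="site-title"><span>Example Site</span></h1>

<style>
  /* Hide the browser-generated heading text... */
  #site-title span { display: none; }
  /* ...and paint an image of the typography in its place */
  #site-title {
    background: url(site-title.png) no-repeat;
    width: 300px;
    height: 80px;
  }
</style>
```

A search engine, which ignores the CSS, still indexes the heading text; the question is whether a screen reader behaves the same way.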
The theory was that screen readers would behave like search engines, by reading out the headings as if the image replacement had never happened. The CSS rules that hid the text were delivered in a “screen” media type stylesheet and so should not have affected the “aural” rendering of the page. It turned out that this assumption was wrong: Joe Clark demonstrated that most screen readers never read the hidden text, because they first rendered the document using a visual browser such as Internet Explorer, and then read out the resulting document tree.
Clark argued that this behaviour is justified because current screen readers are “multimodal” devices, while Dave Shea disagreed. Clark’s approach is well-intentioned, but I argue that it is the equivalent of an anti-standards viewpoint during the browser wars.
What is an accessibility-conscious web developer supposed to conclude? Presumably either that new approaches like Ajax are not accessible and are to be avoided, or that massive screen-reader testing is necessary to justify their use. I argue that both of these conclusions are misguided, and that if the majority of web developers accept them, they will have a negative impact on the future state of adaptive technology.
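For context, the kind of Ajax update at issue can be sketched like this (the URL and element id are hypothetical). The page content changes without a reload, so a screen reader that reads out a static rendering of the document tree may never announce the new text:

```html
<div id="news">Loading…</div>

<script>
  // Fetch a fragment from the server and inject it into the live page.
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/latest-news.html');
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // The document tree changes silently: nothing here signals to
      // assistive technology that this region has been updated.
      document.getElementById('news').innerHTML = xhr.responseText;
    }
  };
  xhr.send();
</script>
```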
The return of angry laziness
The central question here is: how much time are we prepared to spend testing one particular user-agent’s behaviour? There are lots of variables: different screen readers, different software versions, complex user preferences and a range of proficiency among users. The current market-leading screen reader (JAWS) is an expensive piece of proprietary software, with no help for web developers except a demonstration version that requires frequent system reboots. And we’re only talking about one group of disabled users.
If it’s broken, it needs fixing
If we work around all the non-standard behaviour we can find, simply for the sake of people who are using broken technology, we’re not really helping them in the long run. Adaptive technology vendors aren’t going to start following standards unless we can demonstrate that their current implementations are not good enough; and the only way that’s going to happen is if they see modern standards-compliant code that doesn’t work. This is closely related to the recent WaSP debate about how to embed Flash: should user experience always come before web standards?
New and improved
Both Apple and Sun are working on accessible operating systems that could ultimately remove the need for separate screen reading software. I don’t know how well these technologies follow web standards at the moment, but perhaps developing websites as if adaptive technology already followed web standards would inspire vendors to make that a reality?
Use the sting
The recent launch of WaSP’s Assistive Technology Initiative is a positive step, but it needs to be accompanied by pressure. A good way to exert this pressure would be to continue the use of innovative techniques like image replacement and Ajax, making sure that we follow standards, but not insisting on 100% interoperability with assistive technology, if we can show that a lack of standards-compliance is the cause. The WCAG 2 debacle demonstrates that vendors don’t always act in the interests of the web and its users. If we want to stand up for accessibility, we need to show that we can still sting.