- Native multimedia: why, what, and how?
- Codecs: the horror, the horror
- Rolling custom controls
- Multimedia accessibility
- Summary
Multimedia accessibility
We've talked about the keyboard accessibility of the video element, but what about transcripts and captions for multimedia? After all, there is no alt attribute for video or audio as there is for <img>. The fallback content between the tags is meant only for browsers that can't cope with native video, not for people whose browsers can display the media but who can't see or hear it because of disability or circumstance (for example, being in a noisy environment or needing to conserve bandwidth).
The theory of HTML5 multimedia accessibility is excellent. The original author should make a subtitle file and put it in the container Ogg or MP4 file along with the multimedia files, and the browser will offer a user interface whereby the user can get those captions or subtitles. Even if the video is "embedded" on 1,000 different sites (simply by using an external URL as the source of the video/audio element), those sites get the subtitling information for free, so we get "write once, read everywhere" accessibility.
That's the theory. In practice, no one knows how to do this; the spec is silent, and browsers do nothing. That's starting to change; at the time of this writing (May 2010), the WHATWG have added a new <track> element to the spec, which allows the addition of various kinds of information such as subtitles, captions, descriptions, chapter titles, and metadata.
The WHATWG is specifying a new timed text format called WebSRT (www.whatwg.org/specs/web-apps/current-work/multipage/video.html#websrt) for this information, which is one reason that this shadowy 29th element isn't in the W3C version of the spec. The format of the <track> element is
<track kind=captions src=captions.srt>
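To give a flavour of the format, here's a sketch of what an SRT-style cue file such as captions.srt might contain: numbered cues, each with a start and end timestamp followed by the caption text. WebSRT builds on this plain-text convention, but the exact syntax was still in flux at the time of writing, so treat this as illustrative rather than definitive:
1
00:00:03,000 --> 00:00:05,000
Hello, good evening and welcome.

2
00:00:07,350 --> 00:00:09,250
Let's welcome Mr Last Week, singing his poptabulous hit "If I could turn back time!"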
But what can you do right now? There is no one true approach to this problem, but here we'll present one possible (albeit hacky) interim solution.
Bruce made a proof of concept that displays individual lines of a transcript, which have been timestamped using the new HTML5 data-* attributes:
<article class=transcript lang=en>
<p><span data-begin=3 data-end=5>Hello, good evening and welcome.</span>
<span data-begin=7.35 data-end=9.25>Let's welcome Mr Last Week, singing his poptabulous hit &ldquo;If I could turn back time!&rdquo;</span>
</p>
...
</article>
JavaScript is used to hide the transcript <article>, hook into the timeupdate event of the video API, and overlay spans as plain text (therefore stylable with CSS) over (or next to) the video element, depending on the current playback time of the video and the timestamps on the individual spans. See it in action at http://dev.opera.com/articles/view/accessible-html5-video-with-javascripted-captions/. See Figure 4.6.
Figure 4.6 The script superimposes the caption over the video as delectable selectable text.
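To give an idea of how such a script might hang together, here's a simplified sketch of the technique, not Bruce's actual code; the element IDs "player" and "caption" are assumptions for illustration:
<script>
  // Sketch of the captioning technique described above; assumes a
  // <video id=player> and an empty <div id=caption> in the page.
  var video = document.getElementById('player');
  var captionBox = document.getElementById('caption');
  var transcript = document.querySelector('article.transcript');
  var spans = transcript.querySelectorAll('span[data-begin]');

  // Hide the full transcript; single lines are shown as captions instead.
  transcript.style.display = 'none';

  video.addEventListener('timeupdate', function () {
    var now = video.currentTime, text = '';
    for (var i = 0; i < spans.length; i++) {
      if (now >= parseFloat(spans[i].getAttribute('data-begin')) &&
          now <= parseFloat(spans[i].getAttribute('data-end'))) {
        text = spans[i].innerHTML;
        break;
      }
    }
    captionBox.innerHTML = text; // plain, selectable text, stylable with CSS
  }, false);
</script>
Positioning the caption element absolutely over the video with CSS gives the superimposed effect shown in Figure 4.6.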
The BBC has a similar experiment at http://open.bbc.co.uk/rad/demos/html5/rdtv/episode2/ that takes its subtitles from an external JavaScript file (http://open.bbc.co.uk/rad/demos/html5/rdtv/episode2/rdtv-episode2.js). That's closer to the vision of HTML5, but it doesn't have the side effect of allowing search engines to index the contents of the transcript.
Silvia Pfeiffer, a contractor for Mozilla, has some clever demos using HTML5 videos and some extra extensions (that are not part of the spec) at www.annodex.net/~silvia/itext/.