In honour of Global Accessibility Awareness Day (GAAD) today, I’m throwing this method out into the ether that is the web. It’s not the quote-unquote “technique” I’m offering, though, in the sense that I really expect anyone to use it. Rather, my aim is to get people thinking about the content they consume and produce on and for the web, period. And thinking a little differently about said web content.
After all, that’s the point of going through the effort of raising awareness: to think about something in a manner you aren’t typically conditioned to think about it. In other words, it’s not so much the result I’m most interested in here; it’s the reasons for, and the process behind, that result. Specifically, it’s my hope to draw some attention towards automatic text transcriptions of audio-only podcasts.
And I’m aware such a solution is still a ways off from being practical, as in reliably usable. But it’s never too early to entertain prospects. And experiment. Read “Automatic audio text transcriptions” in its entirety
I’ve spent some time over the past few months thinking about how I craft the content I publish for the web, specifically regarding my use of language when writing. In one particular context (not to suggest my writing is free from problems in others), it’s not as inclusive as it should be.
I’m referring to how a screen reader user experiences the words I write. And with my limited use of the technology, I’ve taken note of something quite specific. If you use a screen reader to speak my words, I’m not sure you, as a listener, will get all of the “subtleties” (case in point) of my intent.
Using the example I cited immediately above, precisely how is a screen reader user supposed to know I’ve put the word “subtleties” in quotation marks? Just typing quotation marks before and after the word isn’t enough to make a screen reader speak them. Read “Language is a curious beast, ain’t it?” in its entirety
An interested party left a comment on a post I wrote back in November of last year, called The frustrations of VoiceOver. The commenter wondered whether the situation I described in said post was the same for VoiceOver in Safari on iOS (meaning on both the iPhone and iPad). Problem being, I had one helluva time testing the “bug” with VoiceOver on iOS.
Long story short (and somewhat uninteresting, for the scope of this piece at least), yesterday I was able to clear the biggest impediment I had toward testing this quirk in iOS: how do I even turn VoiceOver on to test? iOS will not recognize my double taps when it asks for confirmation before turning VoiceOver on. “Is this really what you want to do? iOS’s gestures change when VoiceOver is turned on” (I’m quoting from memory; that’s more than likely not exactly what it says). So I put a call out on Twitter asking how I might overcome this.
Although the solution isn’t all that intuitive to discover on one’s own, that doesn’t make it any less liberating or powerful. Read “The Split Tap” in its entirety