Aaron Gustafson: Where do we go from here?
Aaron Gustafson spoke about where we go from here at Responsive Conf; these are my notes from his talk.
The web is capable of going anywhere and doing anything. RWD is the first time that the Dao of Web Design came into reality: it is progressive enhancement for visual design. We can't provide the same experience to everyone - there are just way too many variables. Every person is different, and needs can be transitional.
The vision for the web is that it could be created once and accessed everywhere. A11y is about access - it's not a synonym for screen reader. People consume content in different ways. The language we use affects how people access our site. Performance is an a11y concern. Provide a good experience for everyone.
When using a phone the experience is different to desktop, but you still need to do the same things. Start with a baseline that works everywhere, then make it better if we can. We shouldn't be telling our users to change their browser; we should be working with what they chose. We can't control the world, but we can control our reaction to it. Browser proliferation is a feature - we need to educate those around us who haven't realised this yet.
The veil of ignorance (Rawls's thought experiment). People gravitate towards creating the most egalitarian experience when they don't know where they will fit into the hierarchy.
Allow people to choose the path they want to take. The path they take may vary - there's not necessarily even one path for one person.
We need to be aware of how easy it is to activate controls. Media Queries Level 4 (MQ4) and pointer events can help us determine the amount of control users have. Multi-modal interfaces are a challenge; don't rely on only one input type being present.
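As a minimal sketch of the MQ4 point (mine, not from the talk): the `pointer` media feature reports how precise the primary input is, and can be queried from script via `matchMedia`. The helper name `tapTargetSize` is hypothetical; the idea is that a coarse pointer (touch) gets a larger target than a fine one (mouse), and when we can't tell, we fall back to the safer, larger size.

```typescript
// Hypothetical helper: pick a tap-target size from the MQ4 `pointer` feature.
function tapTargetSize(): string {
  // matchMedia only exists in browsers; read it defensively so the
  // code also runs (and falls back sensibly) outside one.
  const mm: ((query: string) => { matches: boolean }) | undefined =
    (globalThis as any).matchMedia;
  if (!mm) {
    return "44px"; // unknown input: assume coarse and choose the safe size
  }
  // Fine pointer (e.g. a mouse) can hit smaller targets accurately.
  return mm("(pointer: fine)").matches ? "32px" : "44px";
}
```

The same query can of course live purely in CSS (`@media (pointer: coarse) { … }`); the script form is useful when layout decisions happen in JavaScript.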
What will interfaces look like when we are browsing using only our gaze? Eye tracking is becoming more common - smartphones and watches are even doing it. Facial tracking can be used to control an interface. Entering text is hard using gaze, so voice recognition is a good fit there. Exposing structured data within HTML should make it easier for voice to interact with a page.
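A hedged sketch (mine, not the speaker's) of the kind of structured data meant here: a schema.org JSON-LD block embedded in the page, which machine agents such as voice assistants can parse far more reliably than free-form markup. The recipe content is purely illustrative.

```typescript
// Illustrative schema.org object - the names follow the schema.org
// Recipe vocabulary; the values are made up for this example.
const recipe = {
  "@context": "https://schema.org",
  "@type": "Recipe",
  name: "Pancakes",
  recipeIngredient: ["flour", "milk", "eggs"],
};

// The script element a page would embed in its HTML:
const jsonLd =
  `<script type="application/ld+json">${JSON.stringify(recipe)}</script>`;
```

A voice agent asked "what do I need for this recipe?" can answer from `recipeIngredient` directly, instead of guessing from headings and list items.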
If our pages don't make sense when read out then they don't make sense at all.
Headless voice-based devices already exist - such as Amazon's Echo. Voice will create new interfaces and experiences; we have an advantage on the web because we're used to dealing with voice as an enhancement to text. We can help to bridge the digital divide and the literacy gap.