Jacob Rossi: Building for adaptive input experiences
Jacob Rossi spoke about building for adaptive input experiences at Mobilism; these are my notes from his talk.
As the device ecosystem explodes, the number of input types is growing too, and we're still exploring them. Point and click used to be the standard input but now feels old-fashioned - mostly thanks to touch. You're sliding a finger across glass, but we want it to feel like direct manipulation of an object - to make people forget they're touching glass. The UI response tolerance for touch is 50ms and 3mm.
Performance is key - a delay breaks the user's expectations. If gesture handling doesn't track the gesture continuously while it happens, it's not really gesture handling.
Gestures aren't intuitive; if every app used different gestures, each would be a secret handshake - stick to the defaults.
## Panning and zooming
This is handled extremely well by operating systems. Use native scrolling (`overflow: scroll`) because browsers have heavily optimised it - certainly better than you ever will with a custom handler.
`scroll-snap-points` is a new property that will help with this: it lets you set points at which the browser will animate to a stop. It can be used for a carousel, but also for show/hide UI elements - and it hooks into CSS transitions and JS APIs.
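As a sketch, a snap-scrolling carousel might look something like this (using the later standardised `scroll-snap-type`/`scroll-snap-align` syntax; the `scroll-snap-points` draft discussed in the talk used slightly different property names):

```css
/* A horizontal carousel that snaps to each slide. The browser animates
   the scroll to a stop at each snap point - no custom JS scrolling. */
.carousel {
  overflow-x: scroll;
  scroll-snap-type: x mandatory; /* always settle on a snap point */
}
.carousel > .slide {
  scroll-snap-align: start; /* each slide's leading edge is a snap point */
}
```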
## Zoom, viewports and fixed elements
Fixed elements and zooming are based on a lot of assumptions from when the web was rarely zoomed. Fixed elements will now zoom based on a virtual - layout - viewport at 100% zoom. The page will then scroll within the zoomed - visual - viewport.
What if you want something fixed to the screen?
`device-fixed` is still experimental, but would attach to the visual viewport rather than the layout viewport.
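As a hedged sketch of how this might be written (assuming the experimental `position: device-fixed` value, which only shipped in IE/Edge at the time):

```css
/* A toolbar that should stay on screen even while the user is zoomed in.
   Browsers that don't understand device-fixed keep the fixed fallback,
   which attaches to the layout viewport instead. */
.toolbar {
  position: fixed;        /* fallback: fixed to the layout viewport */
  position: device-fixed; /* experimental: fixed to the visual viewport */
  bottom: 0;
}
```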
## Pointer Events

Pointer Events collects different inputs (touch, click, pen &c.) into a single API that's as simple as the existing mouse events - `pointerenter`, for example, mirrors `mouseenter`. You can choose to share handlers across inputs, but also break them out based on `event.pointerType` if you need separate handling.
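A minimal sketch of that branching, with hypothetical return values just for illustration - the handler is plain logic, so in a page you would attach it with `element.addEventListener('pointerdown', handlePointerDown)`:

```javascript
// One handler for every input device; event.pointerType tells us which
// device produced the event, so we can specialise only where we need to.
// The returned objects are illustrative, not part of any spec.
function handlePointerDown(event) {
  switch (event.pointerType) {
    case 'touch':
      return { action: 'drag', slop: 10 }; // bigger hit slop for fingers
    case 'pen':
      return { action: 'draw', pressure: event.pressure };
    case 'mouse':
    default:
      return { action: 'select', slop: 1 };
  }
}
```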
You need to disable pan/zoom in any area that's handling touch itself. The `touch-action` CSS property tells the browser which actions you are allowing. This isn't a JS API because we don't want the browser to have to wait for the developer to call `preventDefault` (or not) before it knows whether to scroll. By putting it in CSS we get out of the slow, single-threaded JS path - we stay on the fast, hardware-accelerated display thread.
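For example (the selectors here are hypothetical, but the property values are from the spec):

```css
/* Declare up front which default touch behaviours the browser keeps,
   so it never has to wait on JS before scrolling. */
.carousel {
  touch-action: pan-y; /* browser keeps vertical panning; horizontal swipes go to JS */
}
.map {
  touch-action: none;  /* the app handles all panning and zooming itself */
}
```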
To handle multi-touch, `pointerId` gives an identifier for each pointer source - not just multiple fingers, but also mouse, pen &c.
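A sketch of tracking simultaneous pointers by `pointerId` - the tracking logic is plain JS (so it can be exercised with event-like objects); in a page you would wire these functions to `pointerdown`, `pointermove` and `pointerup`:

```javascript
// Track every active pointer - finger, pen or mouse - keyed by pointerId.
const activePointers = new Map();

function pointerDown(event) {
  activePointers.set(event.pointerId, { x: event.clientX, y: event.clientY });
}

function pointerMove(event) {
  // Only update pointers we saw go down; hover moves are ignored.
  if (activePointers.has(event.pointerId)) {
    activePointers.set(event.pointerId, { x: event.clientX, y: event.clientY });
  }
}

function pointerUp(event) {
  activePointers.delete(event.pointerId);
}
```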
We can also read device properties such as `pressure` and `tiltX`/`tiltY` from the same events.
Other arbitrary input devices can just plug into this spec.
## Gestures

Gestures are still being explored - there are no standards here yet. IE has a gesture object in JS that you feed pointer input, and it fires gesture events back. It would be good to be able to tie gesture events into CSS transitions - it's possible, but there is complex matrix maths involved. By exposing an API surface, browser vendors let developers link things together at the right level for optimised performance.