Luke Wroblewski: Interface Inputs
Luke Wroblewski (@LukeW) was talking about Interface Inputs at dConstruct; these are my notes from his talk.
Jan 9, 2007. The speaker is 15 minutes into his presentation. It's Steve Jobs introducing multitouch with the iPhone: "we're going to build on that revolutionary interface with software".
1984: the revolutionary interface was the Macintosh 128K. [Keynote on Luke's Mac dies.] The Mac had a keyboard and a cursor. Windows, icons, menus and pointers (WIMP) came out of this.
1993 saw the birth of the web. (These dates are when each technology hit the consumer market.) Mosaic came out; it was the first widely-used graphical web browser. Sites were still mouse-and-cursor interfaces: click, right-click, double-click, scroll wheel. The keyboard is massively underused, but you can do a lot with it. We've had 29 years to figure out the mouse and keyboard.
2007 saw the introduction of the iPhone, and multitouch was the revolution in interface. Direct manipulation: tap, double tap, swipe, pinch, gestures &c. You have to consider the ergonomics of the finger. The best interfaces don't just ape the mouse-and-keyboard interface.
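(An aside from me, not the talk: on the web these gestures are only a few event listeners away. A minimal sketch of swipe detection using the standard Touch Events API; the 30px threshold is my own arbitrary choice.)

```typescript
// Minimal swipe detection with the Touch Events API.
// The 30px threshold is an arbitrary assumption, not from the talk.
let startX = 0;
let startY = 0;

document.addEventListener('touchstart', (e: TouchEvent) => {
  startX = e.touches[0].clientX;
  startY = e.touches[0].clientY;
});

document.addEventListener('touchend', (e: TouchEvent) => {
  const dx = e.changedTouches[0].clientX - startX;
  const dy = e.changedTouches[0].clientY - startY;
  // Mostly-horizontal movement past the threshold counts as a swipe.
  if (Math.abs(dx) > 30 && Math.abs(dx) > Math.abs(dy)) {
    console.log(dx > 0 ? 'swipe right' : 'swipe left');
  }
});
```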
Every time there's a new interface we need time to adapt to it. We also need to keep the older interactions in mind.
We then started getting access to sensors - ambient light, GPS, microphone, accelerometer, camera. Amazon Flow does augmented reality over the camera: you can buy what's in front of you.
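(My aside: some of these sensors are already exposed to web pages. A hedged sketch reading two of them - GPS via the Geolocation API and the accelerometer via the devicemotion event - assuming the device has the hardware and the user grants permission.)

```typescript
// GPS: one-shot position fix via the Geolocation API (prompts the user).
navigator.geolocation.getCurrentPosition((pos) => {
  console.log(`lat ${pos.coords.latitude}, lon ${pos.coords.longitude}`);
});

// Accelerometer: continuous readings via devicemotion events.
window.addEventListener('devicemotion', (e: DeviceMotionEvent) => {
  const a = e.accelerationIncludingGravity;
  if (a) console.log(`x ${a.x} y ${a.y} z ${a.z} (m/s^2)`);
});
```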
As we expand the input types we can expand the types of software we can build.
2011 saw more input: NFC, front camera, magnetometer (compass), gyroscope and more. We now have 9-axis motion sensing, which is full 3D awareness. Voice gives us Siri and Google voice search. Siri is a parallel interface to your device. NFC gives proximity data transfer. The front-facing camera can keep the screen on as long as you're looking at it - your gaze is now an input type. iOS 7 can be controlled by tilting your head.
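(Aside: that 9-axis fusion surfaces in browsers as deviceorientation events - a quick sketch of reading the three angles.)

```typescript
// The browser fuses gyroscope, accelerometer and magnetometer into
// three orientation angles delivered on each deviceorientation event.
window.addEventListener('deviceorientation', (e: DeviceOrientationEvent) => {
  // alpha: compass heading around z; beta: front-back tilt; gamma: left-right tilt.
  console.log(`alpha ${e.alpha} beta ${e.beta} gamma ${e.gamma}`);
});
```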
We now have 13 input types on a mobile!
It's not just mobile - games consoles have others, like the Xbox Kinect, where your whole body becomes the input. Windows 8 has voice control. Devices don't have to work in isolation - how can we use the best of each together? We can create multi-device interfaces.
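(Aside: voice is reaching the web too. A sketch with the Web Speech API, which is currently Chrome-only behind a webkit prefix; the `any` casts are because SpeechRecognition isn't in the standard TypeScript DOM typings.)

```typescript
// Voice input via the Web Speech API (Chrome ships it prefixed).
const Recognition =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
const recognizer = new Recognition();
recognizer.lang = 'en-GB';
recognizer.onresult = (event: any) => {
  console.log(`Heard: ${event.results[0][0].transcript}`);
};
recognizer.start();
```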
2013 and these sensors are now in Ultrabooks (a.k.a. fancy laptops). We can now undock the screen, which makes things like device motion more useful. Leap Motion can add gestures to pretty much any screen. The variety of devices and input types continues to grow.
The Nest thermostat adds temperature and humidity sensors, and the thermostat is controlled from our phones.
Google Glass has a bone-conduction transducer to produce sound only you can hear. Motion detection turns head movement (nods, even putting the glasses on) into a controller. Winking can be configured as an input. You can move your head to pan round a zoomed website. Using Google used to mean mouse and cursor - it's very different now.
We had nearly 30 years to figure out the keyboard and mouse. We've had six years to figure out two dozen new input types.
A wristband with an ECG sensor picks up your unique cardiac rhythm to authenticate you. Add a motion detector and you get gesture control too.
The next iPhone might have a fingerprint sensor; different fingers could have different tasks bound to them. The latest Bluetooth (Low Energy) lets you target micro-locations as input while using little power. Proximity can be used to identify you.
We can even use arbitrary objects to detect different touches. Door knobs, tables, hands, water. Potentially every object can be an input. The number of inputs in our world is potentially infinite.