Efforts to improve the user interface have largely focused on making what we click on, and type into, more intuitive. Although this field of user-experience design has created operating systems, websites and devices that are ever more user friendly, they remain stubbornly bound to this traditional relationship between human and computer.
Well, hang onto your mousepads because everything is about to change. We are currently going through the most significant evolution in the way we work with computers since Steve Jobs launched the Macintosh in 1984.
Here are the top seven user interface revolutions that were until recently the domain of science fiction. Each one listed here is now either a real product, or at least in prototype, and coming to a desktop near you in the next 10 years. Let's look at what is happening now, what is coming in the next two to three years, three to five years, and five to 10 years and beyond.
1. Touch

Perhaps it's a little hasty to dismiss touch as merely a replacement for the mouse. While this was definitely true in the '90s, Apple's introduction of multi-touch (swiftly copied by other phones and tablets) changed everything.
By now the magical experience of swiping and pinching, which we have all had on the iPad and seen on Microsoft's Surface table, is commonplace. But touch is finally set to come to all our computing devices. The next generation of monitors and laptops will have it as a standard feature.
2. Gesture control
The Nintendo Wii introduced the world to the idea of gesture-based user interfaces. That was followed by the Microsoft Kinect, currently the fastest-selling consumer electronics device in the world. But these are gaming accessories - how have they changed personal computing?
A new class of gadgets - including a PC version of the Kinect itself, but most compellingly seen in the Leap Gesture Control - promises to break the mouse-keyboard partnership. The Leap is a tiny box, no bigger than a USB stick, that can detect a full range of hand motion including each finger individually.
This is a highly visceral way of engaging with a computer because it mimics how we engage with everyday objects. Grabbing, pulling, flicking, swiping, pushing - all of these are now viable and programmable computer instructions. Perhaps the greatest concern is erasing your work by sipping your tea.
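To give a sense of how such motions become "programmable computer instructions", here is a minimal sketch of a gesture-to-command dispatcher. The `GestureEvent` structure and gesture names are hypothetical, loosely inspired by devices like the Leap - this is not any real device's API.

```python
from dataclasses import dataclass

# Hypothetical gesture event, loosely modelled on what a hand-tracking
# device might report. Not the Leap's actual API.
@dataclass
class GestureEvent:
    kind: str        # e.g. "swipe", "pinch", "grab", "flick", "push"
    direction: str   # e.g. "left", "right", "in", "out"

def handle_gesture(event: GestureEvent) -> str:
    """Translate a detected hand motion into an application command."""
    commands = {
        ("swipe", "left"): "previous_page",
        ("swipe", "right"): "next_page",
        ("pinch", "in"): "zoom_out",
        ("pinch", "out"): "zoom_in",
        ("flick", "up"): "scroll_up",
    }
    # Anything unrecognised (a stray reach for the teacup, say) is ignored.
    return commands.get((event.kind, event.direction), "ignore")
```

The real engineering challenge is exactly the tea-sipping problem above: deciding which motions are intentional commands and which should fall through to "ignore".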
|The Microsoft Kinect in action|
3. Voice control

Voice control can be viewed as sci-fi or mundane. In the former, we remember Dave trying to persuade HAL to open the pod bay doors, or Captain Kirk asking the computer to locate Mr Spock. For the latter we have the experience of trying to convince our cellphones to "phone Candy" and ending up having an awkward conversation with Sandy.
Siri - released with the Apple iPhone 4S - has improved voice instructions to the point that many common functions are now reliably handled by voice commands. It also has a sense of humour which has charmed its users.
However, most voice recognition - and certainly speech synthesis in response - is still not quite good enough. But it's close. The latest Mac OS X has dictation built in, and Windows 8 isn't far behind.
Voice is good for dictation but it's also great for any hands-free environment. That includes phones but also a new range of portable, wearable and everyday devices that are also just around the corner.
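Once speech has been turned into text, the remaining work is interpreting the transcript as an intent. Here is an illustrative command parser that assumes a speech-to-text step has already run; the phrases and intents are invented for the example, and the hard part - not mishearing "Candy" as "Sandy" - belongs to the recogniser, not this parser.

```python
import re

def parse_command(transcript: str):
    """Map a transcribed utterance to an (intent, argument) pair.

    Assumes speech-to-text has already produced the transcript;
    the intent grammar here is purely illustrative.
    """
    text = transcript.lower().strip()
    m = re.match(r"(?:phone|call)\s+(.+)", text)
    if m:
        return ("call", m.group(1))
    m = re.match(r"(?:search|find)\s+(?:for\s+)?(.+)", text)
    if m:
        return ("search", m.group(1))
    return ("unknown", text)
```

A dictation mode would skip the grammar entirely and pass the raw transcript through, which is why dictation works well today even where free-form commands still stumble.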
4. Heads-up displays

Heads-up displays (or HUDs) have been around for some time, popularised through many Hollywood films about fighter pilots.
The prototypical consumer heads-up display came with the advent of apps like Layar for the iPhone, where holding up the phone overlaid an informational layer on the world as seen through the phone's camera.
But Google Glass, one of the first major forays of the Internet giant into an Apple-like hardware play, takes things to the next level. These glasses put a computing film between the wearer and the world, with an almost endless array of potential applications such as giving directions, annotating physical locations with recommendations and regional information, and letting you answer your email while walking.
One step further is contact lenses that do the same thing without having to don a new accessory.
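Under the hood, an overlay like Layar's needs some basic geometry: given the viewer's position and a point of interest, how far away is it and in which compass direction? A sketch of that calculation (the standard haversine distance and initial bearing; the function name is my own):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Distance in metres and initial compass bearing in degrees
    from (lat1, lon1) to (lat2, lon2). This is the basic geometry
    an AR overlay needs to decide where to draw a label."""
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    # Haversine formula for great-circle distance
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    dist = 2 * R * math.asin(math.sqrt(a))
    # Initial bearing, 0 = north, 90 = east
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return dist, bearing
```

Comparing that bearing against the direction the camera (or the glasses) is facing tells the software whether the annotation belongs on screen, and where.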
Voice instruction would seem to be an essential pairing with this technology - and, as we have seen, it is already here.
|Google co-founder Sergey Brin touts the Project Glass computerised glasses.|
5. Wearable computers
From being something on the desk, to something you can carry in a bag, to something in your pocket - computers have changed dramatically in the past ten years.
They're going through another revolution now into things you can wear. The previously mentioned heads up display is just one example. A wearable computer could track your every step, know your location, allow you to project an image onto a nearby surface, play music or make a payment in-store.
This is the next generation of cellphone - the final break from the "phone" metaphor to something completely new. Phoning will still exist, but instead of a handset you will wear the computer on different parts of your body and its functions will be integrated with your daily activities.
If this sounds fanciful, look at the Nike Fuelband that last year became one of the top selling gadgets in the world. It's a tiny wearable computer that measures physical activity and helps you achieve your fitness goals.
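How does a wrist-worn gadget "measure physical activity"? Typically by watching an accelerometer. Here is a deliberately simplified step counter of the general kind such devices run - real trackers use far more robust filtering, and this threshold scheme is only illustrative:

```python
def count_steps(magnitudes, threshold=1.2):
    """Count steps as upward crossings of a threshold in a stream of
    accelerometer magnitude readings (in g). A toy version of what a
    fitness wearable does; real devices filter much more carefully."""
    steps = 0
    above = False
    for m in magnitudes:
        if m > threshold and not above:
            steps += 1      # signal just crossed the threshold upward
            above = True
        elif m <= threshold:
            above = False   # re-arm for the next crossing
    return steps
```

From a running total like this, a device can derive distance estimates, activity points and progress toward a daily goal.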
|The Nike Fuelband measures physical activity and helps the user achieve their fitness goals.|
6. Holograms

True holograms, projected with lasers, are too expensive and energy-hungry to be possible anytime soon. But recent innovations have seen faux-holograms (like the magical appearance of Tupac Shakur, who died in 1996, live on stage) step into view (http://www.youtube.com/watch?v=TGbrFmPBV0Y).
Various technologies offer the possibility of projecting an image into 3D space, creating the illusion of a three-dimensional object - or, at the very least, a three-dimensional video image, which would be perfectly acceptable for a meeting where the other parties appear in three dimensions around the table.
Musion - the company behind the Tupac appearance - has already partnered with Cisco to show off what a holographic conference presentation might look like.
It's not quite the real thing, but it's compelling and probably good enough to start becoming commonplace. A computer image that is no longer contained by a screen but appears to float in mid-air and can be walked around, viewed from many angles and manipulated in three dimensions would break another part of the paradigm we have become used to - the monitor.
7. Direct brain interfaces

As insane as it sounds, prototypes for connecting the human brain directly to computers have existed for some time. Recent reports say researchers at Caltech are working on a MEMS-based robotic probe that can implant electrodes into the brain to interface with particular neurons.
The immediate application of this is for disabled people to control robotic limbs, get direct input from computerised eyes or ears and produce speech. But it's a small leap to see how you could apply this to all computing applications. Think "search recipe for pasta" and a pasta recipe appears on your heads up display.
Even more intriguing is the idea that you could mimic telepathy by sending messages to someone else, simply by thinking it.
A lot more progress needs to be made in understanding the brain before these kinds of complex interactions will be possible but basic actions - like opening your garage door simply by thinking about it - will be possible in the relatively near future.
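The "open your garage door by thinking about it" scenario boils down to decoding: turning measured neural activity into a yes/no command. Here is the shape of that idea as a hand-set linear rule over a few hypothetical neurons' firing rates - real brain-computer-interface decoders are trained on recorded data, and the weights and threshold below are invented for illustration:

```python
def decode_intent(firing_rates, weights=(0.8, -0.3, 0.5), threshold=1.0):
    """Toy BCI decoder: a weighted sum of firing rates (spikes/sec)
    from a few hypothetical recorded neurons, compared to a threshold.
    Real decoders learn these weights from data; these are made up."""
    score = sum(w * r for w, r in zip(weights, firing_rates))
    return "open_garage" if score > threshold else "no_action"
```

The gap between this sketch and practice is exactly the article's point: with today's understanding of the brain, only such coarse, binary intentions can be decoded reliably.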
The mouse and the keyboard - and, more recently, more sophisticated gaming controllers - have defined how we interact with the machines we have made for decades. We are on the brink of a new era, one marked by a more intimate and pervasive kind of engagement between humans and computers. They are becoming extensions of us, enhancing our bodies and extending our minds.
The future is upon us. And you thought choosing the navigation structure for your website was tough.