Multitouch

For the last thirty years people have been trying to come up with clever ways to make TV interactive. In the early 80s, we had Teletext services. We later had phone votes. These days digital TV users know that they can get more content – such as games, documentaries and commentary tracks – by ‘pressing the red button’, whichever method they are using to watch TV.

On the other hand, more devices can be modified to act as remote controls for TVs. Eventually all phones will be able to interact with nearby TVs. They’ll start by being able to switch channels and record to a DVR. Soon TVs (and computers) will accept text and multi-touch input from phones and remotes.

Maybe it is time for those designing the future of TV to take into account the essential nature of watching content on TV. What makes it different from going to the movies? Or watching DVDs and downloaded movies on computers and phones? The fact that you watch TV with one or more people that you usually know well. Phones and computers are usually used by one person at a time (unless the computer is being used as a TV replacement). When you are at the cinema, you may be with hundreds of people, but you know no-one but those you came with and you don’t spend time during the movie interacting with anyone (unless your primary reason isn’t to watch the film…).

Given that before the invention of the remote, anyone who walked over to the TV had control, maybe it’s time to plan for TV broadcasts where each person watching can control and interact with TV content. Instead of using a child as a proxy remote – as I once was – the person who usually holds the remote (still typically the man) should be encouraged to share with others.

The future could be every individual consuming media on their own terms – on their own. But it’s the interaction between those watching TV that makes it special. If TV improves and changes those interactions, it will keep groups of people together for a long time to come.

Engadget pointed to a video on Vimeo that shows ‘the first major step in computer interface since 1984’:

They’re referring to the introduction of the Mac user interface almost 25 years ago. That UI was a revision of the Lisa user interface for home users. The elements that made it work – the mouse, icons and overlapping windows – had been around for many years before 1984.

The stuff in this video is the equivalent of the generic concept of a pointing device. A 3-D mouse.

There is no next-generation representational abstraction, i.e. a replacement for icons. The 2.5D interface (the 0.5D being the layers of windows on screen) has simply become a 3D interface.

There’s no point having a multi-touch 3D mouse unless you have better ideas for what you’ll be manipulating with it. They even had to fake automatic keying of a truck and a man from a couple of shots that were then combined in a third. Anyone who has done that kind of keying and composition knows that you need to do a lot more than point at what you want to get things done. Just because you are compositing some 2D footage in a shallow-depth 3D-space doesn’t make the job of compositing that much more intuitive.

They didn’t even use eye-parallax – if you need to collaborate with others, you still need cursors. How twentieth-century of them…

In a recent article at InfoWorld, Neil McAllister reports that Microsoft have released a software development kit that shows how future applications can use a webcam to replace mouse or pen input. It works by recognising an object in your hand and tracking it as you move it in front of the screen.
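The Touchless SDK itself is a .NET library, but the underlying idea is straightforward colour tracking from webcam frames. Here’s a minimal sketch of that idea in Python using OpenCV – not Microsoft’s code, and the colour range is an assumption about whatever object you happen to hold up:

```python
# Sketch of webcam-based object tracking (the idea behind 'Touchless'):
# find a coloured object in each frame and use its centre as a cursor.
import cv2
import numpy as np

LOWER_HSV = np.array([40, 80, 80])    # assumed: a green object in hand
UPPER_HSV = np.array([80, 255, 255])

cap = cv2.VideoCapture(0)              # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_HSV, UPPER_HSV)
    # Largest blob of the tracked colour becomes the 'pointer'
    # (return signature below is OpenCV 4.x).
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        blob = max(contours, key=cv2.contourArea)
        m = cv2.moments(blob)
        if m["m00"] > 0:
            x, y = int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])
            cv2.circle(frame, (x, y), 10, (0, 0, 255), 2)
    cv2.imshow("tracker", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```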

There are upsides and downsides to not having a surface to touch when interacting with a user interface. The software will have great problems determining the equivalent of pressure. Mice have two levels of pressure: button pressed or button not pressed. Pen-based devices can discriminate between many levels of pressure, and finger-based screens like the iPhone’s can at least approximate how hard you are pressing from how much of your fingertip is in contact. That gives more options when it comes to interpreting what you want to achieve.

On the other hand, the advantage of an ‘air-based’ input technique is that you can deal with different scales of input. This is simple with a mouse: moving it 3 mm can move the cursor many pixels, and if you run out of mouse mat, all you need do is pick up the mouse and move it to the middle of the mat again – as far as the computer is concerned, you haven’t moved the mouse at all. With pen- and finger-based interfaces, your gestures are always at a ratio of 1 to 1: you need enough space to move your pen or finger to match your screen size.
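A rough Python sketch of the two mappings – relative (mouse) versus absolute (pen or finger) – with the gain constant an arbitrary assumption rather than any real driver’s value:

```python
# Relative (mouse-style) vs. absolute (pen/finger-style) pointer mapping.
MOUSE_GAIN = 8.0  # assumed: 1 mm of mouse travel moves the cursor 8 px

def mouse_move(cursor_px: float, delta_mm: float) -> float:
    """Relative mapping: small hand movements are scaled up, and lifting
    the mouse (delta_mm == 0 while repositioning) leaves the cursor put."""
    return cursor_px + delta_mm * MOUSE_GAIN

def touch_move(finger_px: float) -> float:
    """Absolute 1:1 mapping: the cursor is wherever the finger or pen is,
    so covering the whole screen needs a whole-screen-sized gesture."""
    return finger_px

print(mouse_move(100.0, 3.0))   # 3 mm of mouse travel -> 24 px of cursor travel
print(touch_move(124.0))        # the finger has to actually be at pixel 124
```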

A limitation of Microsoft’s ‘Touchless’ software is that it doesn’t track the operator’s eye. That means it must position a cursor showing you where your finger is. The advantage of eye tracking is shown here:

To prevent arm ache, moving objects across multiple large screens is a matter of moving your fingers closer to your eye. For more precise control, you can move your fingers closer to the screen. In the picture, the index fingers of the user’s hands are the same distance apart in each case, but define very different-sized areas on the screens shown. This fixes the problem of multi-touch scale.
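In effect this is a similar-triangles relationship between the eye, the fingertips and the screen. A minimal sketch, assuming the eye sits at the origin; the distances are made up for illustration:

```python
# The same fingertip separation subtends a larger span on the screen
# the closer the hands are to the eye (similar triangles).
def on_screen_span(finger_separation_cm: float,
                   eye_to_fingers_cm: float,
                   eye_to_screen_cm: float) -> float:
    """Project the gap between two fingertips onto the screen plane."""
    return finger_separation_cm * (eye_to_screen_cm / eye_to_fingers_cm)

# Same 10 cm finger gap, very different on-screen areas:
print(on_screen_span(10, 30, 300))   # hands near the eye    -> 100 cm on screen
print(on_screen_span(10, 150, 300))  # hands near the screen -> 20 cm on screen
```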

To fix multi-touch pressure, there will have to be some sort of gesture that defines where in 3D space the virtual screen is. When you need to make big gestures like those in the upper picture above, you’ll define the screen as being close to your eye. When performing precise operations, you’ll need to push the virtual screen further away. The ‘pressure’ will be calculated from the position of your fingers relative to the virtual screen.
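Reading that last idea literally, the ‘pressure’ could be how far a fingertip has been pushed past the user-defined virtual screen plane. A minimal sketch, with the plane depth and travel range as assumed calibration values:

```python
# Derive a pseudo-pressure from finger depth relative to a virtual plane.
def virtual_pressure(finger_depth_cm: float,
                     plane_depth_cm: float,
                     max_push_cm: float = 5.0) -> float:
    """0.0 = finger hovering in front of the virtual plane,
    1.0 = finger pushed max_push_cm past it (full pressure)."""
    push = finger_depth_cm - plane_depth_cm
    return min(max(push / max_push_cm, 0.0), 1.0)

print(virtual_pressure(48.0, 50.0))  # hovering in front of the plane -> 0.0
print(virtual_pressure(52.5, 50.0))  # pushed 2.5 cm through it       -> 0.5
```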

The pressure problem will start to go away when we modify our user interfaces so that we are manipulating ideas more like clay than sheets of paper.

Here is a clip showing how realistic 3D rendering can be when the computer knows where your eyes are:

The catch is that the 3D effect doesn’t work for anyone else looking at the same screen. A 3D monitor will be needed for each viewer.

In multi-touch news, Apple has just been granted a patent for devices that use an interesting array of sensors:

The touch sensing device also includes a plurality of independent and spatially distinct mutual capacitive sensing nodes set up in a non two dimensional array.

At first reading, the invention seems to be about varying the number of sources of capacitance compared with various numbers of sensors. I think the interesting bit is the mention of a “non two dimensional array.” If two dimensions are out, there are few other options. Zero- and one-dimensional arrays are unlikely. If Apple planned to make arrays with more than three dimensions, they would need a few more patents to cover the technology.

So Apple patents a three dimensional touch interface device… That’s more interesting. As I’ve posted before, that means if the interface device is away from the display device (for reasons of ergonomics or scale – ‘Minority Report’-style), you will be able to get feedback on where your fingers are hovering above the device you are about to touch. Take a look at my post on a user-interface convention using this feature on current applications: ‘not quite direct manipulation.’

On the subject of what multi-touch interfaces will be manipulating in the future…

A network of people, documents or ideas

I have recently been working with a company that combines databases to build ‘social networks’ – to model the way groups of people in society interact. This would be useful within organisations and projects too. If the connections within a project could be generated automatically, they’d be all the more useful…
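As a toy illustration of generating those connections automatically – not the company’s actual method – here’s a sketch that links two people whenever they have worked on the same document; all names and files are invented:

```python
# Build a project 'social network' from document edit history.
from collections import defaultdict
from itertools import combinations

edits = {
    "spec.doc":   ["Alice", "Bob"],
    "budget.xls": ["Bob", "Carol"],
    "pitch.ppt":  ["Alice", "Carol", "Dan"],
}

graph = defaultdict(set)
for doc, people in edits.items():
    for a, b in combinations(people, 2):
        # Connect each pair of collaborators, labelled by the shared document.
        graph[a].add((b, doc))
        graph[b].add((a, doc))

for person, links in sorted(graph.items()):
    print(person, "->", sorted(links))
```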

I guess there’ll be some sort of three-dimensional concept browser that will represent an individual’s model of their understanding of, and interaction with, a project. Each member of the project would see a different view of it. This is the sort of thing that will gain from direct (multi-touch-supported) manipulation.

…where is the user interface design for multi-touch systems?

My friend Jean sent me a link to a blog on the Microsoft Surface concept. Surface combines the power of multi-touch with the table-based form factor of early-80s Space Invaders arcade games. A couple of cameras monitor where the glass top is touched, and that information is passed to software running on Windows.

Instead of talking about artists collaborating, how about thinking about how the majority of people will benefit from multi-touch interaction? Most people read documents, write documents, calculate figures, look up information and make presentations. How will these activities be changed by multi-touch?
