Multitouch

On the same day as the iPhone 5 announcement, Apple also launched new iPods. I found the pricing of the new iPod touch interesting: if Apple is launching new smaller iPads next month, the pricing of the new iPods will have taken iPad pricing into account.

Here’s the current pricing line-up for the iPod and iPad range as of September 12, 2012:

There are two obvious price points left unoccupied.

The new smaller iPad is expected to be popular with younger people for use at home and at school, a similar market definition to the iPod touch.

So, where will the new iPads fit in? Will they have a 64GB variant? As the new iPhone 5 has enough space for an LTE radio, will there be a cellular version of the new iPad?

Although there isn’t a 16GB 5th generation iPod touch model, it is likely that Apple will want a gateway 16GB iPad ‘light’ model.

Although everyone is expecting the low-end iPad ‘light’ to cost as little as $250, that doesn’t seem to fit with the pricing of the iPod touches announced a couple of weeks ago.

Given that, here’s my guess as to where the new iPads might fit:

Although $500 seems high for a 32GB iPad light with LTE, it is unlikely that Apple will want to sell it for much less than a 16GB Wi-Fi New iPad.

Maybe the Wi-Fi iPad light will cost $300, but $250 seems unlikely. If it were $250, the 16GB iPad 2 would be priced at $50 more than a 32GB iPad light, and the 16GB New iPad would be $50 more than the 64GB iPad light.

It’s not as easy as it first seems to fit the new smaller iPads into Apple’s price list.
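To make the arithmetic concrete, here’s a rough sketch in Python of the constraints I’m juggling. The iPod touch and new iPad prices are Apple’s real ones; every ‘iPad light’ number is a hypothetical guess of mine:

```python
# Hypothetical price points for a Wi-Fi 'iPad light' (guesses only),
# alongside Apple's actual September 2012 Wi-Fi prices.
ipod_touch = {32: 299, 64: 399}           # 5th-gen iPod touch
ipad_light = {16: 300, 32: 400, 64: 500}  # my guesses, not Apple's
new_ipad   = {16: 499, 32: 599, 64: 699}  # 3rd-gen iPad

def monotonic(prices):
    """More storage must never cost less within one product family."""
    caps = sorted(prices)
    return all(prices[a] <= prices[b] for a, b in zip(caps, caps[1:]))

for name, family in [("iPod touch", ipod_touch),
                     ("iPad light", ipad_light),
                     ("new iPad", new_ipad)]:
    assert monotonic(family), f"{name} prices are inverted"

# The awkward cross-family constraint: each iPad light capacity should
# sit at or above the same-capacity iPod touch, and below the
# same-capacity new iPad.
for cap in ipad_light:
    if cap in ipod_touch:
        assert ipod_touch[cap] <= ipad_light[cap]
    assert ipad_light[cap] < new_ipad[cap]

print("These guesses satisfy both constraints.")
```

A $250 entry model wouldn’t violate these two checks on its own, but as noted above it would leave the 16GB iPad 2 only $50 above a 32GB iPad light, which is exactly the kind of oddity that makes the puzzle hard.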

What do you think?

Over the last few days there have been a couple of pieces of evidence that point to Apple launching a new version of the MacPro very soon – in time for their Worldwide Developers Conference next week.

What does this mean for Final Cut Pro X users, and users of other post-production software?

Many in the industry have accused Apple of giving up on professionals in order to go after the consumer dollar. The basis of this accusation is the fact that the MacPro hasn’t been updated in almost two years, and that Final Cut Pro X launched without many features found in Final Cut Pro 7 and seemed to be designed for novice consumers.

My guesses as to why Final Cut Pro X was launched the way it was are for another time. My question is: What will it mean if Apple announces a new MacPro next week?

Part of the art of writing patents is to protect concepts that might be used in future products without delineating them too clearly.

Case in point: Apple was awarded a patent yesterday: ‘Gestures for controlling, manipulating, and editing of media files using touch sensitive devices’. Here’s the abstract:

Embodiments of the invention are directed to a system, method, and software for implementing gestures with touch sensitive devices (such as a touch sensitive display) for managing and editing media files on a computing device or system. Specifically, gestural inputs of a human hand over a touch/proximity sensitive device can be used to control, edit, and manipulate files, such as media files including without limitation graphical files, photo files and video files.

Seems mainly about Apple getting a patent for gestures used to edit video on multi-touch devices. But I think the interesting phrase there is ‘proximity sensitive device’. That means we’ll be able to edit without touching a screen (or wearing special gloves).

Hidden in the middle of the patent are the following two sentences:

Finally, using a multi-touch display that is capable of proximity detection … gestures of a finger can also be used to invoke hovering action that can be the equivalent of hovering a mouse icon over an image object.

Ironically, one of the arguments against making Flash available on multi-touch devices is the fact that the majority of Flash-implemented UI elements use the position of the mouse pointer without the mouse button being clicked as useful feedback to the user – a concept not possible using multi-touch. If devices included advanced proximity detection technology, then ‘mouseover’-equivalent events could be sent to Flash UIs – so they’d work the way they have since Shockwave and .fgd files.
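As a thought experiment, here’s a minimal sketch in Python of how per-fingertip proximity readings could be turned into the mouse-style events a Flash UI already understands. Every name here is invented by me; none of this comes from the patent:

```python
HOVER_RANGE_MM = 20.0  # assumed height at which the sensor registers a hover

class FingerTracker:
    """Tracks one fingertip and emits mouse-style events on state changes."""

    def __init__(self, dispatch):
        self.dispatch = dispatch  # callback taking an event name
        self.state = "away"       # away | hovering | touching

    def update(self, height_mm):
        """height_mm: fingertip height above the surface; 0 means contact."""
        if height_mm <= 0.0:
            new = "touching"
        elif height_mm <= HOVER_RANGE_MM:
            new = "hovering"
        else:
            new = "away"
        if new == self.state:
            return
        if self.state == "away":      # entering the hover zone = rollover
            self.dispatch("mouseover")
        if new == "touching":         # contact = button press
            self.dispatch("mousedown")
        if self.state == "touching":  # lift-off = button release
            self.dispatch("mouseup")
        if new == "away":             # leaving the hover zone = rollout
            self.dispatch("mouseout")
        self.state = new

# A finger approaches, touches, lifts slightly, then withdraws:
events = []
finger = FingerTracker(events.append)
for height in (30.0, 12.0, 0.0, 8.0, 40.0):
    finger.update(height)
print(events)  # ['mouseover', 'mousedown', 'mouseup', 'mouseout']
```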

Although granted yesterday, the patent was applied for in June 2007. In August 2007, I wrote about gestural edits that required the UI to be able to detect fingertip position without the fingertip touching the screen.

I also wrote about Apple being granted a patent for using a camera mounted to a portable device to detect hand movement in three dimensions.

The latest Mac rumour is that Apple will announce a multitouch trackpad for desktop Macs.

For this new device to be useful, Apple needs to define how multitouch works when you don’t look at what you’re touching but still need to be accurate. At the moment, you can use MacBook touch pad gestures for a variety of commands (previous page, next page, scaling), but these gestures don’t require accurate fingertip positioning.

In order for us not to look at the ‘magical’ touchpad we’re using with our Macs, we need to know where our touches would land in the user interface of the current application if we touched the pad in a specific position. That means we can look at the monitors we already have, but still get the benefits of multitouch manipulation.
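A back-of-envelope sketch of what that implies (my assumption, not anything Apple has published): the trackpad would need to be mapped absolutely onto the screen, the way a graphics tablet is, rather than relatively like a mouse:

```python
# Hypothetical dimensions; the mapping, not the numbers, is the point.
SCREEN_W, SCREEN_H = 2560, 1440    # display resolution in pixels
PAD_W_MM, PAD_H_MM = 160.0, 110.0  # trackpad size in millimetres

def pad_to_screen(x_mm, y_mm):
    """Map an absolute trackpad position to an absolute screen pixel."""
    px = round(x_mm / PAD_W_MM * (SCREEN_W - 1))
    py = round(y_mm / PAD_H_MM * (SCREEN_H - 1))
    return px, py

# A fingertip a quarter of the way across and halfway down the pad
# always lands at the same screen position, wherever the pointer was.
print(pad_to_screen(40.0, 55.0))  # (640, 720)
```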

In August of 2007, Apple patented a multitouch interface for a portable computer that uses a camera to detect where a person’s hands are before they touch the trackpad or keyboard.

Illustration from Apple's 2007 Multitouch patent featuring a camera detecting where a user's hands are when not touching the trackpad.

Now that we have a device for detecting where our fingertips are, Apple need to update the UI guidelines to allow for multiple cursors (one for each fingertip) and let fingers not touching the trackpad still send a ‘hover’ message to user interface items.

For example, they could use filled and unfilled circles to show where fingertips are. Unfilled to show where fingertips are hovering over the trackpad, filled to show contact:

A screenshot from Final Cut showing a fingertip touching one edit and another hovering over a different edit.

In this Final Cut example, one fingertip is touching an edit, another is hovering over a different edit. To select more than one edit, editors hold down the option key and click additional edits. In a multitouch UI, the editor could hold down a fingertip on the first edit and tap the other edits to extend the selection:
Final Cut screenshot showing four edits selected

The hovering fingertip circles could also show the context of what would happen if the user touched. Here’s an example from Avid:
Mockup of multitouch UI extensions to an Avid screenshot.

Here the editor has their left hand over the multitouch trackpad. The index finger is touching, so its red circle is filled. As we are in trim mode and the index finger is over a roller, its cursor is the B-side roller. The other fingers are almost touching; they are shown as unfilled circles with faint cursors that are correct for where they are on the screen: the middle and ring fingers have arrow cursors, and if the little (pinky) finger touched, it would be trimming the A-side roller.
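Here’s one way the per-finger state in these mock-ups might be modelled. This is a sketch only; the structure and names are my guesses, not anything from Apple’s guidelines:

```python
from dataclasses import dataclass

@dataclass
class FingerCursor:
    x: int
    y: int
    touching: bool  # filled circle if True, unfilled if False

    def glyph(self, item_under):
        """Pick a context-appropriate cursor, as a mouse pointer would."""
        if item_under == "a_side_roller":
            return "trim A-side"
        if item_under == "b_side_roller":
            return "trim B-side"
        return "arrow"

def render(fingers, hit_test):
    """Draw one circle-plus-cursor pair per tracked fingertip."""
    for f in fingers:
        style = "filled" if f.touching else "unfilled"
        cursor = f.glyph(hit_test(f.x, f.y))
        print(f"{style} circle at ({f.x}, {f.y}) with '{cursor}' cursor")

# The Avid example above: index finger touching the B-side roller,
# three more fingers hovering over other parts of the timeline.
def hit_test(x, y):
    return "b_side_roller" if (x, y) == (400, 300) else "timeline"

render([FingerCursor(400, 300, True),
        FingerCursor(430, 310, False),
        FingerCursor(460, 315, False),
        FingerCursor(500, 330, False)], hit_test)
```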

Once you can directly manipulate the UI using multiple touch points, you’ll be able to get rid of even short-lived modes. I wrote about gestural edits back in 2007.

Ironically, once Apple does provide multitouch UI extensions to OS X, then the concepts of hovering, ‘mouseenter’ and ‘mouseleave’ can be added to Flash-based UIs for those using multitouch devices. Oh well!

The product manager for Windows 7 recently demoed its multitouch features. The video is over at The Inquirer.

The most positive thing about having a quickly animated globe was that the demo showed you can select a small part of a large database of information very quickly. I’m not sure what the value of rotating that view was.

Corel added some large icons and a single twist feature in a photo album. The software didn’t seem to respond to any kind of gestures – the touch was simulating a mouse only. The page turn looked unnatural and inhuman. The photo twist could have been achieved using a mouse in conjunction with a modifier key.

There were no cues to show which features responded to multitouch, and the content didn’t seem very directly manipulable.

It would have been much more interesting if the demo had shown that you can use both hands to grab lots of pictures at a time and drag them to your album. Why drag one by one if you have a multi-touch screen? It is likely that Corel doesn’t have the engineering skills or resources to make this happen yet.

The most telling aspect was that this product manager sounded like he was walking on a tightrope – staying very close to the demo line and worried that at any moment he would fall off. To switch metaphors, it sounded as if he was worried that a kid would pipe up that the Emperor has no clothes.

This sort of demo makes it seem that Microsoft would rather some other software company showed them the way, so they can implement a version that is good enough for the majority. The history of computing so far suggests it is better to let others innovate for the 20% of thought leaders, then implement for the other 80%.

After seeing that video, you might get the idea that Microsoft have no chance, and they need to wait for Apple to come along and show them how it’s done so they can rip it off and sell it in Windows 7.X. It turns out that Microsoft have been doing interesting research on the subject and have been coming up with some useful principles.

Here is a better video featuring Microsoft Surface’s manager talking at a Microsoft developer event back in March about how to design software for multitouch and Natural User Interfaces:

He explains why Office doesn’t run multi-touch, why tapping the screen isn’t the same as touching the screen, how form factor is a big problem, the danger of ‘ghost contacts’, and how Wacom came up with a key ergonomic improvement. Fun fact: Microsoft’s Surface product can detect 52 different touches at the same time. Amongst the highlights, he outlines five design principles and contrasts them with older systems:

You use Command Line Interfaces to manipulate text.
They are based on recall; you must direct them; there are many paths through the system to the many very specific commands you might want to use; your keyboard commands are words that perform operations on disconnected data; they are static.

You use Graphical User Interfaces to manipulate graphics.
They are based on recognition; you can explore them; there are fewer routes you need to take and different ways of getting the system to do what you want; you control a mouse that acts as your agent to manipulate graphics; they are responsive.

You use Natural User Interfaces to manipulate objects.
They should be based on intuition; the correct tools are made available for each context; there are fewer methods to quickly achieve what you want to get done; you manipulate directly; they are evocative.

Check out the video; it’ll give you a much better idea of what multitouch away from iPhones might be like.

In which I consider whether platform defining software is more powerful than the inertia of a complex ecology of developers, software, hardware, support and marketplace.

The iPod system is an ecology that all competitors have found impossible to replicate and compete with. Better hardware features on MP3 players haven’t been enough, nor have different models for buying music, or involving social networks. These factors and more apply to the iPhone. As well as the iPod factors, there are also the ease of application development and a market for distributing applications.

The fact that the iPhone can be used to make and receive phone calls is just a way for Apple to make sure you always have their hand-held computer with you.

The inertia that competitors will have to fight against will be the comfort that people have with the user interface, the integration with their PCs and Macs, and the specific functionality of the apps that they’ve downloaded.

Over on geek.com, Christian Zibreg says:

“Imagine Apple losing the multi-touch patent infringement – the whole iPhone empire would be in serious jeopardized.”

It would mean problems PR-wise, and give competitors courage, but I don’t think the whole empire would be in jeopardy. I think that others will still find it difficult to create alternative ‘ecologies’ that match iPhone.

Apple only have to worry when middle-class conversations go something like this: “I hear that someone created an Android app in their spare time that made a million dollars in a few months!” “Wow, I like the idea of that! I need to come up with an app too. I could find someone to help me, upload it, and wait for the money to come rolling in!”

The only thing that Apple need to worry about is a new platform-saving app appearing on other phones as well as the iPhone. The Apple ][ had VisiCalc, MS-DOS had 1-2-3, Macintosh had PageMaker, Windows has Office and Exchange. This was the original definition of ‘killer app’. What will the next killer app be for hand-held computing?

What is inherent about BlackBerry, Android, WebOS or Windows Mobile that will make the killer app start on one of these platforms first? If their owners change these systems to attract that killer app first, Apple might get the competition we are all hoping for.

In which I remind you of Apple’s concept videos from the 80s and suggest it is time for a new one.

In 1987 and 1988 Apple were still facing an uphill battle with businesses when it came to convincing them that graphical user interfaces were better than MS-DOS command-line interfaces. Part of their campaign to show that the Mac way of doing things was the start of the future of computing was to create speculative videos of how computers might evolve in ensuing years.

The Knowledge Navigator video was set in the far-off year of 2010. In 1987 John Sculley suggested that if Apple defined an ideal future for the Mac, it would be more likely for that future to happen. Commentators have theorised that Moore’s Law, the prediction for the rate of improvement in the amount of computing power available at a given price, has galvanised technologists to do all they can to do better than predicted.

The year isn’t stated in the video, but the figures presented only run up to 2009, so I’m guessing this is set in 2010.

It is interesting to see how close we are to this kind of interaction with our technology. In 2003 Jon Udell revisited this video and commented:

Presence, attention management, and multimodal communication are woven into the piece in ways that we can clearly imagine if not yet achieve. “Contact Jill,” says Prof. Bradford at one point. Moments later the computer announces that Jill is available, and brings her onscreen. While they collaboratively create some data visualizations, other calls are held in the background and then announced when the call ends. I feel as if we ought to be further down this road than we are. A universal canvas on which we can blend data from different sources is going to require clever data preparation and serious transformation magic.

Last week Stephen Wolfram announced that his next project is an online system that can take your natural language questions and compute answers for you. That reminded me of Apple’s Knowledge Navigator. I imagine it will be able to answer questions like:

“Is there a link between the size of the Sahara and deforestation on the Amazon rainforest?” “What if we bring down the logging rate to 100,000 acres a year?”

It’ll be a while until we have foldable screens, but it seems that if WolframAlpha can be made to work, we might be closer to the Knowledge Navigator, or what computers should be doing for us anyway.

In 1988 Apple made another video, one that is less famous, but much more accurate in its predictions. Which is another way of saying, if we were to make a video today about 2020, this is what we’d be predicting right now.

An OK-quality video can be found at http://www.mprove.de/uni/asi/futureshock.html

…or you can stay here and watch it encoded for YouTube:

You’ll see that some of the ideas are still being speculated about today.

Microsoft especially likes the idea of real objects interacting with technology (as used in their Surface product). They have a video set 10 years in the future. It starts off with some impossible-to-implement stuff in a classroom, but continues with some good ideas:

(About that classroom: it’s all very well having augmented reality ideas (overlaying graphics onto the real world), but they can only work when there’s an audience of one – the display needs to take account of the position of the viewer’s eyes to line up the graphics in the right place. The kind of classroom telepresence shown at the start of the video would only work for one kid in each classroom at a time; for everyone else, the display would look odd and distorted. For more on this, see an older blog post.)
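To see why an overlay can only be right for one pair of eyes, here’s a flat, one-dimensional sketch of the geometry (all numbers made up):

```python
def overlay_x(eye_x, eye_z, obj_x, obj_z):
    """X position where the eye-to-object line crosses the screen (z = 0)."""
    t = eye_z / (eye_z - obj_z)  # parameter value at which z reaches 0
    return eye_x + t * (obj_x - eye_x)

# Two viewers half a metre apart, 0.6 m in front of the screen, looking
# at a virtual object meant to appear 2 m behind it:
print(overlay_x(-0.25, 0.6, 0.0, -2.0))  # about -0.19 m for the left viewer
print(overlay_x(+0.25, 0.6, 0.0, -2.0))  # about +0.19 m for the right viewer
# The object must be drawn nearly 40 cm apart for the two viewers,
# so a single display can't be correct for both at once.
```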

A much more realistic and specific Microsoft video was made in 2004, and set in 2010. You’ll see their estimate of what we’ll be able to do in 20 months’ time:

On the subject of speculative videos, maybe we should start thinking of one for the creative industries. If collaboration is what makes TV and movies so satisfying, how will technology support media production in 2020? Or is the ultimate aim for 3D movies to spring out of people’s heads fully formed?
