On the same day as the iPhone 5 announcement, Apple also launched new iPods. I found the pricing of the new iPod Touch interesting. If Apple is launching new smaller iPads next month, the pricing for the new iPods will have taken iPad pricing into account.
Here’s the current pricing line up for the iPod and iPad range as of September 12, 2012:
There are two obvious price points left unoccupied.
The new smaller iPad is expected to be popular with younger people for use at home and at school, a similar market definition to the iPod touch.
So, where will the new iPads fit in? Will they have a 64GB variant? As the new iPhone 5 has enough space for a LTE radio, will there be a cellular version of the new iPad?
Although there isn’t a 16GB 5th generation iPod touch model, it is likely that Apple will want a gateway 16GB iPad ‘light’ model.
Although everyone is expecting the low-end iPad ‘light’ to cost as little as $250, that doesn’t seem to fit with the pricing of the iPod touches announced a couple of weeks ago.
Given that, here’s my guess as to where the new iPads might fit:
Although $500 seems high for a 32GB iPad light with LTE, it is unlikely that Apple will want to sell it for much less than a 16GB Wi-Fi New iPad.
Maybe the Wi-Fi iPad light will cost $300, but $250 seems unlikely. If it were $250, the 16GB iPad 2 would be priced at only $50 more than a 32GB iPad light, and the 16GB New iPad only $50 more than the 64GB iPad light.
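The arithmetic behind that clash can be laid out explicitly. A minimal sketch, assuming the rumoured $249 entry point and a $100 step per storage doubling for the hypothetical ‘iPad light’ line (those light prices are my assumption; the iPad 2 and New iPad prices are Apple’s September 2012 list prices):

```python
# Hypothetical 'iPad light' ladder: assumed $249 entry, $100 per doubling.
ipad_light = {16: 249, 32: 349, 64: 449}

# Apple's actual September 2012 prices for the existing models:
existing = {"iPad 2 16GB": 399, "New iPad 16GB Wi-Fi": 499}

# The awkward $50 overlaps described above:
gap_vs_ipad2 = existing["iPad 2 16GB"] - ipad_light[32]
gap_vs_new_ipad = existing["New iPad 16GB Wi-Fi"] - ipad_light[64]
print(gap_vs_ipad2, gap_vs_new_ipad)  # both gaps come out at 50
```

With only $50 separating an old 16GB model from a new 32GB light, the existing models would look very poor value, which is why a $249 entry point seems unlikely.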
It’s not as easy as it first seems to fit the new smaller iPads into Apple’s price list.
Over the last few days there have been a couple of pieces of evidence that point to Apple launching a new version of the MacPro very soon – in time for their Worldwide Developers Conference next week.
What does this mean for Final Cut Pro X users, and users of other post-production software?
Many in the industry have accused Apple of giving up on professionals in order to go after the consumer dollar. The basis of this accusation is the fact that the MacPro hasn’t been updated in almost two years, and that Final Cut Pro X launched without many features found in Final Cut Pro 7 and seemed to be designed for novice consumers.
My guesses as to why Final Cut Pro X was launched the way it was are for another time. My question is: What will it mean if Apple announces a new MacPro next week?
Embodiments of the invention are directed to a system, method, and software for implementing gestures with touch sensitive devices (such as a touch sensitive display) for managing and editing media files on a computing device or system. Specifically, gestural inputs of a human hand over a touch/proximity sensitive device can be used to control, edit, and manipulate files, such as media files including without limitation graphical files, photo files and video files.
Seems mainly about Apple getting a patent for gestures used to edit video on multi-touch devices. But I think the interesting phrase there is proximity sensitive device. That means we’ll be able to edit without touching a screen (or wearing special gloves).
Hidden in the middle of the patent are the following two sentences:
Finally, using a multi-touch display that is capable of proximity detection … gestures of a finger can also be used to invoke hovering action that can be the equivalent of hovering a mouse icon over an image object.
Ironically, one of the arguments against making Flash available on multi-touch devices is the fact that the majority of Flash-implemented UI elements use the position of the mouse pointer – without the mouse button being clicked – as useful feedback to the user, a concept not possible using multi-touch. If devices included advanced proximity detection technology, then ‘mouseover’-equivalent events could be sent to Flash UIs – so they’d work the way they have since Shockwave and .fgd files.
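As a toy sketch of how proximity readings could feed ‘mouseover’-style events to a UI, here is one way a dispatcher might work (the class, method and event names are my illustration, not anything from the patent or from Flash):

```python
# Toy sketch: turning per-frame proximity readings into hover enter/leave
# events, the way a mouse-driven UI gets 'mouseover'/'mouseout'.
# All names here are illustrative assumptions, not a real API.
class HoverDispatcher:
    def __init__(self):
        self.hovered = None  # the UI element currently hovered, if any

    def update(self, element_under_finger):
        """Called each frame with the element the hovering (not touching)
        finger is over, or None. Returns events the UI should dispatch."""
        events = []
        if element_under_finger != self.hovered:
            if self.hovered is not None:
                events.append(("mouseout", self.hovered))
            if element_under_finger is not None:
                events.append(("mouseover", element_under_finger))
            self.hovered = element_under_finger
        return events

d = HoverDispatcher()
print(d.update("button1"))  # [('mouseover', 'button1')]
print(d.update("button1"))  # [] – still hovering, nothing new to send
print(d.update(None))       # [('mouseout', 'button1')]
```

The point is that once hardware can report a finger’s position before contact, this mouse-era event vocabulary maps across directly.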
Although granted yesterday, the patent was applied for in June 2007. In August 2007, I wrote about gestural edits that required the UI being able to detect fingertip position while not touching the screen.
For this new device to be useful, Apple needs to define how multitouch works when you don’t look at what you’re touching, but need to be accurate. At the moment, you can use MacBook trackpad gestures for a variety of commands (previous page, next page, scaling), but these gestures don’t require accuracy of fingertip positioning.
In order for us not to look at the ‘magical’ touchpad we’re using with our Macs, we need to know where our touches would land in the user interface of the current application if we touched the pad in a specific position. That means we can look at the monitors we already have, but still get the benefits of multitouch manipulation.
In August of 2007, Apple patented a multitouch interface for a portable computer that uses a camera to detect where a person’s hands are before they touch the trackpad or keyboard.
Now that we have a device for detecting where our fingertips are, Apple need to update the UI guidelines to allow for multiple cursors (one for each fingertip) and let fingers not touching the trackpad still send a ‘hover’ message to user interface items.
For example, they could use filled and unfilled circles to show where fingertips are. Unfilled to show where fingertips are hovering over the trackpad, filled to show contact:
In this Final Cut example, one fingertip is touching an edit, another is hovering over a different edit. To select more than one edit, editors hold down the option key and click additional edits. In a multitouch UI, the editor could hold down a fingertip on the first edit and tap the other edits to extend the selection:
The hovering fingertip circles could also show the context of what would happen if the user touched. Here’s an example from Avid:
Here the editor has their left hand over the multitouch trackpad. The index finger is touching, so its red circle is filled. As we are in trim mode the current cursor for the index finger is the B-side roller because it is touching a roller. The other fingers are almost touching. They are shown with unfilled circles with faint cursors that are correct based on where they are on the screen: the middle and ring fingers have arrow cursors, if the little (pinky) finger touches, then it would be trimming the A-side roller.
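The filled/unfilled convention described above reduces to a simple rendering rule per fingertip. A sketch, where the data structure, the cursor lookup and the element names are all my illustration rather than anything from Apple or Avid:

```python
# Sketch of the filled/unfilled fingertip-cursor rule described above.
# Names, the cursor table and the hit-test are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Finger:
    x: float
    y: float
    touching: bool  # True = in contact with the pad, False = hovering

def cursor_for(finger, element_at):
    """Decide how to draw one fingertip: a filled circle when touching,
    an unfilled circle with a faint context cursor when hovering."""
    context = element_at(finger.x, finger.y)  # e.g. 'roller' or None
    return {
        "circle": "filled" if finger.touching else "unfilled",
        "cursor": {"roller": "trim", None: "arrow"}.get(context, "arrow"),
    }

# One finger touching a trim roller, another hovering over empty timeline:
elements = lambda x, y: "roller" if x < 100 else None
print(cursor_for(Finger(50, 20, True), elements))   # filled circle, trim
print(cursor_for(Finger(300, 20, False), elements)) # unfilled, arrow
```

Each fingertip gets its own context-sensitive cursor, which is exactly the multiple-cursor UI-guideline change argued for above.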
Ironically, once Apple does provide multitouch UI extensions to OS X, then the concepts of hovering, ‘mouseenter’ and ‘mouseleave’ can be added to Flash-based UIs for those using multitouch devices. Oh well!
The product manager for Windows 7 recently demoed its multitouch features. The video is over at The Inquirer.
The most positive thing about the quickly animated globe was that the demo showed you can select a small part of a large database of information very quickly. I’m not sure what the value of rotating that view was.
Corel added some large icons and a single twist feature in a photo album. The software didn’t seem to respond to any kind of gestures – the touch was simulating a mouse only. The page turn looked unnatural and inhuman. The photo twist could have been achieved using a mouse in conjunction with a modifier key.
There were no cues to show what features responded to multitouch, and the content didn’t seem very directly manipulatable.
It would have been much more interesting if the demo had shown that you can use both hands to grab lots of pictures at a time and drag them to your album. Why drag one by one if you have a multi-touch screen? It is likely that Corel doesn’t have the engineering skills or resources to make this happen yet.
The most telling aspect was that this product manager sounded like he was walking on a tightrope – staying very close to the demo line and worried that at any moment he would fall off. To switch metaphors, it sounded as if he was worried that a kid would pipe up that the Emperor has no clothes.
This sort of demo makes it seem that Microsoft would rather some other software company showed them the way, so they can implement a version that is good enough for the majority. The history of computing so far shows that it is better to wait for others to innovate for the 20% of thought leaders, so you can implement for the other 80%.
After seeing that video, you might get the idea that Microsoft have no chance, and they need to wait for Apple to come along and show them how it’s done so they can rip it off and sell it in Windows 7.X. It turns out that Microsoft have been doing interesting research on the subject and have been coming up with some useful principles.
He explains why Office doesn’t run multi-touch, why tapping the screen isn’t the same as touching the screen, how form factor is a big problem, the danger of ‘ghost contacts’, and how Wacom came up with a key ergonomic improvement. Fun fact: Microsoft’s Surface product can detect 52 different touches at the same time. Amongst the highlights, he outlines five design principles and contrasts them with older systems:
You use Command Line Interfaces to manipulate text.
They are based on recall, you must direct them, there are many paths through the system to get to the many very specific commands you might want to use, your keyboard commands are words that perform operations on disconnected data, they are static.
You use Graphical User Interfaces to manipulate graphics.
They are based on recognition, you can explore them, there are fewer routes you need to take and different ways of getting the system to do what you want, you control a mouse that acts as your agent to manipulate graphics, they are responsive.
You use Natural User Interfaces to manipulate objects.
They should be based on intuition, the correct tools are made available for each context, fewer methods to quickly achieve what you want to get done, you directly manipulate, they are evocative.
Check out the video, it’ll give you a much better idea of what multitouch away from iPhones might be like.
In which I consider whether platform defining software is more powerful than the inertia of a complex ecology of developers, software, hardware, support and marketplace.
The iPod system is an ecology that all competitors have found impossible to replicate and compete with. Better hardware features on mp3 players haven’t been enough, nor have different models for buying music, or involving social networks. These factors and more apply to iPhone. As well as the iPod factors, there is also the ease of application development, and a market for distributing applications.
The fact that the iPhone can be used to make and receive phone calls is just a way for Apple to make sure you always have their hand-held computer with you.
The inertia that competitors will have to fight against will be the comfort that people have with the user interface, the integration with their PCs and Macs, and the specific functionality of the apps that they’ve downloaded.
“Imagine Apple losing the multi-touch patent infringement case – the whole iPhone empire would be in serious jeopardy.”
It would mean problems PR-wise, and give competitors courage, but I don’t think the whole empire would be in jeopardy. I think that others will still find it difficult to create alternative ‘ecologies’ that match iPhone.
Apple only have to worry when middle-class conversations go something like this: “I hear that someone created an Android app in their spare time that made a million dollars in a few months!” “Wow, I like the idea of that! I need to come up with an app too. I could find someone to help me, upload it, and wait for the money to come rolling in!”
The only thing that Apple need to worry about is a new platform-defining app appearing on other phones as well as the iPhone. The Apple ][ had VisiCalc, MS-DOS had 1-2-3, Macintosh had PageMaker, Windows has Office and Exchange. This was the original definition of ‘killer app’ – what will the next killer app be for hand-held computing?
What is inherent about BlackBerry, Android, WebOS or Windows Mobile that will make the killer app start on one of these platforms first? If their owners change these systems to attract that killer app first, Apple might get the competition we are all hoping for.
In which I remind you of Apple’s concept videos from the 80s and suggest it is time for a new one.
In 1987 and 1988 Apple were still facing an uphill battle with businesses when it came to convincing them that graphical user interfaces were better than MS-DOS command-line interfaces. Part of their campaign to show that the Mac way of doing things was the start of the future of computing was to create speculative videos of how computers might evolve in ensuing years.
The Knowledge Navigator video was set in the far-off year of 2010. In 1987 John Sculley suggested that if Apple defined an ideal future for the Mac, it would be more likely for that future to happen. Commentators have theorised that Moore’s Law, the prediction for the rate of improvement in the amount of computer power at a given price, has galvanised technologists to do all they can to do better than predicted.
The year isn’t stated in the video, but the figures presented only run up to 2009, so I’m guessing this is set in 2010.
Presence, attention management, and multimodal communication are woven into the piece in ways that we can clearly imagine if not yet achieve. “Contact Jill,” says Prof. Bradford at one point. Moments later the computer announces that Jill is available, and brings her onscreen. While they collaboratively create some data visualizations, other calls are held in the background and then announced when the call ends. I feel as if we ought to be further down this road than we are. A universal canvas on which we can blend data from different sources is going to require clever data preparation and serious transformation magic.
Last week Stephen Wolfram announced that his next project is an online system that can take your natural language questions and compute answers for you. That reminded me of Apple’s Knowledge Navigator. I imagine it will be able to answer questions like:
“Is there a link between the size of the Sahara and deforestation on the Amazon rainforest?” “What if we bring down the logging rate to 100,000 acres a year?”
It’ll be a while until we have foldable screens, but it seems that if WolframAlpha can be made to work, we might be closer to the Knowledge Navigator, or what computers should be doing for us anyway.
In 1988 Apple made another video, one that is less famous, but much more accurate in its predictions. Which is another way of saying, if we were to make a video today about 2020, this is what we’d be predicting right now.
…or you can stay here and watch it encoded for YouTube:
You’ll see that some of the ideas are still being speculated about today.
Microsoft especially likes the idea of real objects interacting with technology (as used in their Surface product). Microsoft has a video set 10 years in the future. It starts off with some impossible-to-implement stuff in a classroom, but continues with some good ideas:
(About that classroom: it’s all very well having augmented reality ideas (overlaying graphics onto the real world), but they can only work for an audience of one – the display needs to take account of the position of the viewer’s eyes to line up the graphics in the right place. The kind of classroom telepresence shown at the start of the video would only work for one kid in each classroom at a time. For everyone else, the display would look odd and distorted. For more on this, see an older blog post.)
A much more realistic and specific Microsoft video was made in 2004, and set in 2010. You’ll see their estimate of what we’ll be able to do in 20 months’ time:
On the subject of speculative videos, maybe we should start thinking of one for the creative industries. If collaboration is what makes TV and movies so satisfying, how will technology support media production in 2020? Or is the ultimate aim for 3D movies to spring out of people’s heads fully formed?
For the last thirty years people have been trying to come up with clever ways to make TV interactive. In the early 80s, we had Teletext services. We later had phone votes. These days digital TV users know that they can get more content – such as games, documentaries and commentary tracks – by ‘pressing the red button’, whichever method they are using to watch TV.
On the other hand, more devices can be modified to act as remote controls for TVs. Eventually all phones will be able to interact with nearby TVs. They’ll start by being able to switch channels and record to a DVR. Soon TVs (and computers) will accept text and multi-touch input from phones and remotes.
Maybe it is time for those designing the future of TV to take into account the essential nature of watching content on TV. What makes it different from going to the movies? Or watching DVDs and downloaded movies on computers and phones? The fact that you watch TV with one or more people that you usually know well. Phones and computers are usually used by one person at a time (unless the computer is being used as a TV replacement). When you are at the cinema, you may be with hundreds of people, but you know no-one but those you came with and you don’t spend time during the movie interacting with anyone (unless your primary reason isn’t to watch the film…).
Given that before the invention of the remote, anyone who walked over to the TV had control, maybe it’s time to plan for TV broadcasts where each person watching can control and interact with TV content. Instead of using children as proxy remotes, as I once was, the person who usually holds the remote (still typically the man) should be encouraged to share with others.
The future could be made of every individual consuming media on their own terms – on their own. But it’s the interaction between those watching TV that makes it special. If TV improves and changes those interactions, it will keep groups of people together for a long time to come.
Engadget pointed to a video on Vimeo that shows ‘the first major step in computer interface since 1984’:
They’re referring to the introduction of the Mac user interface (almost 25 years ago). That UI was a revision of the Lisa user interface for home users. The elements that made this work were the mouse, icons and overlapping windows. They were around for many years before 1984.
The stuff in this video is the equivalent of the generic concept of a pointing device. A 3-D mouse.
There is no next-generation representational abstraction, i.e. a replacement for icons. The 2.5D interface (the 0.5D being the layers of windows on screen) is now a 3D interface.
There’s no point having a multi-touch 3D mouse unless you have better ideas for what you’ll be manipulating with it. They even had to fake automatic keying of a truck and a man from a couple of shots that were then combined in a third. Anyone who has done that kind of keying and composition knows that you need to do a lot more than point at what you want to get things done. Just because you are compositing some 2D footage in a shallow-depth 3D-space doesn’t make the job of compositing that much more intuitive.
They didn’t even use eye-parallax – if you need to collaborate with others, you still need cursors. How twentieth-century of them…
In a recent article at InfoWorld, Neil McAllister reports that Microsoft have released a software development kit that shows how future applications can use a webcam input to replace a mouse or pen input. It works by recognising an object in your hand and tracking it as you move it across the screen.
There are upsides and downsides to not having a surface that you are touching when interacting with a user interface. The software will have great problems determining the equivalent of pressure. Mice have two levels of pressure: button pressed or button not pressed. Pen-based devices can discriminate between many levels of pressure – the iPhone, by contrast, cannot tell how hard you are pressing its screen. Pressure discrimination gives more options when it comes to interpreting what you want to achieve.
Alternatively, the advantage of an ‘air-based’ input technique is that you can deal with different scales of input. This is simple with a mouse: moving 3 mm using a mouse can move a cursor many pixels. If you run out of mouse mat, all you need do is pick up the mouse and move it to the middle of the mat again – as far as the computer is concerned, you haven’t moved the mouse at all. With pen- and finger-based interfaces, your gestures are always at a ratio of 1 to 1: you need enough space to move your pen or finger to match your screen size.
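The difference between the two mappings can be sketched in a few lines – relative (mouse-style) input applies a gain to deltas and ignores repositioning, while absolute (pen/finger-style) input maps the input surface 1:1 onto the screen. The gain figure here is an arbitrary assumption for illustration:

```python
# Sketch of the two pointing models described above.
# The gain of 10 pixels per mm is an arbitrary illustrative assumption.
def relative_move(cursor, delta_mm, gain=10):
    """Mouse-style: only deltas matter, so a 3 mm move can cross many
    pixels, and lifting the mouse (a delta of 0) moves nothing."""
    return cursor + delta_mm * gain

def absolute_position(pad_pos, pad_size, screen_size):
    """Pen/finger-style: the surface maps 1:1 onto the screen, so your
    movement space must match your screen size."""
    return pad_pos * screen_size / pad_size

cursor = relative_move(100, 3)     # 3 mm of mouse travel crosses 30 px
lifted = relative_move(cursor, 0)  # picking the mouse up moves nothing
print(cursor, lifted)              # 130 130
print(absolute_position(50, 100, 1920))  # halfway across pad -> 960.0
```

The ‘pick the mouse up’ trick (clutching) is exactly what 1:1 surfaces lack, and what an air-based technique would need to recreate.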
A limitation of Microsoft’s ‘Touchless’ software is that it doesn’t track the operator’s eye. That means it must position a cursor showing you where your finger is. The advantage of eye tracking is shown here:
To prevent arm ache, moving objects across multiple large screens is a matter of moving your fingers closer to your eye. For more precise control, you can move your fingers closer to the screen. In the picture, the index fingers of the user’s hands are the same distance apart in each case, but define very different-sized areas on the screens shown. This fixes the problem of multi-touch scale.
To fix multi-touch pressure, there will have to be some sort of gesture that defines where in 3D space the virtual screen is. When making big gestures like those in the upper picture above, you’ll need to define the screen as being close to your eye. When performing precise operations, you’ll need to push the virtual screen further away. The ‘pressure’ will be calculated from the position of your fingers relative to the virtual screen.
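The geometry behind both ideas is just similar triangles: with the eye at the origin, a fingertip’s on-screen position scales with the ratio of screen depth to finger depth, and ‘pressure’ can be read from how far the finger has pushed past the virtual screen plane. A sketch, where all the names, units and the depth threshold are my illustrative assumptions:

```python
# Sketch of the eye-relative geometry described above. Eye at the origin,
# looking along +z; distances in metres. All names and the max_depth
# threshold are illustrative assumptions.
def projected(finger_x, finger_z, screen_z):
    """Where a fingertip at depth finger_z lands on a screen at depth
    screen_z (similar triangles): the closer the finger is to the eye,
    the larger the screen area a given hand movement covers."""
    return finger_x * screen_z / finger_z

def pressure(finger_z, plane_z, max_depth=0.1):
    """0.0 when just touching the virtual plane, 1.0 when pushed
    max_depth past it, clamped either side."""
    return min(1.0, max(0.0, (finger_z - plane_z) / max_depth))

# Fingers 0.2 m apart at the same spread, but at different depths:
near = projected(0.1, 0.3, 3.0) - projected(-0.1, 0.3, 3.0)  # near eye
far = projected(0.1, 1.5, 3.0) - projected(-0.1, 1.5, 3.0)   # near screen
print(near, far)  # the nearer pair spans a much larger screen area
print(pressure(0.05, 0.0))  # finger 0.05 m past the plane
```

The same finger spread covers five times the screen area when held near the eye, which is the scale fix; the depth-past-the-plane reading stands in for pressure.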
The pressure problem will start to go away when we modify our user interfaces so that we are manipulating ideas more like clay than sheets of paper.
Here is a clip showing how realistic 3D rendering can be when the computer knows where your eyes are:
The catch is that the 3D effect doesn’t work for anyone else looking at the same screen. A 3D monitor will be needed for each viewer.