Archive

User interface

When Steve Jobs launched the iPhone 4 in June 2010, one of the big new features was a much higher resolution screen. The iPhone 3GS 3.5″ screen displayed 320 by 480 pixels. The new phone displayed 640 by 960 pixels in the same space. The number of pixels displayed per inch increased from 163ppi to 326ppi.

Developers didn’t have to change the layouts of their applications to run on the new phone. Instead of displaying 320 by 480 apps at half the size on the 640 by 960 screen, the new version of iOS used twice the pixels horizontally and vertically to show the same content. Since then developers have designed their iPhone apps to work on 320 point wide screens even though the vast majority of users will see them on 640 pixel wide screens.

Apple marketed this new screen as a ‘Retina display’. Apple later said that the definition of a Retina display is a screen where individual pixels cannot be distinguished at a normal viewing distance. In the case of the new phone, it would have to have a screen resolution of at least 300ppi when used at a distance of 10-12 inches. The combination of figures is summarised as ‘Pixels per Degree’ – the number of pixels that fit within one degree of visual angle at a given viewing distance. 300ppi at 10″ equates to a PPD of 53. The iPhone resolution of 326ppi at 10″ has a PPD of 57.
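
As a rough sketch of the arithmetic (using standard small-angle geometry, not a formula Apple has published), pixels per degree is the pixel density multiplied by the length of screen that one degree of visual angle covers at the viewing distance:

    import math

    def pixels_per_degree(ppi, distance_inches):
        # One degree of visual angle spans 2 * d * tan(0.5 degrees) inches of screen
        inches_per_degree = 2 * distance_inches * math.tan(math.radians(0.5))
        return ppi * inches_per_degree

    print(round(pixels_per_degree(300, 10)))  # roughly 52-53 PPD, the 'Retina' threshold
    print(round(pixels_per_degree(326, 10)))  # roughly 57 PPD, the iPhone 4 at 10 inches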

Minutes after the launch, Apple fans started speculating about when other products would get a Retina display update. The top-of-the-range iPod touch followed in September 2010. The iPad got a Retina display in March 2012, followed soon after by Retina MacBook Pros in June 2012.

With every new launch event, many hope for a Retina display as part of the specifications for new Apple products. But what does Retina mean for iMacs and desktop displays?

In August 2010 (almost a year before the introduction of Final Cut Pro X) Apple applied for a user interface patent that is relevant to colour correcting video clips. They were awarded patent 8,468,465 today.

Although Apple has chosen a different UI for colour correction in Final Cut Pro, the UI shown in this new patent may turn up in future Apple applications.

Abstract

Some embodiments provide a computer program that provides a graphical user interface (GUI) for controlling an application. The GUI includes a contiguous two-dimensional sliding region for defining several values. The GUI also includes several sliders for moving within the sliding region. Each slider selects one or more values from the several values based on a position of the slider within the sliding region. The selected values are parameters for controlling one or more operations of the application.

Excerpt

Figure 16 from the patent, showing sliders positioned within a two-dimensional sliding region.
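
To make the abstract concrete, here is a hypothetical sketch of the idea: several sliders share one contiguous two-dimensional sliding region, and each slider’s position picks two parameter values at once. The parameter names and ranges below are my own illustration, not taken from the patent or any Apple API:

    def slider_values(x, y, region_width, region_height):
        # Map a slider's position in the sliding region to a pair of parameters,
        # e.g. hue across the region and saturation up the region
        hue = 360.0 * x / region_width
        saturation = 100.0 * y / region_height
        return hue, saturation

    # Three sliders (shadows, midtones, highlights) sharing the same sliding region
    region = (400, 300)
    sliders = {'shadows': (40, 60), 'midtones': (200, 150), 'highlights': (360, 280)}
    for name, (x, y) in sliders.items():
        print(name, slider_values(x, y, *region))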

Last week, Bret Victor posted A Brief Rant on the Future of Interaction Design, a must-read essay for those who think that the future of technology interaction will be primarily spent stroking flat panes of glass:

[T]ake out your favorite Magical And Revolutionary Technology Device. Use it for a bit.

What did you feel? Did it feel glassy? Did it have no connection whatsoever with the task you were performing?

I call this technology Pictures Under Glass. Pictures Under Glass sacrifice all the tactile richness of working with our hands, offering instead a hokey visual facade.

[…]

With an entire body at your command, do you seriously think the Future Of Interaction should be a single finger?

Perhaps he is right. Minority Report’s computer interactions have been very distracting for many OS user interface designers.

Part of the art of writing patents is to protect concepts that might be used in future products without delineating them too clearly.

Case in point: Apple was awarded a patent yesterday: Gestures for controlling, manipulating, and editing of media files using touch sensitive devices. Here’s the abstract:

Embodiments of the invention are directed to a system, method, and software for implementing gestures with touch sensitive devices (such as a touch sensitive display) for managing and editing media files on a computing device or system. Specifically, gestural inputs of a human hand over a touch/proximity sensitive device can be used to control, edit, and manipulate files, such as media files including without limitation graphical files, photo files and video files.

Seems mainly about Apple getting a patent for gestures used to edit video on multi-touch devices. But I think the interesting phrase there is proximity sensitive device. That means we’ll be able to edit without touching a screen (or wearing special gloves).

Hidden in the middle of the patent are the following two sentences:

Finally, using a multi-touch display that is capable of proximity detection … gestures of a finger can also be used to invoke hovering action that can be the equivalent of hovering a mouse icon over an image object.

Ironically, one of the arguments against making Flash available on multi-touch devices is the fact that the majority of Flash-implemented UI elements use the position of the mouse pointer – without the mouse button being clicked – as useful feedback to the user, a concept not possible using multi-touch. If devices included advanced proximity detection technology, then ‘mouseover’-equivalent events could be sent to Flash UIs – so they’d work the way they have since Shockwave and .fgd files.

Although granted yesterday, the patent was applied for in June 2007. In August 2007, I wrote about gestural edits that required the UI to be able to detect fingertip position while the finger isn’t touching the screen.

I also wrote about Apple being granted a patent for using a camera mounted to a portable device to detect hand movement in three dimensions.

Given that Apple say that Final Cut Pro has been rewritten from the ground up, it is very likely that it stores its information in a database that will be available to other applications and users. It is likely that multiple users will have access to the database at the same time.

That means new collaboration opportunities.

Sound

Given that the new interface is much clearer at helping users establish and change sync between clips of all kinds, it will be easier for sound editors to work on the same timelines as picture editors. They’ll be able to do a great deal of work on audio sweetening (including fixing sync on clips) while the picture editors continue to work. For audio specifically this would work better if the position of one audio clip – a voiceover for instance – could define where other clips dipped their levels.

Editing

Collaboration works best when each user can easily understand which parts of a project they can have a look at and modify.
A suggested user interface showing that a compound clip is unavailable for editing.

Perhaps collaboration between editors will be afforded by ‘checking out’ compound clips on a master timeline. ‘Checking out’ is a database term that means an individual record is locked while a specific person makes changes, but it can still be looked at, and other parts of the database can still be changed. In the case of Final Cut Pro X, the primary editor would be able to see that a compound clip is being worked on by an assistant while being temporarily unable to edit it.
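
As a minimal sketch of what ‘checking out’ could mean for a compound clip (illustrative Python only – nothing here reflects how Final Cut Pro X actually stores or locks projects):

    class CompoundClip:
        def __init__(self, name):
            self.name = name
            self.checked_out_by = None  # None means anyone may edit

        def check_out(self, user):
            if self.checked_out_by not in (None, user):
                raise RuntimeError(self.name + ' is being worked on by ' + self.checked_out_by)
            self.checked_out_by = user

        def edit(self, user, change):
            # Everyone can still look at the clip; only the lock holder can change it
            if self.checked_out_by not in (None, user):
                raise RuntimeError(self.name + ' is locked by ' + self.checked_out_by)
            print(user + ' applied "' + change + '" to ' + self.name)

        def check_in(self, user):
            if self.checked_out_by == user:
                self.checked_out_by = None

    verse = CompoundClip('Verse 2')
    verse.check_out('assistant')
    verse.edit('assistant', 'tighten cutaways')   # allowed while checked out
    # verse.edit('primary editor', 'trim head')   # would raise: clip is locked by the assistant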

As well as being a repository for a signed-off B-roll sequence, compound clips could also contain the verse in a music video, a scene in a movie or an episode in a web series. While I work on the verse/scene/episode, another editor could be repositioning the compound clip in the movie or even splitting it into two.

The Audition feature allows a given compound clip to display different alternatives, chosen using a Cover Flow-style display. Perhaps instead of choosing between individual shots, in future editors might choose between different edits of a compound clip. Senior editors could use this feature to quickly compare old and new edits by an assistant.

External suppliers

Maybe we’ll be able to give external suppliers access to chosen parts of our FCP X projects:

    Transcribers and translators (given access to specifically tagged ranges within clips, to which they’ll be able to directly add subtitling/closed captioning/alternate soundtracks)
    Picture graders
    Visual effects houses

We might even be able to choose which elements of our project are backed up on iCloud.

Other applications

Fun fact: if the Mac or iOS device is the dongle for Final Cut, then Apple loses nothing by allowing developers to create applications that have peer access to FCP X project databases – Core Data databases can only be served by OS X and iOS devices. That means if there is a market for applications that can edit FCP X data using an old-style, 20th-century (Avid/Premiere/FCP 7) UI, then Apple will allow third parties to develop them. The more applications that can manipulate FCP X project databases, the more high-margin iMacs Apple will sell.

Legacy tape & digital formats

There is a commonly believed rumour that Final Cut Pro X won’t have any DV and HDV tape capture features.

If Apple want metadata-based workflows to become more popular (preparing the way for more solutions to replace Unity setups at TV stations), it is likely that they’ll want metadata to work well with all media.

That means that if you import an FCP 6-7 project into X and spend the time adding metadata tags to people, places, shots, takes and regions, it is unlikely that Apple would throw that work away if someone decided to do a batch recapture in an older version of FCP and reimport the QuickTime files.

If an FCP X project is a database it is likely that if tape capture isn’t built into X, there’ll be another tool that will be able to add or change the database for those with tape-based content.

Batch recapturing doesn’t apply to tapes alone…

Given the multi-format editing nature of FCP X, the digitising tool might also be able to ‘re-link’ tapeless source data to projects that were converted from FCP 6 and 7. It makes sense that if you’ve used an H.264 or RED to ProRes 422 LT workflow in FCP 7, you could open the project in X and continue work after linking the timeline to the source RED or H.264 data.

Please note that this post shouldn’t be counted as a rumour, just speculation: now that Final Cut Pro has been rewritten, there could be new opportunities to improve its collaboration features.

The latest Mac rumour is that Apple will announce a multitouch trackpad for desktop Macs.

For this new device to be useful, Apple needs to define how multitouch works when you don’t look at what you’re touching but still need to be accurate. At the moment, you can use MacBook trackpad gestures for a variety of commands (previous page, next page, scaling), but these gestures don’t require accurate fingertip positioning.

In order not to have to look at the ‘magical’ trackpad we’re using with our Macs, we need to know where a touch at a specific position on the pad would land in the user interface of the current application. That means we can keep looking at the monitors we already have, but still get the benefits of multitouch manipulation.

In August of 2007, Apple patented a multitouch interface for a portable computer that uses a camera to detect where a person’s hands are before they touch the trackpad or keyboard.

Illustration from Apple's 2007 Multitouch patent featuring a camera detecting where a user's hands are when not touching the trackpad.

Now that we have a device for detecting where our fingertips are, Apple need to update the UI guidelines to allow for multiple cursors (one for each fingertip) and let fingers not touching the trackpad still send a ‘hover’ message to user interface items.

For example, they could use filled and unfilled circles to show where fingertips are. Unfilled to show where fingertips are hovering over the trackpad, filled to show contact:

A screenshot from Final Cut showing a fingertip touching one edit and another hovering over a different edit.

In this Final Cut example, one fingertip is touching an edit, another is hovering over a different edit. To select more than one edit, editors hold down the option key and click additional edits. In a multitouch UI, the editor could hold down a fingertip on the first edit and tap the other edits to extend the selection:
Final Cut screenshot showing four edits selected

The hovering fingertip circles could also show the context of what would happen if the user touched. Here’s an example from Avid:
Mockup of multitouch UI extensions to an Avid screenshot.

Here the editor has their left hand over the multitouch trackpad. The index finger is touching, so its red circle is filled. As we are in trim mode, the current cursor for the index finger is the B-side roller, because it is touching a roller. The other fingers are almost touching. They are shown as unfilled circles with faint cursors that are correct for where they are on the screen: the middle and ring fingers have arrow cursors; if the little (pinky) finger touched, it would be trimming the A-side roller.
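
A tiny sketch of the state model being described – each fingertip carries a touching/hovering flag and gets a context-sensitive cursor from whatever lies beneath it on screen (the names here are mine, not from any Apple API):

    from dataclasses import dataclass

    @dataclass
    class Fingertip:
        x: float
        y: float
        touching: bool  # filled circle if touching, unfilled if only hovering

    def cursor_for(finger, item_under_finger):
        # Choose the cursor from the UI element under the fingertip
        cursor = 'trim roller' if item_under_finger == 'roller' else 'arrow'
        style = 'filled' if finger.touching else 'unfilled, faint'
        return cursor + ' (' + style + ' circle)'

    print(cursor_for(Fingertip(100, 40, True), 'roller'))   # trim roller (filled circle)
    print(cursor_for(Fingertip(180, 42, False), 'clip'))    # arrow (unfilled, faint circle)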

Once you can directly manipulate the UI using multiple touch points, you’ll be able to get rid of even short-lived modes. I wrote about gestural edits back in 2007.

Ironically, once Apple does provide multitouch UI extensions to OS X, then the concepts of hovering, ‘mouseenter’ and ‘mouseleave’ can be added to Flash-based UIs for those using multitouch devices. Oh well!

In which I provide a link that compares HD video web page embedding options.

The design of this blog means I can’t show how large HD video embeds on web pages you control can be. This (non-designed*) page on my site compares size and quality between YouTube and Vimeo HD video.

The maximum embed size for YouTube is 853 by 480 and the service is free. If you pay $60 for the Vimeo ‘Plus’ service, you get 25,000 HD embedded plays at 1280 by 720.

A still from the HD video embedded on my site using both YouTube and Vimeo.

*I still avoid any HTML features more modern than tables

In which I show evidence that Apple hasn’t given up on Pro Apps, and suggest why they aren’t in any hurry to update them.

Given recent upgrades for other editing software, Final Cut users have been increasingly frustrated with a lack of news of updates from Apple.

Today’s announcement of a very minor update for Final Cut Pro (to version 6.0.6) will be the main topic of conversation on the web and at user group meetings in the weeks to come (such as the London SuperMeet on Thursday).

Some are saying that Apple have given up on editing software, and want to spend more time being a consumer products company.

I think that is unlikely. It is more likely that Apple isn’t releasing a new version until it is ready. As they don’t consider any other editing software as competition, they are letting technology trump marketing this time around.

In the 90s I used to beta test Director for Macromedia. It was long enough ago that we would get a care package every fortnight with 15-20 floppy discs. These would unstuff to be a new version of ‘Spike’ or whatever the codename was for the beta of Director 4, 5 or 6 we were testing. Every time we thought the programming team had only a couple of months left to squash the bugs we’d pointed out (as opposed to needing to sort so much that they’d never get it done), they’d send us a letter saying “Thanks for your help, we release in three weeks; please find a T-shirt enclosed.”

Macromedia needed to release at the next Macworld, or NAB or Comdex or whatever. The bugs were going to be fixed in version X.0.1 or 0.2.

But what is the evidence that Apple is still invested in Pro Apps such as Final Cut?

Apple is still looking for people to shape the future of Pro Apps

If you go to jobs.apple.com and search using ‘Pro Apps’ as the keyword, you get four listings:

Software Development Engineer Posted 9 Jun ’09
Sr Human Interface Designer, Pro Apps Posted 13 Jan ’09
Sr Visual Interface Designer, Pro Apps Posted 13 Jan ’09
Video Editor Product Designer, Pro Apps Posted 18 Nov ’08

These job descriptions tell a tale: the features and user interface of the next version of Final Cut were locked in November 2008. While the beta programme and bugfixing continued, it was time to hire an editor who knows about software development to join the team. He or she would be the person with the real-world experience to communicate to the programmers the way people in post production work today.

They didn’t find anyone who was quite right for that job, so they created two new job descriptions based on the previous one, but each looking for someone with more formal human interface design training [“Degree in interaction design, human factor and/or visual design (or equivalent).”]

Those jobs are still open, but on the 9th of this month, they posted the job description for someone to continue to develop the software behind the Pro Apps documentation system. A good time to hire someone new would be once a load of documentation for Final Cut Studio has changed.

Apple is working with external plugin makers on developer mailing lists

Although they can’t comment on unannounced products, if you follow the postings, they imply that version 6.0.6 will not be the last version of Final Cut.

For example, someone from the Apple team wrote this:

Sometime in the last year or two, I surveyed FxPlug developers and asked about which features they’d like to see, and one that came out near the top was “create windows in the UI.” If this was a feature you were looking for, can you remind me what it is that you need from it?

Although this might pique the interest of Final Cut users, I wouldn’t advise wading through the mailing list for nuggets for future features. You won’t find anything specific – certainly nothing committed to or worth basing your plans on.

Software development isn’t like pregnancy. It takes a different number of people every time. This time it has taken a lot longer because Apple have had a ton of work to do. The current assumption is that Final Cut has had to be re-written from the ground up. Code written back in ’97 and ’98 has to be junked to get rid of the rat’s nest of additions and modifications made over the years.

What about new features? I want them now!

I’ve already blogged about a great feature to add to Final Cut Studio which wouldn’t depend too much on existing or new code. You can bet that any feature that extends and re-enforces the Apple Pro hardware and software ecosystem will get priority.

The place to contribute to Final Cut Pro 7.5 and 8, therefore, is on user group sites with a lot of history. If you go to the LAFCPUG forum, they have a sticky topic that’s been around for years: ‘FCP Feature Requests’. If you think you have an original idea for a feature, read all the posts there first. If it hasn’t come up there, add a post on the end…

I don’t think Apple have given up on Pro Apps. The only problem they (and we) have is that they don’t consider Avid and Adobe proper competition any more. Premiere will forever be associated with enthusiastic amateurism, and Avid has only just passed Final Cut 6 feature-wise (in the eyes of FCP users) – which isn’t good enough for people to switch. If Apple felt more pressure from them, maybe we’d get new versions sooner. Competition is the only thing that will make Apple move more quickly.

Remember: Final Cut Studio is to high-end Macs what iTunes is to iPods/iPhones. Why would a few million dollars a year in software development not be worth all that hardware margin?

In which I suggest that timestamps of live comments during TV shows could be used to replay them when you catch up with a live event.

A few days ago I started to be wary of reading blog posts and tweets. I’ve been following the modern version of Battlestar Galactica. I may not have seen every episode, but I’ve enjoyed the most recent series. My social media reticence has been down to the fact that the concluding episode was shown in the US a few days ago. That episode will be on TV tonight. I’m looking forward to seeing it, but I’m off to the London Bloggers Meetup, so I’ll be playing it back later.

In the past I’ve noticed that big fans of some TV shows like to message others while the show is on air. They post messages to forums, tweet, add comments to blog posts in real time – as the episode unfolds.

I’m not enough of a fan to keep one eye on a computer screen while watching a great story, but I sometimes like to check out what people wrote once the show is over.

That prompted me to come up with an idea – it might be interesting to be able to follow people’s comments (audio as well as text) if they were optionally integrated into the stream – synced so that they were available at the time they were originally posted.

It would be like having informational subtitles or commentary tracks to a film or TV show – but they could be created by anyone anywhere. If I was selling content, I would provide an option to subscribe to alternate streams (of any content).
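
A minimal sketch of the syncing idea, assuming each comment is stored with the wall-clock time it was posted (the data below is made up): convert posting times into offsets from the start of the broadcast, then overlay each comment at the same offset during later playback:

    from datetime import datetime

    broadcast_start = datetime(2009, 3, 20, 21, 0)  # when the episode first aired (hypothetical)

    comments = [
        (datetime(2009, 3, 20, 21, 12), 'No way - are they really going back?'),
        (datetime(2009, 3, 20, 21, 47), 'That reveal explains the whole series.'),
    ]

    # Offsets from the start of the show, reusable whenever the recording is played back
    for posted, text in comments:
        offset = posted - broadcast_start
        print('show at +' + str(offset) + ': ' + text)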

This follows on from my post on using a simple tag to extend HTML to allow overlays on everything.

Create a live chatroom with a single click at TinyChat.

In which I remind you of Apple’s concept videos from the 80s and suggest it is time for a new one.

In 1987 and 1988 Apple were still facing an uphill battle with businesses when it came to convincing them that graphical user interfaces were better than MS-DOS command-line interfaces. Part of their campaign to show that the Mac way of doing things was the start of the future of computing was to create speculative videos of how computers might evolve in ensuing years.

The Knowledge Navigator video was set in the far-off year of 2010. In 1987 John Sculley suggested that if Apple defined an ideal future for the Mac, it would be more likely for that future to happen. Commentators have theorised that Moore’s Law, the prediction for the rate of improvement of the amount of computer power available at a given price, has galvanised technologists to do all they can to do better than predicted.

The year isn’t stated in the video, but the figures presented only run up to 2009, so I’m guessing this is set in 2010.

It is interesting to see how close we are to this kind of interaction with our technology. In 2003 Jon Udell revisited this video and commented:

Presence, attention management, and multimodal communication are woven into the piece in ways that we can clearly imagine if not yet achieve. “Contact Jill,” says Prof. Bradford at one point. Moments later the computer announces that Jill is available, and brings her onscreen. While they collaboratively create some data visualizations, other calls are held in the background and then announced when the call ends. I feel as if we ought to be further down this road than we are. A universal canvas on which we can blend data from different sources is going to require clever data preparation and serious transformation magic.

Last week Stephen Wolfram announced that his next project is an online system that can take your natural language questions and compute answers for you. That reminded me of Apple’s Knowledge Navigator. I imagine it will be able to answer questions like:

“Is there a link between the size of the Sahara and deforestation on the Amazon rainforest?” “What if we bring down the logging rate to 100,000 acres a year?”

It’ll be a while until we have foldable screens, but it seems that if WolframAlpha can be made to work, we might be closer to the Knowledge Navigator, or what computers should be doing for us anyway.

In 1988 Apple made another video, one that is less famous, but much more accurate in its predictions. Which is another way of saying, if we were to make a video today about 2020, this is what we’d be predicting right now.

An OK-quality video can be found at http://www.mprove.de/uni/asi/futureshock.html

…or you can stay here and watch it encoded for YouTube:

You’ll see that some of the ideas are still being speculated about today.

Microsoft especially likes the idea of real objects interacting with technology (as used in their Surface product). Microsoft has a video set 10 years in the future. It starts off with some impossible-to-implement stuff in a classroom, but continues with some good ideas:

(About that classroom: it’s all very well having augmented reality ideas (overlaying graphics onto the real world), but they can only work when there’s an audience of one – the display needs to take account of the position of the viewer’s eyes to line up the graphics in the right place. The kind of classroom telepresence shown at the start of the video would only work for one kid in each classroom at a time. For everyone else, the display would look odd and distorted. For more on this, see an older blog post.)

A much more realistic and specific Microsoft video was made in 2004, and set in 2010. You’ll see their estimate of what we’ll be able to do in 20 months’ time:

On the subject of speculative videos, maybe we should start thinking of one for the creative industries. If collaboration is what makes TV and movies so satisfying, how will technology support media production in 2020? Or is the ultimate aim for 3D movies to spring out of people’s heads fully formed?
