…I mean second-generation iPod Touch. The only reason I’m not buying one today is that the capacity is too low for me. Steve says that the best-selling iPod is the Nano. If the Nano sells best, then most people are happy with 8GB of storage for their music collections. They don’t need any more. That’s why the Touch only comes in 8 and 16GB.

My music library is too large for my 60GB iPod, so I’ll bide my time until Apple release a Touch for people with large media collections…

…a version with a 160GB drive.

Facebook. I’m seeing it as a test of insecurity. That says a lot about me. Why have a map of the world that shows where I’ve been? What should I think that says about me? How many friends would I like to have listed on my profile? What if my friends think I don’t have ‘enough’ or have ‘too many’?

There’s an article about Facebook’s definition of friendship on First Monday.

Then there’s the issue of meeting up with long-lost school friends and workmates.

When I find people I haven’t seen for twenty years, what do I say to them? When they ask me how I’ve done over the years, do I need to exaggerate? Facebook can be like a school reunion every time you connect with a name from the past – you know how bad they can be…

Norman Hollyn thinks that the war is over.

Bill Gates said that HD DVD and Blu-ray would be the last hardware formats. Looks like he was almost right – it’ll be DVD. When I got my first DVD player, I explained it to my VHS-owning friends as a CD for movies. You don’t have to wind through stuff to jump to the bit of the film you want. You can have different soundtracks if you want to watch a foreign film in English. And watching a DVD many, many times doesn’t damage it.

I never mentioned the picture quality. I didn’t talk about the extras (what few there were in those days). I knew that most people weren’t interested in that. They liked the convenience and the resilience. They knew how to deal with CDs. They’d been through the transition from vinyl.

In the UK, 100% of TVs sold have a 16:9 aspect ratio. I would guess that the installed base of 4:3 TVs is down to less than 40%. We don’t associate widescreen TV with HD; it’s just the norm. This happened over six years ago, when the main commercial channels mandated that all commercials be delivered to the network at a 16:9 aspect ratio with a 14:9 safe area. This meant that people with 4:3 TVs got a 14:9 image with a little matting at the top and bottom, while 16:9 TVs got adverts and programmes where the action and titles were limited to the 14:9 centre of the screen.
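
To make the compromise concrete, here’s a quick back-of-the-envelope sketch of those matte sizes. I’m assuming a square-pixel PAL frame (768×576 for 4:3) purely for illustration:

```python
# Rough matte arithmetic for the 14:9 compromise, assuming a
# square-pixel PAL frame (768x576 for a 4:3 screen).

def matte_height(screen_w, screen_h, image_aspect):
    """Height of each matte bar when an image of the given aspect
    ratio is fitted to the full width of the screen."""
    image_h = screen_w / image_aspect
    return (screen_h - image_h) / 2

# A 14:9 image on a 4:3 screen leaves only thin bars top and bottom...
print(round(matte_height(768, 576, 14 / 9)))  # ~41 lines per bar
# ...compared with the deeper bars a full 16:9 image would need.
print(round(matte_height(768, 576, 16 / 9)))  # ~72 lines per bar
```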

We also have slightly higher resolution in PAL than NTSC (576 visible lines against 480). NTSC has more frames per second, so it has slightly smoother motion and less strobing (unless the picture was originated on film). There’s less demand for HD broadcasting in the UK.

A year or so ago, I was backstage at a conference. I noticed that the majority of the crew were using up the time between video, sound and lighting cues by browsing YouTube. They needed an unlimited supply of short video clips to fill variable chunks of time. These are the sort of people who buy the latest widescreen TVs. Some have HD cameras. They spent their time watching low-res Flash video.

I think they’d agree with me: HD is for acquisition. For consumption, SD is good enough. 1080p24 for production. Eventually 720p for consumption, but not for a while.

Over the weekend a friend of mine brought over some footage that he wanted to review with me. It was from one of those ‘let’s have the party right here’ moments he was part of a few months ago. He was on location, hanging around with his part of the crew, with a couple of days off between shooting days. The gang dropped in on a friend and, after a few hours of R&R, they came up with an idea. They had lights, cameras, sound equipment, a good location and talent. Why not shoot a quick short? Actor friends were called; they turned up.

The film was a single scene short with a twist in the coda. The single scene required a group of five to ten actors to improvise on a theme for a while. Then one would deliver a line in the coda that would change the way we saw the previous scene.

So, Saturday was the day to review the seven tapes that were produced that day – 18 months after the shoot. The people involved are busy, so it took time for my friend to ask me whether I could help out. That meant that memories of what was shot, what worked and what might not have worked had long since faded. At least he would be seeing the footage with an eye almost as fresh as mine.

It turned out that there were problems with what they had. As the sun went down, the many windows in the location turned into mirrors, and some of the lighting was visible in the reflections. Maybe that could be matted out. We had trouble finding out which tapes had the production sound. It was a three-camera shoot, so the recordist connected their mixer to one of the cameras – the one on a locked-off wide – but the levels were so low that dialogue could only be heard clearly with my amp at 10. It’s usually set somewhere between 2 and 3.

But the main problem wasn’t in the sound or the picture. It was the direction. The premise was that the actors are stuck in one location for an hour or so. The improvisation they came up with followed the initial direction well, but it didn’t lead anywhere.

Each actor was told to come up with a character on their own and reveal it during the improv. The actors were then filmed non-stop for 50 minutes. The tapes were changed, some notes were given and they went at it again for another 30 minutes.

As my friend watched, his estimate of how many useful minutes of footage we might get fell as time went on. ‘We should be able to make 15 minutes out of this,’ was his initial estimate. This changed to ‘hopefully we might be able to salvage 5 minutes’, then to ‘I don’t think we’ll be able to get 30 seconds’.

I think that my friends will learn from this. Being amongst film crews is very different to leading the crew and developing the story with the actors.

I suppose next time they’ll have more of a plan for directing the actors to produce work that can be used. I would suggest a non-camera rehearsal to discover what characters the actors had come up with. Then the group could be directed to improvise for five minutes – say, person C wants something, with person E resisting, while A, B, D and F end up supporting one side or the other, or abstaining from the conversation. After some sort of resolution, the director could come in, choose a new protagonist and antagonist, and set a new goal. After another five minutes, the director and group could review where that led. They would then have the choice to explore further along that line, or to redo the segment to see if it leads somewhere else.

If they had developed the story this way, they would have ended up with a series of clear beats between people: mini-scenes that could be extended, modified or removed. Material that would have given the director a choice about what to include and what to omit.

And if that didn’t work, they would have learned from that and gone on to do better.

‘Pain is the best teacher’

I was listening to an edition of Creative Planet’s Digital Production Buzz podcast where they talked to some people from DTS about digital restoration work (32 min 39 sec in).

DTS have a system with enough maths in it to calculate what was originally captured on the film negative. The work isn’t done by eye by expert restorers. The system understands the maths of lenses, emulsions, stocks and film grain, and calculates what detail has gone missing over the years.

This runs counter to a blog post referred to me by Jean P. It covers what information about our current movies will make it into the future for scholars researching the 20th and 21st centuries.

The element of the DTS story that is relevant to today’s production is that the same restoration technology can solve problems that are all too common right now. They talk about recovering footage that was processed incorrectly, and even dealing with footage that was shot out of focus! They can also increase resolution from SD to well past HD…

Their system is not real-time, but it can deal with digital files as well as celluloid. This isn’t a photochemical process. They apply mathematical equations to digital files. They say they are solving digital signal processing problems.

One of their secrets is using information from adjacent frames to learn about the current frame. Even when a camera is locked off, individual film grains or digital pixels receive slightly different light from frame to frame. Very few shots are locked off completely solidly: cameras move, subjects move. The differences between frames provide the extra resolution. They use them to work out what light would have been captured between the film grains and digital pixels, and that gives more detail in every frame.
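
As a toy illustration of the principle – not DTS’s actual algorithm, which also models lenses, emulsions and grain – here’s a minimal shift-and-add sketch. It assumes the per-frame motion is already known, which in practice is the hard part:

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Upscale by `scale` by placing samples from several frames of the
    same shot onto a finer grid, exploiting each frame's slight motion.

    frames  -- list of 2D greyscale arrays, all the same shape
    offsets -- per-frame (dy, dx) sub-pixel motion, assumed known here
    """
    h, w = frames[0].shape
    acc = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(acc)
    for frame, (dy, dx) in zip(frames, offsets):
        # Each low-res sample lands at its true sub-pixel position on
        # the high-res grid (wrapped at the edges for simplicity).
        ys = (np.arange(h)[:, None] * scale + round(dy * scale)) % (h * scale)
        xs = (np.arange(w)[None, :] * scale + round(dx * scale)) % (w * scale)
        acc[ys, xs] += frame
        hits[ys, xs] += 1
    hits[hits == 0] = 1  # cells no frame sampled are left black
    return acc / hits

# e.g. four slightly shaken frames of a 'locked-off' shot:
# detail = shift_and_add(frames, [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)])
```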

This means that your old Hi8 or home DV footage may hold enough information to be scaled up to HD or better in the future. For now this software is only available from DTS. One day we’ll have the software and processing power to do this to our own footage.

At last we’ll have technology in the real world that will be able to do what they do in movies and TV shows like CSI and other police procedurals: select a small part of a video image and press the button marked ‘Enhance!’

My tip for security camera designers: increase the number of pictures you capture per second at the current resolution, so that signal processing tools will be able to reveal details never seen before.

Robert X. Cringely thinks that the next technology killer app will be telepresence. That’s this year’s name for video conferencing.

HP have a system that costs $300,000 to set up, with fees of $18,000 to operate. You get a special room with HD displays and cameras, a fast internet connection and support. HP developed this with DreamWorks; it was designed to support the post-production process.

Mr. Cringely goes on to suggest that Apple might sell home telepresence as their next consumer killer app. iMacs and portables already have screens and cameras built in, and Apple have the software and marketing expertise to sell the idea to the general public.

The thing I miss about full-time work is regularly spending my days with people I like. Chatting about random subjects. Giving them feedback about their work and lives. Telepresence will work for many more people when it gives us the social element of working with other people in person.

I think the key to telepresence – to making working from home much more like being in the office – would be a dedicated screen and the addition of a second camera. The first camera would be in a dedicated screen next to your main monitor. You could even use an autocue/prompting-type mirror to line up the person on the screen with the camera that’s watching you. This would be the normal personal interaction camera. I think having a separate screen would help people talk to each other more comfortably. People don’t usually look directly and continuously into each other’s eyes as they talk; they like to cut away to other things in the room.

A second camera would emulate the non-verbal negotiation we do when we decide whether the person we’re looking at is in a state where they can be talked to. This camera would be positioned at 90 degrees to the conversation camera, behind the user. This would be framed as a mid-shot – showing the person from the waist up. This is the sort of view you get when walking past someone’s office, or looking over at their desk in an open plan office. The view that helps you see whether someone is free to talk. Also the view that can tell you if they have time for general chat, or are only free to talk business.

Once we have those extra channels of information, it’ll be a great deal easier to work from home because you won’t be missing your friends at work.

…is to spend less money than you’ll earn!

My visit to the Cannes Film Festival last year taught me that below the line is where I want to be. For the foreseeable future.

I read a table of figures in the Hollywood Reporter that listed the going rates for selling all rights for movies in non-US markets in 2005-06. Here’s an excerpt:

Going rate for all rights in non-US markets, in $000, for a film budgeted at $3 million
Hollywood Reporter, May 2006

France 160
Germany/Austria 300
Greece 30
Italy 250
Netherlands 80
Portugal 40
Scandinavia 225
Spain 150
UK 200
Europe total 1435
Australia/New Zealand 75
Hong Kong 25
Indonesia 30
Japan 300
Malaysia 25
Philippines 35
Singapore 30
South Korea 275
Taiwan 100
Asia/Pacific Rim total 895
Argentina/Paraguay/Uruguay 40
Bolivia/Ecuador/Peru 20
Brazil 100
Chile 25
Colombia 20
Mexico 100
Venezuela 20
Latin America total 325
Czech Republic/Slovakia 50
Former Yugoslavia 15
Hungary 60
Poland 75
Russia 175
Eastern Europe total 375
China 40
India 40
Israel 15
Middle East 20
Pakistan 10
South Africa 30
Turkey 60
Others total 215
All non-US markets 3245

For example, if you have a movie with a budget of $3 million, selling the UK distribution rights would get you an average of $200,000. If you sold it to every country in Western Europe, you’d get $1.4m. Eastern Europe brings in $375,000. China pays an average of $40,000 for distribution rights! This adds up to $3,245,000 for all non-US rights.

Not very much.

The US rights usually get you four to five times as much as the UK rights. That means another $900K on top. So if you get US rights and sell rights to half of the world, you’d get around $2.5 million in total. Selling the rights to half the world is the most you can reasonably expect. Then you need to factor in your sales agent (aka producer’s rep), who charges 15%-25% to close the deals at film markets like Cannes. It also takes a long time for this money to come in as deals are done: you get your first money six months into the process as the major territories are sold, then your income slowly falls to nothing over another two years.
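
A quick sketch of that arithmetic, using the figures from the table (all in $000). The 4.5× US multiplier is just my midpoint of the ‘four to five times’ range:

```python
# Sanity-checking the sums above, in $000, for a $3m-budget film.
regions = {"Europe": 1435, "Asia/Pacific Rim": 895, "Latin America": 325,
           "Eastern Europe": 375, "Others": 215}
non_us = sum(regions.values())
print(non_us)            # 3245 -> $3,245,000 for all non-US rights

uk = 200
us = uk * 4.5            # 'four to five times the UK rights' -> ~$900K
print(us + non_us / 2)   # ~2522.5 -> roughly $2.5m for US rights plus half the world
```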

But what are you selling for your $200,000 UK rights? If you are a new producer without much clout, you are selling everything. The UK distributor can show it in as many cinemas as they like, press as many DVDs as they want and get whatever they can for cable, satellite and TV showings. You get none of that. They get to exploit your film for seven to fifteen years for that one-off fee of $200,000!

These are the figures for average deals for new producers. I suppose the trick is to have an above-average film – and to be a producer with more experience and clout! That’s why I like being a line item in a budget, based purely on a weekly rate and the number of weeks worked…

In the UK there has been a series of TV scandals – scandals that pundits say have ‘eroded the public trust in television.’ They range from people entering phone-in competitions on premium-rate telephone numbers with no chance of winning, to a documentary about Alzheimer’s that implied its subject had died on camera when in fact they went into a coma that led to their death three days later.

The latest kneejerk reaction: a UK TV channel has banned most of the cutaways regularly used in TV news.

At the moment news programmes are peppered with what Five’s news editor David Kermode describes as “rather hackneyed tricks”. He’s referring to interviewer ‘noddies’ and question-asking shots that are recorded after the interviewee has left the scene. He is also banning the generic silent shots of interviewees walking down corridors and into offices – the shots that reporters usually cover with a voiceover to provide story context. Kermode calls these “contrived”.

He said viewers “have a pretty good grasp of what an ‘edit’ is, so I think the time has come to be honest about signposting when we edit our interviews”. That shows that he doesn’t understand the need for storytelling techniques in communication.

This ban is supposed to restore viewers’ trust in TV news… but I’d be surprised if any viewers have noticed these ‘tricks’. What they will notice is interviews made up of interviewee shots crossfading from clip to clip.

It’s surprising that the general public’s trust in British TV has lasted this far into the 21st century. I think that TV companies should forget about trying to regain that trust with empty gestures and get on with making good TV.

The BBC’s Newsnight show is asking viewers whether this ban is a good idea. There’s a good set of responses to that question on their website.

Earlier this month Apple patented a multi-touch interface device for a portable computer. One illustration shows a camera above the screen that can detect people’s hands over a wide trackpad:

Apple’s multitouch laptop patent image

This means that until every screen we have is touch sensitive, we’ll have touch devices that recognise multiple fingers at the same time and use them to manipulate things on a separate screen. They’ll have the same feature that the iPhone has: they’ll be able to detect fingers that haven’t quite touched yet. The advantage of that is we can choose where we touch before committing.

Following on from the previous post, fingers that hover could be shown as unfilled circles, while fingers that are touching would be filled transparent circles.

Four fingers on a multitouch control using Avid software

In this example, the editor has their left hand over the multi-touch device. The index finger is touching, so its red circle is filled. As we are in trim mode, the current cursor for the index finger is the B-side roller, because it is touching a roller. The other fingers are almost touching; they are shown as unfilled circles with faint cursors appropriate to where they are on the screen: the middle and ring fingers have the arrow cursor, and if the little (pinky) finger touched, it would be trimming the A-side roller.
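
Here’s a small sketch of that per-finger feedback logic. The element and cursor names are made up for illustration – the only assumption is that the device reports each finger’s position and whether it is hovering or touching:

```python
# Hypothetical per-finger feedback, as in the trim-mode example above.
# element_under() and the cursor names are illustrative, not a real API.

CURSORS = {"a_roller": "trim_a_side", "b_roller": "trim_b_side"}

def finger_feedback(fingers, element_under):
    """fingers: iterable of (x, y, touching) tuples, one per detected finger."""
    feedback = []
    for x, y, touching in fingers:
        element = element_under(x, y)                  # e.g. 'b_roller' or None
        cursor = CURSORS.get(element, "arrow")         # default to the arrow cursor
        circle = "filled" if touching else "unfilled"  # touching vs hovering
        feedback.append({"pos": (x, y), "circle": circle, "cursor": cursor})
    return feedback
```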

Looks like it might be possible to come up with user interface extensions that let us use new interface devices with older software.

Here’s how a multi-touch interface might work when refining edits. In these screenshots, touching fingertips are shown as semi-transparent ellipses. When a fingertip is detected above the surface but not touching, it is shown as a semi-transparent circle. I’m using FCP screenshots, but this could work the same way in Avid.

Firstly, you could select edits by tapping them directly. If you want to select more than one edit, you could hold a finger on a selected edit and tap the other edits:

Tapping edits. Hold down one and tap the others you want.

The edits selected:

The edits selected.

With edits selected, you can then ripple and roll using two fingers. In the example below, the left finger stays still and the right finger (on ’14 and 13 skin’) moves left and right to ripple the right-hand side of the edits. The software could show which side of the edit is changing as you drag the clips to the right:

Moving the right finger will ripple the right-hand side of the edit.

If you want to move the left-hand sides of the edits you’d move your left finger and hold the right finger still.

If you wanted to roll the edit, you could use a single finger to move the edits left or right:

Using a single finger would roll the edit.

If you wanted to slip or slide a clip, you could select the edits at each end of the clip:

Preparing to slip or slide a clip.

The way you use your fingers defines whether you do a slip or a slide. Which ‘rollers’ get highlighted shows which kind of edit you are performing. If you hold an adjacent clip with one finger and move a finger in the middle of the clip, you get a slip edit (the clips before and after stay the same; the content within the clip changes):

Slipping a clip by holding an adjacent clip

If you only use one finger to move the middle of the clip, you get a slide (the content within the clip stays the same; the clip moves backwards or forwards within the timeline, modifying the clips before and after):

Sliding a clip by moving the middle of the clip
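
Pulling these examples together, here’s a hypothetical sketch of how software might classify the gestures. It assumes the device can tell which timeline element sits under each touching finger – an edit point, the middle of a clip, or an adjacent clip – and which fingers are moving. None of these names come from FCP or Avid:

```python
# Hypothetical gesture classification for the trims described above.
# Element names ('edit', 'clip_middle', 'adjacent_clip') are illustrative.

def classify_trim_gesture(touches):
    """touches: list of (element, is_moving) pairs, one per finger down."""
    if len(touches) == 1:
        element, moving = touches[0]
        if moving and element == "edit":
            return "roll"        # one finger dragging a selected edit
        if moving and element == "clip_middle":
            return "slide"       # one finger dragging the clip body
    if len(touches) == 2:
        elements = {element for element, _ in touches}
        movers = [moving for _, moving in touches]
        if elements == {"edit"} and movers.count(True) == 1:
            return "ripple"      # one edit held still, the other dragged
        if elements == {"adjacent_clip", "clip_middle"}:
            return "slip"        # adjacent clip held, clip body dragged
    return None

# e.g. holding one selected edit still while dragging the other:
print(classify_trim_gesture([("edit", False), ("edit", True)]))  # ripple
```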

It doesn’t take too much to create gestures for other edits…