I’m not posting much of substance today apart from pointing out that I’ve added a comment and footnote to the post about YouTube’s HD service, and to remind people:

If you don’t have a backup plan, make a backup of all your files now. Then create a backup plan and stick to it.

The hard drive on my parents’ computer failed today (during a backup). Hence this message.

As Matt Davis said in a recent presentation: ‘A file only exists if it is in more than one place’.

Sometimes working on low-budget projects means leaving the initial edit to others. Researchers sometimes already have plans for the footage. Producers might not have the funds for all your editing time.

That means preparing the way for a paper edit. You send the footage to someone else who sends you back an initial edit as a document listing a series of timecodes indicating what footage goes where:
[01:47:22]-Mr. Thomas: "That's when we decided to extend the name of the club using the initials of the new members..." -[01:48:12] "...to who the second 'A' was."
[02:12:01]-Mr. Yankson: "I thought it a demotion..." -[02:12:56] "...the Muddy Lawn."

They usually refer to hours, minutes and seconds, because the specifics of frames don’t apply to paper edits.
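If you end up writing tools around paper edits, the bracketed times are easy to machine-read. Here is a minimal Python sketch; the line format follows the examples above, but the helper names are my own, not any standard:

```python
import re

# Matches the first and last [HH:MM:SS] timecode on a paper-edit line,
# e.g. [01:47:22]-Mr. Thomas: "..." -[01:48:12] "..."
LINE = re.compile(r"\[(\d{2}):(\d{2}):(\d{2})\].*?\[(\d{2}):(\d{2}):(\d{2})\]")

def to_seconds(h, m, s):
    return int(h) * 3600 + int(m) * 60 + int(s)

def parse_paper_edit_line(line):
    """Return (in point, out point, duration) in seconds, or None."""
    match = LINE.search(line)
    if not match:
        return None
    h1, m1, s1, h2, m2, s2 = match.groups()
    start, end = to_seconds(h1, m1, s1), to_seconds(h2, m2, s2)
    return start, end, end - start

print(parse_paper_edit_line(
    '[01:47:22]-Mr. Thomas: "That\'s when we decided..." -[01:48:12] "..."'
))  # → (6442, 6492, 50)
```

From there it’s a short step to turning a whole paper-edit document into a list of clips for your NLE.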

How do they know what timecode to put in their documents?

The simplest option is to use QuickTime Player. The counter in the bottom-left of the window usually shows the number of minutes and seconds elapsed since the start of the movie:

If you move the mouse over this counter, it changes into a pop-up menu where you can choose to display the source timecode of the movie (the timecode used within Final Cut, Avid or Premiere):


If the person who is doing the paper edit refers to this time, you can use it within your editing software: “Use the third time the guard opens the cell door [from 36:28 until 36:42]”

If you are not sure whether your collaborator will have access to QuickTime Player, or if they require footage in another format, it is better to add timecode to the video itself:

This is known as a “timecode burn”.

The most straightforward way in Final Cut Pro is to use Andy Mees’ Timecode Generator plugin. Before you add it to your timeline, enter a value for duration at least as long as your timeline:

Download it from his page (by clicking the screenshot at the top of the page).

Another option is to use a separate application to add a timecode burn to your movies: QT Sync. It was originally created to fix QuickTime movies whose audio and video are out of sync, but it also has an option to add timecode to existing movies. This can be useful if a movie takes a long time to render and you want a version without a timecode burn:

Today I presented at what used to be called the AppleExpo here in London. I spent a few minutes telling people about my plugins (links in my Final Cut Pro page) as part of MacVideo Live.

Jonathan Harrison gave a presentation on interview lighting that reminded me of a principle useful in editing and post production as well as lighting, camera and shot setup:

Three things attract the viewer’s attention in a given shot:
1. The thing that is moving
2. The brightest thing
3. The sharpest thing

Of course the DoP suggests that getting this right in front of the camera is best, but videographers, compositors and editors don’t always have the luxury of perfectly captured footage. At least we have the software technology to control what is seen to be moving, how bright parts of the frame are, and which objects are in focus and which are out of focus. (Once we know where the audience is looking, there is no need to worry about continuity where the audience isn’t looking.)

Use these tools to direct the view of the audience without them knowing. We need to use the tools of visual storytelling to help the people we are telling our tales to forget they are being told a story.

In an extensive interview at Variety, James Cameron has a lot to say about 3D production, but he also mentions the paper tiger that is 4K resolution for movies:

4K is a concept born in fear. When the studios were looking at converting to digital cinemas, they were afraid of change, and searched for reasons not to do it. One reason they hit upon was that if people were buying HD monitors for the home, with 1080×1920 resolution, and that was virtually the same as the 2K standard being proposed, then why would people go to the cinema?

He suggests that instead of having 4K (a 4096×3112 frame) 24 times a second, it’s better to go for 2K (2048×1536) 48 times a second. This would reduce the motion artifacts seen at 24 fps. ‘Motion artifacts’ most often happen when the camera pans too quickly – a juddering effect when 24 frames every second isn’t enough to show all the detail we would normally see if we turned our heads at the same rate.

[For those of you who are used to 2K being 1920×1080 and 4K being 4096×2160, I’m referring to the resolution of the full 35mm frame, which is cropped down for different aspect ratios when projected. Wikipedia has more on this.]

Artifacts also occur when objects such as cartwheels and hubcaps have detail that repeats at a rate close to the frame rate – as a picture is taken every 24th of a second and the pattern looks very similar every 24th of a second, it looks as if the pattern hasn’t moved far and the wheel seems to be turning very slowly even though the cart or car is moving quickly.

If regular patterns move at rates close to the frame rate, you get strobing. The upper wheel is moving at a third of the speed of the lower wheel.

The spokes in the lower wheel are moving so fast that each one rotates almost as far as the distance between two spokes, which makes it seem as if the spokes are moving backwards. You can see from the broken spoke that the wheel is still moving forwards.
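The wagon-wheel effect can be sketched numerically: because identical spokes are interchangeable, the camera only records rotation modulo the spoke spacing, and anything past the halfway point reads as backwards motion. A toy Python calculation (the figures are chosen for illustration):

```python
# Apparent per-frame movement of a spoked wheel filmed at a given frame
# rate. The camera can't tell one identical spoke from another, so
# rotation is only seen modulo the spoke spacing.
def apparent_spoke_motion(rotation_deg_per_sec, spokes, fps=24):
    spacing = 360.0 / spokes                # angle between adjacent spokes
    per_frame = rotation_deg_per_sec / fps  # true rotation per frame
    aliased = per_frame % spacing           # what the camera records
    if aliased > spacing / 2:
        aliased -= spacing                  # perceived as reverse motion
    return aliased

# A 12-spoke wheel (spokes every 30 degrees) turning 648 degrees/s at
# 24fps: true motion is 27 degrees per frame, but on screen it appears
# to creep backwards by 3 degrees per frame.
print(apparent_spoke_motion(648, 12))  # → -3.0
```

Doubling the frame rate halves the per-frame rotation, which is why 48fps pushes the backwards-wheel illusion out to much faster rotation speeds.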

Juddering pans and strobing wheels still occur at 4K; 4K just gives us a more accurate representation of these effects. 2K twice as often will reduce them a great deal. 2K at 48fps is better than 4K at 24fps: temporal resolution is more important than spatial resolution. This is why interlacing has survived into the digital era – those who want to show sport insisted on rates of 50 or 60 fields per second for their broadcasts. Due to bandwidth limitations, they would rather have half the vertical resolution (1920 by 540) twice as often.

Another advantage is that the data rate for storing and transmitting the footage would be lower: 24 times 4096×3112 is 306 million pixels per second, whereas 48 times 2048×1536 is 151 million pixels per second.
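Checking the arithmetic in Python:

```python
# Raw pixel throughput for the two proposals being compared.
fourk_24 = 24 * 4096 * 3112  # 4K full 35mm frame, 24 times a second
twok_48 = 48 * 2048 * 1536   # 2K full 35mm frame, 48 times a second

print(f"4K@24: {fourk_24 / 1e6:.0f} million pixels/s")  # → 306 million
print(f"2K@48: {twok_48 / 1e6:.0f} million pixels/s")   # → 151 million
```

So 2K at 48fps needs roughly half the pixel throughput of 4K at 24fps while delivering twice the temporal resolution.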

Cinema owners may have to let go of beating in-home systems with visual technology; they’ll have to concentrate on the architectural and social elements of a big night out at the movies.

If 2K at 48fps is adopted, the post process will need to produce content at both frame rates. 24fps has been a standard for so long that it will take years for projectors around the world to be replaced with digital projectors. As it costs $1,500 to produce a film print at 24fps, the sum will almost double for 48fps. With reel breaks happening twice as often, film projection at 48fps isn’t worth the benefits of the extra temporal resolution.

This isn’t that big a deal for editors. If we treat the extra frames per second in the same way we (used to) deal with interlaced footage, there shouldn’t be too much of a problem. Timecode can stay the same. We’ll stick to making edits only on 24ths of a second. If a 48fps movie is being mastered, it’ll get a bonus frame at the end of each shot. We’ll probably edit away at 24fps for now. Once the edits have been agreed on, we’ll be able to watch at 48fps to see if any moments added at the end of a shot are undesirable. We can then move the edit back a 24th of a second if need be.
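As a sketch of how such a 24/48 workflow might map cut points – my own illustration, not any NLE’s actual behaviour:

```python
# Hypothetical sketch: cuts agreed at 24fps (expressed as 24fps frame
# numbers) map to 48fps frame numbers by doubling. Each shot then picks
# up one extra 48fps frame at its tail (the "bonus frame"), which can
# be trimmed off if it reveals an unwanted moment.
def to_48fps_span(in_24, out_24, keep_bonus_frame=True):
    in_48 = in_24 * 2
    out_48 = out_24 * 2 + (1 if keep_bonus_frame else 0)
    return in_48, out_48

# A shot running from frame 10 to frame 33 at 24fps:
print(to_48fps_span(10, 33))         # → (20, 67) with the bonus tail frame
print(to_48fps_span(10, 33, False))  # → (20, 66) bonus frame trimmed off
```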

It won’t take too much effort for Avid, Apple and Sony to add features to enable 24/48 fps workflows in their software. The sooner they do, the better the fidelity of the movies we make.

Yesterday, the Hollywood Reporter announced that Avid are researching ways to make their products work with 3D footage. I would characterise the kind of footage they mean as ‘2.5D’ – two cameras shoot simultaneously from slightly different positions to simulate human stereoscopic vision.

The article refers to the ‘Over and under’ 3D technique. In the days of film, that meant that each frame of celluloid had two slightly different images – anamorphically squeezed so one appeared above the other. These days it probably means that each moment in time is represented by two pictures in a single file, i.e. at 01:04:25:16 in the media file there are two images – one for the left eye, one for the right.
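Splitting an over/under frame back into two eyes is simple in principle. A toy sketch, treating a frame as a list of pixel rows and assuming the left eye sits in the top half (which half is which varies by system, so treat that as an assumption):

```python
# Split an 'over and under' packed frame into its two eye images.
# A frame is modelled here as a list of rows; real code would slice a
# pixel buffer the same way along its vertical axis.
def split_over_under(frame_rows):
    half = len(frame_rows) // 2
    top, bottom = frame_rows[:half], frame_rows[half:]
    return top, bottom  # (left eye, right eye) under our assumption

# A stand-in 1080-row packed frame yields two 540-row eye images.
packed = [[0] * 1920 for _ in range(1080)]
left, right = split_over_under(packed)
print(len(left), len(right))  # → 540 540
```

For film-era over/under, each half would also need its anamorphic squeeze undone before viewing.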

Avid’s current plan is for editors to edit away in 2D – only displaying what one of the two ‘eyes’ would see in the scene. Every once in a while, they could choose a special command that lets them review the cut in 3D.

Editing 3D will only become mainstream once the price of the camera systems comes down. The Fusion system uses two Sony F950s (so that’s over $230,000 just for the cameras). There is a system that 21st Century 3D have developed, but it isn’t for sale. They’re going the Panavision way and only making their technology available for hire – with mandatory employment of their staff to go along with the kit. They’ve taken a couple of Panasonic DVX100 SD cameras, synced them together, added 4:4:4 direct-to-storage recording and combined them in one 24lb package:
3D camera only available for hire from 21st Century 3D

Funnily enough, they also require that they are in on the editing of your production too. From their FAQ:

…there is more to the editing process than just matching all your cuts. It is also important to note that our 3DVX3 camera system records RAW CCD data that must be converted by 21st Century 3D in order to be edited in standard NLE software. 21st Century 3D does work with our clients who want to edit their own videos by providing 2D window dubs that you can edit. Send us your Final Cut Pro project file, an EDL or the window dub edit and we will conform your 3D show.

Can someone from 21st Century 3D come to my office and show me how to edit 3D videos?

Unfortunately no. 21st Century 3D utilizes techniques that are in some cases proprietary and have been developed over the course of years.

I suppose you could do it with multicam mode when editing, then place the sequence in a 48p sequence to view in 3D using an FxPlug plugin.

I’m surprised that companies such as 21st Century 3D think that it is possible to keep post-production secrets. It doesn’t sound like too much of a challenge to me, but maybe I haven’t thought it through. I wonder if the aesthetics of editing 3D can be kept secret too. People thought that editing with the 2.35:1 CinemaScope frame required a new visual language.

21st Century 3D believe that the best results come from having a large depth of field. They want to give the audience the choice of what to focus on. I think that cinematographers and editors have spent the last 100 years using depth of field and focus to direct the audience’s view. We should have a good idea of which part of the frame they are looking at. That determines the timing of the next shot – we need to know how long it takes for the audience to notice the edit and then search the new shot to find the most interesting thing to look at before we let new information be conveyed (a person’s expression changes, a bomb starts ticking). If we can still use framing, composition, sound, a shallow depth of field and focus to direct the audience’s eyes, we may need to take account of how much longer it takes people to find what we want them to look at when they are looking at 3D footage.

What else determines how we’ll be editing 3D footage?

Over at Norman Hollyn-Wood, Norman wrote about how directors aren’t usually the right people to edit their films. Scenes aren’t usually the problem. It’s structure.

If you write, shoot and direct your film, you sometimes cannot keep the version of the film in your head that actually exists. You remember what you planned. You remember the versions you liked, the versions that the studio liked. You want to believe what you hoped for is there on screen.

The one time that I saw Robert Rodriguez’s “Once Upon a Time in Mexico,” I couldn’t understand what was going on. I remember repeated quick-fire exposition scenes. The plot seemed complex, and I’m usually the one friends turn to for an explanation of what’s going on. Rodriguez may know how to edit a scene, but he was too close to the film to make the structure work. I think he thinks that there are scenes in the film that the rest of us never saw. He can understand the plot because he wrote the backstory and many unused scenes. I didn’t have access to any of that.

The editor is the one whose job it is to keep track of all that. It is their skill to watch the film each time as if it were the first. The problem is education. If you think that teaching people how to edit scenes is hard, just think about trying to teach people how to maintain the structure of whole films.

There are some director-editors who can watch their films as if it were the first time. I think Kevin Smith is a good editor for structure. That comes from his writing ability. He is a writer first, an editor second, and a director third. Not a bad order for the genres in which he works.

Scott Simmons has tackled the subject of the lack of post-production knowledge in up-and-coming editors in an article at studiodaily. It is couched in terms of ‘What’s wrong with the young FCP editor’ because ‘the young FCP editor’ is the current definition of the next generation of editors.

He enumerates the many technical failings of editors he has been coming across recently. There has been a lively debate in the comments section on that same page. I think the answer to his point has nothing to do with editing or technology. I think that the more people who enter a field of endeavour, the more likely you are to come up against the different ways people approach problems.

I know it was some U.S. politician who came up with the following, but it makes sense nonetheless: the distinction between ‘known unknowns’ and ‘unknown unknowns.’ Some people find out the minimum required to get the job done. Others understand the wider context and have a framework in which to place new knowledge. The first people to attempt to learn how to edit/shoot/write/fix cars/do DIY are those who put the time in and understand to some extent the magnitude of the job that they are taking on. As tools are developed that make it easier for more people to have a go, the second group gets involved.

All the gaps in knowledge that Scott pointed out were in the technical aspects of editing. I argue that technology isn’t editing. Technology is for assistant editors. These days budget restrictions mean that editors don’t get as much assistance as they used to, but I think that editors should know when they are assisting the edit and when they are editing.

My current definition of an assistant editor is the person who creates the environment in which the editor can edit. Why should today’s editors learn new technologies in the coming years? If they are well assisted, the environment in which they edit may be implemented in a different manner; that isn’t the business of the editor. They need to find people they trust to work with. They can concentrate on evolving the art of the edit, not on the evolution of technology.

So, in this case I think Scott is talking about new editors who can produce programmes on tape, disc or online that may seem well edited to audiences, but anyone with a deep understanding of the post-production process can see that their technical knowledge is weak. They need to be assistant editors as well as editors. Hopefully, once their artistry matters more than their technological understanding, they’ll be able to stop keeping up with technology and trade their storytelling knowledge with the next generation of assistant editors.

This link is nothing to do with the Hallmark holiday that’s coming up. It’s a coincidence…

I know that when two people talk to each other in the movies, they stand much closer than real people do and they don’t look where real people look, but it is still a good idea to notice the way real people act. We need to understand the non-verbal cues that communicate character and story. There is a presentation on flirting that is worth a look. Most of it is the same old Cosmo advice (you can certainly ignore almost everything after slide 28), but there is stuff in there for editors.

The presentation ‘slides’ are more like pages, so if your monitor is any smaller than 1200 pixels vertical resolution, you’ll have to read the text from the notes at the bottom of the page (which is the same as on the slides).

For example, it is a good idea for editors to follow eye-trace… We need to make sure that what actors are thinking and feeling is revealed by where they look. Even if it is for a few frames in a shot:
Excerpt from slide 8:

Once a conversation begins, it is normal for eye contact to be broken as the speaker looks away. In conversations, the person who is speaking looks away more than the person who is listening, and turn-taking is governed by a characteristic pattern of looking, eye contact and looking away. So, to signal that you have finished speaking and invite a response, you then look back at your target again.

Excerpt from slide 23:

The essence of a good conversation, and a successful flirtation, is reciprocity: give-and-take, sharing, exchange, with both parties contributing equally as talkers and as listeners. Achieving this reciprocity requires an understanding of the etiquette of turn-taking, knowing when to take your turn, as well as when and how to ‘yield the floor’ to your partner. So, how do you know when it is your turn to speak? Pauses are not necessarily an infallible guide – one study found that the length of the average pause during speech was 0.807 seconds, while the average pause between speakers was shorter, only 0.764 seconds. In other words, people clearly used signals other than pauses to indicate that they had finished speaking.

You’ll find all this in well-scripted, well-directed, well-rehearsed and well-acted rushes. However, as we editors are in the business of solving problems, it’s a good idea to have some social psychology resources to turn to – just in case.

We might also be able to make the connection that gets us the job in the first place…

Twenty years ago US network TV was edited on film, yet the deadlines were as scary as they are today.

Check out this interview with the post supervisor on Moonlighting. They were shooting on Monday morning for a show that went out on Tuesday night:

But this editor said it was the strangest experience for him because he came into work, edited all night, went home, went to sleep, woke up and turned on the TV, and it was on the air. It was as though the network had just plugged a big cable into the back of the moviola.

There are many other interesting things in this article, including the use of music, stunt doubles, ordering music by the yard and using the emotion of a scene to help the editor know who to favour. There is also a great deal on the editorial choices in the making of Boston Legal.

When I saw Transformers in the summer I had a good time. It was funnier than I expected, and my total lack of interest in the franchise up to that point wasn’t a problem. The effects were very good – I liked how grainy some of the shots were. It made the placement of the alien robots look more realistic. During some of the action scenes at the end it was sometimes difficult to follow who was punching who. The 20 metre wide image was too large. I knew that I would have a better chance to see what was going on later.

To provide some in-train entertainment on my trip over to Paris at Christmas, I copied the Transformers DVD onto my iPod. It was still good fun and passed two hours without a problem. During the action scenes there was a little too much detail for me to catch. The 4 cm wide image was too small.

A few days later I showed my father Transformers on my 1.2 metre wide TV. Like the porridge in Goldilocks’ emergency accommodation, the screen size was ‘just right.’

This shows that modern movies are designed for the home. The cinema release promotes the more profitable DVD. The shared experience of watching the film with a large audience made the emotional feedback stronger, but the spectacle worked better at home.

For now I’ll edit for the 20 metre screens…
