Archive

film making

Once post production work has to be done by more than one person at a time, the speed of the network becomes an important consideration. It is much more efficient if editors, motion graphics artists and colour graders can all access the same video source files and the most up-to-date edits. This is done with shared storage connected to the computers by a fast network.

People have also connected multiple computers together for many years to perform complex tasks. In post production, more and more of that computing is being done using advanced GPUs. Multiple computers combined to perform complex tasks together are known as render farms. The faster the connections between the computers, the better.

Current Mac Pros can have PCIe network cards installed, and those cards can also be used with Thunderbolt-equipped Macs via an expansion chassis. However, other Macs don’t have fast network connections built in and can’t use PCIe cards.

According to FAQ-MAC, a feature of Apple’s forthcoming Mac OS X 10.9 Mavericks might allow many more Macs to be used in simple render farms: IP over Thunderbolt.

They showed a dialog box (which they may have mocked up) that shows Mavericks asking whether a newly attached Thunderbolt cable should be used as a network connection:

[Screenshot: the Mavericks network interface dialog]

Detected a new network interface:
   Thunderbolt Bridge
Check that it is configured correctly, and then click Apply to activate.

Internet Protocol over Thunderbolt means that you can connect Macs via Thunderbolt cables and use the cable as a network connection. Thunderbolt 1 connections have a theoretical maximum transfer rate of 10 Gb/s – similar to 10 gigabit Ethernet, a popular post production networking standard.

I assume IP over Thunderbolt is less efficient than a dedicated Fibre Channel PCI Express card, but at least Thunderbolt is available on a wide range of Macs.
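As a rough back-of-envelope sketch (my numbers, not anything published), here is how transfer times for a batch of footage might compare, assuming real-world throughput is well below the theoretical maximum of each link:

```python
# Rough transfer-time estimates for moving media over a network link.
# The 60% efficiency figure is an assumption - real IP-over-Thunderbolt
# throughput hasn't been published - so treat these as ballpark numbers.

def transfer_minutes(file_gb, link_gbps, efficiency=0.6):
    """Estimate minutes to move file_gb gigabytes over a link_gbps link."""
    file_gigabits = file_gb * 8
    effective_gbps = link_gbps * efficiency
    return file_gigabits / effective_gbps / 60

for label, gbps in [("Gigabit Ethernet", 1), ("Thunderbolt 1 / 10GbE", 10)]:
    print(f"{label}: {transfer_minutes(100, gbps):.1f} minutes for 100 GB of footage")
```

Even with that pessimistic efficiency figure, a 100 GB batch of rushes drops from roughly 20 minutes on Gigabit Ethernet to a couple of minutes over Thunderbolt or 10GbE.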

With a little distributed rendering, my 27” iMac connected to a pair of Thunderbolt-equipped Macs will get through QuickTime encodes much more quickly.

Also, if I need to share 4K proxies with others, IP over Thunderbolt is good news.

Given that 3D is dying, the next great hope for film and TV seems to be UHD TV, Ultra High Definition TV.

Canal+ Spain dubbed today ‘4K Day’

Here is their 4K promotional video that I think was broadcast by satellite today and uploaded to YouTube.

If you have software that can download YouTube videos, you can get this footage if you want to practise your 4K post workflow.

For example, if you use Safari and have the ClickToPlugin Safari Extension, you should be able to select 4K MP4 from the invisible top-left pop-up menu and then download the 1.5GB file to your computer.

Here is an example of how much detail there is in a 4K frame that was encoded using the UHD-1 flavour of H.264 – 3840 x 2160 at 25fps. The 1.5GB MP4 file had an average data rate of 22 Mb/s.

Click it to see the pixels at 1:1.

[4K frame captured from satellite]

4K is big news for production designers and makeup artists!

[Second 4K frame captured from satellite]

Click to see at 3840 by 2160.
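Back to that 22 Mb/s figure: as a quick sanity check (my arithmetic, not anything Canal+ published), file size, average data rate and running time are tied together:

```python
# Relationship between file size, average data rate and duration.
# The 1.5 GB and 22 Mb/s figures come from the post; the implied running
# time and per-frame size are just arithmetic, not published numbers.

file_size_gb = 1.5
avg_rate_mbps = 22          # megabits per second
frame_rate = 25             # UHD-1 at 25fps

file_size_megabits = file_size_gb * 8 * 1000
duration_s = file_size_megabits / avg_rate_mbps
bits_per_frame = avg_rate_mbps * 1_000_000 / frame_rate

print(f"Implied duration: {duration_s / 60:.1f} minutes")
print(f"Average data per frame: {bits_per_frame / 8 / 1024:.0f} KB")
```

So the file implies a running time of a little over nine minutes, with each UHD frame getting only around 100 KB on average.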

If you visit Apple’s 2013 Mac Pro page, you’ll see a sneak peek of the Mac Pro being released later this year.

One feature of the page is a set of videos showing internal elements of the computer. In order to have a close look at how it’s put together, here are the videos as a single movie:

If you want to get the source video, visit its Vimeo page and click the Download button. This is helpful if you’d like to step through the video frame by frame.

Room for improvement?

As regards how upgradeable the computer will be, it might be almost as easy to modify as the old Mac Pro. Apple have already stated that memory and flash storage will be user-configurable. Based on stepping through the Apple video, it seems as if the central chassis is put together with hex screws, and the three main boards are attached with a few screws:

[Frame from Apple’s video: the Mac Pro’s graphics cards]

The base board is then attached to the three card sockets:

[Frame from Apple’s video: the Mac Pro base]

Perhaps Apple will offer a configuration of the Mac Pro with only one GPU card, and publish the specs for third parties to supply GPUs.

Configuring Mac Pros with alternate GPU cards will have to be done by confident engineers, but probably won’t require a visit to an Apple Store.

According to a ‘friend of a friend’ report from the Worldwide Developers Conference posted to the CGSociety forum:

OK – I have a friend the WWDC and he has asked a lot of questions to the right guy. The Graphics cards in this new macpro are swappable. But they are bespoke and a new form factor it seems. Ram / GPU and the Main drive is all updatable – it does seem that there is the possibility of installing 2 or more of these PCIe drives…

Future versions of the Mac Pro may have more space for GPU cards. With a slightly larger enclosure, there could be four instead of two:

[Illustration: three cards vs five cards]

Professionals welcome

With the announcement of this computer, there’s no doubt that Apple is still interested in professional markets. Despite the relatively limited opportunities for making billions in profits, they must see value in serving those who want the fastest personal computers in their offices.

The most distinctive feature of the sneak peek is the fact that Apple felt under enough pressure to pre-announce the computer at all. The Apple of 10 years ago would have created an Autumn 2013 event in Los Angeles featuring professionals from the film, TV and music industries extolling the virtues of Apple’s professional hardware and software solutions.

What else can we get them to do?

Watching ‘Hustle’ on the BBC this evening, I noticed a ‘good enough’ day-for-night shot.

It was made obvious by a transition directly from the day version of the setup:

To the same shot colour-corrected to look like night-time:

Click the shots to see bigger versions.

They used a flat, monotonous sky to pull a key, but they ended up leaving quite a lot of the tops of the trees floating in mid-air. Some of the house roof details vanished too.

They even added an owl hoot to the soundtrack to sell the idea. To imply that they had crossfaded between two shots, they moved the second shot down a little so that the whole image changed.

For budding colourists: you can use these images as before/after references on how to change a day shot to look as if it were shot at night.

If you’re in the UK, you can see the original episode for the next few weeks. Spool to 27:51.

Don’t give actors undoable emotional directives such as: “Be disappointed.” You are almost guaranteed an insincere result.
[…]
An excellent way of expressing an action, however, is to prompt the actor to focus on how they want the other person to feel.
Paul Newman once said the best direction he ever got was: “Crowd the guy.”

Notes on Directing

In which I transcribe some notes I took at a trade fair about how the BBC works with surround sound in their TV productions.

At the Broadcast Video Expo today I heard some useful tidbits from Chris Graver, a dubbing mixer with the BBC. He presented a seminar on 5.1 sound.

– Although you might see multiple speakers along the side walls of some cinemas, the sound usually is still 5.1. The same signal is being sent to most of the speakers.

– The six channels of sound are known as ‘5.1’ because the sixth channel (for low frequency effects) has a tenth of the bandwidth of the first five. L, C, R, Ls and Rs have a frequency range of 20-20,000Hz; LFE has a range of 0-200Hz. This channel isn’t usually included in the stereo downmix (see the downmix sketch below).

– 5.1 ambience shouldn’t be noticeable enough so that people keep looking over their shoulders – you want them looking at the screen.
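As an aside (this sketch is mine, not something from Chris Graver’s seminar), a common Lo/Ro-style stereo downmix folds the centre and surround channels into left and right at reduced gain and discards the LFE, which is why that channel rarely survives into the stereo version:

```python
# A simple Lo/Ro-style stereo downmix of one set of 5.1 sample values.
# The -3dB (0.707) centre and surround gains are typical defaults, but
# broadcasters specify their own coefficients; treat these as illustrative.

def downmix_lo_ro(l, r, c, lfe, ls, rs, centre_gain=0.707, surround_gain=0.707):
    """Fold a 5.1 sample into stereo; the LFE channel is discarded."""
    lo = l + centre_gain * c + surround_gain * ls
    ro = r + centre_gain * c + surround_gain * rs
    return lo, ro

print(downmix_lo_ro(0.2, 0.1, 0.5, 0.9, 0.05, 0.05))
```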

Dolby E

– A transportation encoding, not for consumer use. As some delivery media only have four channels of audio, the first two are used for the stereo downmix, while channels 3 and 4 hold an encoded Dolby E soundtrack. This soundtrack delivers the 5.1 channels to the distribution systems of the broadcaster/publisher.

– Dolby E has a 1 frame encoding delay. This means you must advance your Dolby E soundtrack by 1 frame to match picture and the stereo downmix.

– Dolby E has a 1 frame decoding delay too, but most decks can be set to account for this delay. A minority of broadcasters requiring Dolby E encoded 5.1 expect it to be advanced an extra frame (for a total of two) to sync with picture and stereo.

– The BBC and others will fail your programme in tech review if you do not add the metadata tag stating the correct Average Dialogue Level. In worst-case scenarios, consumer kit might filter out some of the dialogue bandwidth if it doesn’t know the extent of its range.

– Make sure you use the correct track order before encoding as Dolby E: L, R, C, LFE, Ls, Rs – otherwise you might get your Back Right channel filtered as if it were the LFE channel! (See the sketch below for the order and the one-frame advance.)
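Here is a minimal sketch of those last points (my illustration, nothing from the talk), assuming 48kHz audio and 25fps video: reordering 5.1 stems into the expected track order and working out what a one-frame advance means in samples.

```python
# Reorder 5.1 stems into the L, R, C, LFE, Ls, Rs order expected before
# Dolby E encoding, and compute a one-frame advance in audio samples.
# Assumes 48kHz audio and 25fps video; the stem data here is just empty lists.

DOLBY_E_ORDER = ["L", "R", "C", "LFE", "Ls", "Rs"]

def reorder_for_dolby_e(stems):
    """stems: dict mapping channel name to its audio samples."""
    return [stems[name] for name in DOLBY_E_ORDER]

def one_frame_in_samples(sample_rate=48_000, fps=25):
    return sample_rate // fps

stems = {name: [] for name in ["L", "C", "R", "Ls", "Rs", "LFE"]}  # film-style order
ordered = reorder_for_dolby_e(stems)
print(f"One frame of advance at 25fps is {one_frame_in_samples()} samples at 48kHz")
```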

In which I take an Apple patent and suggest that it could form the basis of a new collaborative on-location application for the cloud, iPhone and iPod Touch for TV and film makers.

Storyboards are fine in principle, but crews need to use enough setups to cover enough angles to capture the drama so that directors and editors can later tell the story in ways they didn’t plan.

The recent patent granted to Apple is more about shoot planning than storyboarding. Instead of creating a comic-book simulation of a potential film, it helps movie makers plan how to cover the action in a scene.

[Figure from Apple’s scene-planning patent]

In a potential ProApps product, Apple imagine using the script to plan where characters will stand, how they’ll move and where the camera will be to film it, and possibly where the camera will be when getting different close-up, medium and wide shots.

Another aspect of this patent (according to the text at the World Intellectual Property Organization) implies that the output of this system wouldn’t be paper printouts to go with script sides. As at least two of the authors are from Apple’s iPhone team, maybe this system is about creating and maintaining a model for how production will proceed.
[Mock-up: a shoot-planning app on an iPhone]
A model that location managers, art directors, set dressers, continuity people, crew, caterers, actors and the post-production team will have continual access to using digital technology – on browsers and iPhones (which may be in Airplane Mode some of the time).

This tool should have post-production uses too. It might replace lined scripts. For an explanation of lined scripts (and how they are used with Avid’s ScriptSync feature), read Oliver Peters’ article on his blog.

Instead of lines showing the number of setups and takes being written on the script, the editor will be able to look at the footage captured in the context of the scene in 3D space. It’s interesting that Apple might now attempt to introduce new organisational techniques that supplant the methods used over the last 75 years.

As an aside, this is the first patent that reminds me of a book. If it comes to pass, this system will help you plan your film following the tenets of Daniel Arijon’s Grammar of the Film Language – a useful director’s text from 1976 (check out the positive reviews on Amazon).

15 years ago I wrote an essay called “What if Media was Media?” It was based around an idea that might interest others, but I wasn’t sure what to do about it. As I wasn’t on the internet back then, all I could do was print it out and give it to a few people who might be able to help me…

The core point was that people may come to understand copyright more deeply because computer file formats would have layers of rights information built in. In 1994, people hardly ever referred to the contents of computer files as ‘media.’ I was imagining a system where all movies, TV, radio and music were created, distributed and delivered in digital forms.

I saw that the flexibility of digital media would make it much easier for old-fashioned media to be copied. To facilitate ubiquitous distribution, I thought it would be interesting if the file format itself included information on the rights-holders.

Imagine buying a video camera: before you first use it, you enter unique contact information (possibly pointing to a .tel registry entry). The camera would then encode your ID into all the footage you shoot. You might even choose a default copyright statement too: ‘©2009 Alex Gollner – For rights see fee table at alex4d.tel’
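To make the idea concrete, here is a hypothetical sketch of what that embedded rights information might look like. None of this is a real camera or file format; the field names are invented for illustration:

```python
# Hypothetical rights metadata a camera could embed in every clip it records.
# Field names and values are invented for illustration - no camera or file
# format described here actually exists.

import json
from datetime import datetime, timezone

def rights_record(owner_id, rights_url, statement):
    return {
        "owner_id": owner_id,                      # e.g. a .tel registry entry
        "rights_url": rights_url,                  # where the fee table lives
        "copyright": statement,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

clip_rights = rights_record(
    owner_id="alex4d.tel",
    rights_url="http://alex4d.tel",
    statement="©2009 Alex Gollner – For rights see fee table at alex4d.tel",
)
print(json.dumps(clip_rights, ensure_ascii=False, indent=2))
```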

Once the rights information is included with the footage, then every time the footage is played elsewhere, the playback software would determine whether the person watching wants to pay a one-off fee or a licence to watch as many times as they want. Of course they could get an advertiser to pay on their behalf:


An imaginary ‘media payment preferences’ control.

They would also choose whether they want to watch on their own, or play it to larger audiences:

The system could also take into account times when footage is incorporated into other productions. If you witnessed the feel-good story of the week – when a talented and brave airline pilot saved passengers and crew by landing his stricken plane on the Hudson – and shot footage that news organisations all over the world wanted to show, they could upload it from your camera. If media rights were encoded into the file, each time the news item is shown on TV, from an archive, streamed on a corporate website or even embedded elsewhere, you would get a cut of the fees paid.

It’s a dilemma. On one hand ‘the little guy’ would automatically get paid. On the other, everyone who has a camera pointed at them will want to know what’s in it for them…

This is an example of what I’ve been talking about. A Twitter thought leads to a blog post… or two.

When I woke, my guest was watching TV. Part of the show was an interview with a French person, whose voice was dubbed. As I know a little French and my friend is fluent, it was a pity that we couldn’t hear what he had to say while reading subtitles if needed.

It seems that dubbing foreign speech has become much more common than subtitling in the last 10 years. This is true of even the most highbrow TV news programmes. In 1995 they would have subtitled non-English speech. Now they hardly ever do.

There are two possible explanations: TV producers and news editors think audiences are put off by subtitles, or subtitling technology hasn’t kept up with the world of simpler post production in the way that dubbing has.

I’d like to assume the latter for the moment. What is it about subtitling that makes it more difficult to organise than dubbing? It isn’t too difficult to get a simultaneous translator to translate and speak at the same time, whereas producing well-written and well-timed subtitles is hard.

For live TV, there is an interesting solution. Subtitle describers are employed to repeat what people are saying, and what sounds can be heard, into a speech recognition package, which produces subtitles for those who turn them on using their remote controls. All non-satellite TV channels have subtitles on 97% of all shows; this is how they provide the service.

This points up that editing software should not treat subtitling as an effect that is laid on top of video at some point, or only implemented when making a DVD. Maybe it is time that script, music and sound effect information was associated directly with audio clips so that a scratch subtitle track could automatically be generated. Then professional summarisers and designers would clean it up before the production is delivered online, on DVD or broadcast.
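To illustrate what a scratch subtitle track generated from clip metadata might look like, here is a minimal sketch that writes clip text and timings out as a SubRip (.srt) file. The clips are invented; a real tool would pull text and in/out points from the edit:

```python
# Generate a scratch SubRip (.srt) subtitle file from clip text and timings.
# The clips below are invented; a real tool would pull text and in/out
# points from the edit's clip metadata.

def srt_timestamp(seconds):
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def write_srt(clips, path="scratch_subs.srt"):
    with open(path, "w", encoding="utf-8") as f:
        for i, (start, end, text) in enumerate(clips, start=1):
            f.write(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{text}\n\n")

write_srt([
    (12.0, 15.5, "It seems dubbing has become more common than subtitles."),
    (16.0, 19.0, "In 1995 they would have subtitled non-English speech."),
])
```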

A thought leads to a blog post… or two

So the original thought was “With people speaking foreign languages, over the last twenty years the technology of subtitling has fallen behind dubbing. A pity.” – which is what I posted to Twitter 11 hours ago, before going out and having a great day in London. I didn’t think about it until I got back a short time ago and saw that Matt Davis had written a blog post partially inspired by my tweet.

[Matt Davis’ Twitter profile picture]
His idea is much bigger than mine – maybe leading to a whole new medium for a social media platform to share and discuss. That’s why you should check it out (and follow Matt on Twitter if you like, or subscribe to his blog feed).

I then decided to write a post about Twitter and blogs, which meant turning my initial thought into (almost) an idea.

This is how Twitter and blogs can work.

Given the nature of modern production techniques, I wonder if the jobs of second assistant camera and apprentice editor might be combined in the near future. Any problems with digital files need to be caught at whatever stage in the process they appear.

This could be the job: loading the solid-state memory into the camera, where once the camera would need unexposed negative in 1,000 ft reels. Once the memory is used up, the loader needs to copy it to a computer or separate storage device, and copy a backup onto a different device. Once these copies are demonstrably OK, the storage device is erased in preparation for loading into the camera again. Then the loader could go to the editing system and load the footage onto the computer.
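Here is a minimal sketch of that ‘demonstrably OK’ step, assuming the loader copies the card to two destinations and compares checksums against the originals before anything is wiped. The paths and details are illustrative:

```python
# Verify that copies of a camera card match the original before erasing it.
# Paths and layout are illustrative; a real loader tool would also log
# results per clip and per take.

import hashlib
from pathlib import Path

def file_checksum(path, algo="md5"):
    h = hashlib.new(algo)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            h.update(chunk)
    return h.hexdigest()

def copies_match(card_dir, copy_dirs):
    card_dir = Path(card_dir)
    for clip in card_dir.rglob("*"):
        if not clip.is_file():
            continue
        original = file_checksum(clip)
        for copy_dir in copy_dirs:
            duplicate = Path(copy_dir) / clip.relative_to(card_dir)
            if not duplicate.exists() or file_checksum(duplicate) != original:
                return False  # do not erase the card
    return True

# Example (paths are illustrative):
# ok = copies_match("/Volumes/CARD_A001", ["/Volumes/RAID/A001", "/Volumes/Backup/A001"])
```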

The reason why it could be a good idea to use the same person would be to safeguard the information associated with each take. The setup number, take, camera number, frame rate, scene name and timecode can be incorporated into each digital file from capture to final grading.

Due to scheduling and budget considerations, there are no apprentice editors and few second assistant camera people on many productions. It’s up to the editing and camera teams to work together as if they were combined into one person, especially as it is the job of the assisting team to create the environment for the editor to edit and make artistic decisions – to make sure they need not know the ins and outs of the newest software upgrades and bugfixes from Apple, Avid, Adobe, Panasonic and Red.