3D editing secrets
Yesterday, the Hollywood Reporter announced that Avid are researching ways to make their products work with 3D footage. I would characterise the kind of footage they mean as being ‘2.5D’ – two cameras shoot simultaneously from slightly different positions to simulate human stereoscopic vision.
The article refers to the ‘Over and under’ 3D technique. In the days of film, that meant that each frame of celluloid had two slightly different images – anamorphically squeezed so one appeared above the other. These days it probably means that each moment in time is represented by two pictures in a single file, i.e. at 01:04:25:16 in the media file there are two images – one for the left eye, one for the right.
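To make the over/under idea concrete, here is a minimal sketch of unpacking such a frame – assuming the top half holds the left eye and the bottom half the right, each squeezed to half height (the exact layout is a convention I’m assuming, not something the article specifies):

```python
import numpy as np

def split_over_under(frame):
    """Split an over/under 3D frame into left- and right-eye images.
    Assumes left eye on top, right eye below, each squeezed 2:1 vertically."""
    h = frame.shape[0] // 2
    left_squeezed = frame[:h]    # top half
    right_squeezed = frame[h:]   # bottom half
    # Undo the 2:1 vertical squeeze by doubling each scan line.
    left = np.repeat(left_squeezed, 2, axis=0)
    right = np.repeat(right_squeezed, 2, axis=0)
    return left, right

# Dummy 480-line frame: top half all 1s (left eye), bottom half all 2s.
frame = np.vstack([np.ones((240, 720, 3), np.uint8),
                   np.full((240, 720, 3), 2, np.uint8)])
left, right = split_over_under(frame)
print(left.shape, right.shape)  # (480, 720, 3) (480, 720, 3)
```

Both eyes travel through the pipeline in one file, so a single timecode address always refers to a matched stereo pair.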
Avid’s current plan is for editors to edit away in 2D – only displaying what one of the two ‘eyes’ would see in the scene. Every once in a while, they could choose a special command that lets them review the cut in 3D.
Editing 3D will only become mainstream once the price of the camera systems comes down. The Fusion system uses two Sony F950s (so that’s over $230,000 just for the cameras). There is a system that 21st Century 3D have developed, but it isn’t for sale. They’re going the Panavision way and only making their technology available via hire – with mandatory employment of their staff to go along with the kit. They’ve taken a couple of Panasonic DVX100 SD cameras, synced them together, added 4:4:4 direct-to-storage recording and combined them in one 24lb package:
Funnily enough, they also require that they are in on the editing of your production. From their FAQ:
…there is more to the editing process than just matching all your cuts. It is also important to note that our 3DVX3 camera system records RAW CCD data that must be converted by 21st Century 3D in order to be edited in standard NLE software. 21st Century 3D does work with our clients who want to edit their own videos by providing 2D window dubs that you can edit. Send us your Final Cut Pro project file, an EDL or the window dub edit and we will conform your 3D show.
Can someone from 21st Century 3D come to my office and show me how to edit 3D videos?
Unfortunately no. 21st Century 3D utilizes techniques that are in some cases proprietary and have been developed over the course of years.
I suppose you could do it with multicam mode when editing, then place the result in a 48p sequence and view it in 3D using an FxPlug plug-in.
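The 48p idea above amounts to alternating left- and right-eye frames in a single double-rate sequence. A minimal sketch of that interleave (frame objects are just labels here – an NLE would be shuffling real media, of course):

```python
def interleave_48p(left_frames, right_frames):
    """Interleave two 24p eye streams into one 48p sequence: L, R, L, R…
    Each eye still plays at 24fps; the combined stream runs at 48fps."""
    assert len(left_frames) == len(right_frames), "eye streams must match"
    out = []
    for l, r in zip(left_frames, right_frames):
        out.append(l)
        out.append(r)
    return out

seq = interleave_48p(['L0', 'L1'], ['R0', 'R1'])
print(seq)  # ['L0', 'R0', 'L1', 'R1']
```

A display or plug-in that routes odd frames to one eye and even frames to the other could then reconstruct the stereo view.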
I’m surprised that companies such as 21st Century 3D think that it is possible to keep post-production secrets. It doesn’t sound like too much of a challenge to me, but maybe I haven’t thought it through. I wonder if the aesthetics of editing 3D can be kept secret too. People thought that editing for CinemaScope’s 2.35:1 frame required a new visual language.
21st Century 3D believe that the best results come from having a large depth of field. They want to give the audience the choice of what to focus on. I think that cinematographers and editors have spent the last 100 years using depth of field and focus to direct the audience’s view. We should have a good idea of which part of the frame they are looking at. That determines the timing of the next shot – we need to know how long it takes for the audience to notice the edit and then search the new shot to find the most interesting thing to look at before we let new information be conveyed (a person’s expression changes, a bomb starts ticking). If we can still use framing, composition, sound, a shallow depth of field and focus to direct the audience’s eyes, we may need to take account of how much longer it takes for people to find what we want them to look at if they are looking at 3D footage.
What else determines how we’ll be editing 3D footage?
I already tried this with Final Cut Pro, After Effects and the $300 Matrox TripleHead2Go. The only problem with Final Cut Pro is that it only works when I stop the playhead: I can get 3D by splitting the side-by-side footage and sending it to two projectors. The same is the case with After Effects. The problem is that when I hit play, the display only outputs a single image stretched across the two monitors.
There is only a small workaround needed from Adobe/Apple to view polarized 3D with the $300 Matrox TripleHead2Go using the side-by-side technique. More information in the coming days.
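The split the commenter describes – side-by-side footage cut into two feeds, one per projector – is simple in principle. A minimal sketch, assuming the left eye occupies the left half of the frame:

```python
import numpy as np

def split_side_by_side(frame):
    """Split a side-by-side 3D frame into two half-width images,
    one for each projector (assumes left eye on the left half)."""
    w = frame.shape[1] // 2
    return frame[:, :w], frame[:, w:]

# Dummy frame: left half all 1s, right half all 2s.
frame = np.hstack([np.ones((480, 640, 3), np.uint8),
                   np.full((480, 640, 3), 2, np.uint8)])
left, right = split_side_by_side(frame)
print(left.shape, right.shape)  # (480, 640, 3) (480, 640, 3)
```

Doing this per frame during playback – rather than only on a paused frame – is exactly the gap the commenter says needs a workaround from Adobe/Apple.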
I didn’t think of using a Mac for projection. I suppose you could use multi-screen multi-computer synced systems such as Dataton Watchout ( http://www.dataton.com/watchout ) and Vista Spyder ( http://www.vistasystems.net/what_is_spyder/features.asp ) for stereoscopic projection.
I was thinking mainly about the post element of 3D production. Real D and Disney Digital 3D require projectors that can run at 144 frames per second!
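The 144 figure isn’t arbitrary: single-projector digital 3D alternates the two eyes and shows each 24fps frame three times per eye (triple-flashing) to suppress flicker. The arithmetic:

```python
film_rate = 24   # frames per second, per eye
eyes = 2         # left and right images alternate through one projector
flashes = 3      # each frame is triple-flashed to reduce flicker
projector_rate = film_rate * eyes * flashes
print(projector_rate)  # 144
```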
I agree the Vista stereoscopic 3d player would be a good option – http://vistasystems.net/stereoscopic-3D-imaging.asp
Why did you use a triple head to go as opposed to the double?
> Why did you use a triple head to go as opposed to the double?
I guess he used the triple version instead of the dual because only the triple version has DVI inputs and outputs – of course, one may use the triple version with only two displays/beamers.
I used Triple Head because of DVI out / in.
In your post, you said they are throwing out 100 years of editing techniques. This is true, but you have to realize there is a fundamental difference in the way the human brain works in stereo vision and in 2D. A new cinematic language must be learned. Techniques such as depth of field and directing the eye with a rack focus work in 2D because the brain accepts them in 2D and finds them pleasing. This is not the case with stereo vision. When you look at a scene with shallow depth of field, your brain and eyes will attempt to focus it out. When they can’t, because the focus is “baked” into the film, this causes eyestrain, fatigue and nausea. It’s fighting the human condition to let the eye wander and resolve the image. In 3D we are fooling the brain into perceiving something that is not real as real, and if your eyes can’t wander the frame and choose to focus on far, near and middle, then you as a storyteller have lost the battle, because you’ve lost your audience and made the experience very unpleasant.
I think only time will tell whether a new cinematic language will need to be learned. It took a little time for people to get used to seeing 2D film a hundred years ago. There were reports and concerns of the same physiological problems back then. These days, raising such concerns would bring derision and laughter, because we’ve been raised watching video since childhood.
The brain and mind are tremendously flexible and adaptable. This is reinforced by 19th and 20th century experiments with *radically* altering vision (e.g. using glasses that turn everything upside down), which consistently demonstrate that the brain *completely* adapts to radical changes in vision in ten days (mean).
This, and my own 3D experiences, lead me not to see a fundamental difference in brain operation between 2D and 3D. I believe that once people get used to 3D films, the “2D” techniques will be back in play and the concern over nausea, etc. will be forgotten, just as the same concerns with 2D film (last century) have been forgotten.
This is a very good blog for us editing guys.
If you have time, then please say:
“Which is a good computer system configuration for film editing?”
Quantel has their standalone dedicated 3D editing system: surely another is near…
Please, at a huge savings over Q?
Instead of 2D transitions – start imagining 3D transitions.
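One way to picture a “3D transition”: a dissolve that also eases the incoming shot’s depth in, rather than letting disparity jump at the cut. A minimal sketch, assuming a crude model where horizontal disparity is applied by shifting the right-eye image (the disparity value and the function name are illustrative, not anyone’s published technique):

```python
import numpy as np

def stereo_dissolve(a_left, a_right, b_left, b_right, t, b_disparity=6):
    """Blend stereo pair A into stereo pair B at dissolve position t in [0, 1],
    ramping shot B's horizontal disparity up from zero so depth eases in."""
    shift = int(round(t * b_disparity))
    b_right_shifted = np.roll(b_right, shift, axis=1)  # crude disparity model
    left = ((1 - t) * a_left + t * b_left).astype(np.uint8)
    right = ((1 - t) * a_right + t * b_right_shifted).astype(np.uint8)
    return left, right

# Halfway through a dissolve from a black pair to a grey pair.
a_l = a_r = np.zeros((4, 8, 3), np.uint8)
b_l = b_r = np.full((4, 8, 3), 200, np.uint8)
l, r = stereo_dissolve(a_l, a_r, b_l, b_r, t=0.5)
print(int(l[0, 0, 0]))  # 100
```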
The other consideration is based on this observation: I saw The Ant Bully in 3D. The use of stereographics was terrific until the end credits, when I nearly puked from motion-related nausea.
If the camera swings violently in an immersive medium such as 3D while the inner ear is reporting no movement, then the brain’s conclusion is that there has been a poisoning, and the best remedy is to get rid of it quickly. (No other explanations are tried, as they would either get the organism killed or be ineffective.)
Unlike the lens and prism experiments of the past, the audience will not get 30 or 40 hours to adapt, they will have a few seconds to a few minutes. Unlike the participants in the vision experiments, they are not volunteers.
Finally, the point of stereo vision is to enhance the recognition of elements in a confusing visual space. This is what stereograms depend on.
Pretty interesting 3D rigs and post production tools are offered by 3ality digital http://www.3ality-digital.com.
Can anybody tell me how much time is required to become an expert in 3ds Max?
I’ve tried out all the 3D editing systems, and the one that worked best for me was Stereo3D Toolbox. It’s a plug-in for Final Cut Pro.
There is no need to use a Mac for projection. But all the others are good.