Category Archives: Uncategorized

Duran Duran Mega Shoot – Berlin – Almost!

The Berlin Duran Duran shoot was quite an adventure that twisted and turned this way and that. The plan was to shoot a Duran Duran concert using a range of Sony Super 35mm camcorders. However, in the days running up to the shoot the band had been forced to postpone some other gigs due to illness. On the Monday before the shoot we were all sat at home waiting for the go/no-go phone call from the producers. The call came at 9pm: we were go. So first thing Tuesday I was off to the airport with 75kg of kit to fly out to Berlin, with the very real threat of either Heathrow or Berlin airport being shut down by volcanic ash from Iceland. In the end my flight left 20 minutes early, and the plane was backing away from the gate well before everyone had taken their seats in a mad dash to get to Berlin before the airspace was closed.

The first of the cameras arrives.

Den Lennie (of F-Stop Academy) and I made up the advance party. We got into Berlin OK and spent Tuesday collecting some of the rented and borrowed kit and getting it into the venue.
However, by Wednesday morning the whole shoot was turning into a serious challenge, as Berlin airport was closed by the ash cloud from the Icelandic volcano just as key members of the crew, including Gavin the director and James the producer, were due to fly in. They ended up flying to Dusseldorf and taking the train up to Berlin. In addition some of our rented kit was delayed, as were the stage and rigging crew, so everyone was running behind, frantically trying to source more kit locally. We have to say a BIG thank you to FGV Schmidle in Berlin, who went out of their way to help us out.

 

5 of the 6 PMW-F3's awaiting setup.

We had 6 F3’s, 2 FS100’s, the SRW9000PL and an EX3. The EX3 was going to be used on the back of a Canon HJ21x7.5 Cinestyle lens with a 2x extender from the back of the venue, to get some close up shots that we just could not get with any of the PL mount lenses we had on the 35mm sensor cameras. Long, fast 35mm lenses are few and far between.

 

To get the look that we wanted, the cameras were all set up with custom picture profiles. I designed a picture profile for the F3’s that would give maximum latitude to help handle the high contrast range that the concert lighting would bring, as well as de-saturating the image to prevent the coloured lights from clipping and so give more scope for grading and post work. Detail correction was set up to give a small amount of very fine detail boost to keep the images crisp without looking like video.

Optimo 24-290 Zoom Lens on F3

 

 

Two of the Sony PMW-F3’s were kitted out with Angenieux Optimo 24-290 T2.8 lenses and pre-production Zacuto EVF’s. What a gorgeous lens, and the EVF’s aren’t bad either! Hopefully I’ll get more time to play with both of these in the future, and a review of the EVF’s should come soon. The Optimos allowed us to get beautiful mid and close up shots from the venue sides with nice bokeh and super shallow DoF. At the rear of the venue, alongside the EX3, we had an F3 with an Angenieux Optimo 15-40 on a track to shoot wide shots of the stage through the crowds. The remaining F3’s were to be used with Nikon DSLR lenses in the 75 to 300mm range via MTF adapters (thanks Mike) and a prototype Adaptimax adapter (thanks Steve). These F3’s were going to go on tracks at the front of the stage or on the stage wings to pick off close ups of instruments and band members. We also had a pair of Sony MC1P mini-cams, but we could not rig these until the stage crew arrived, and we weren’t expecting them until early on Thursday morning, the day of the shoot. The FS100’s would be on stage, hand held and on tracks, using prototype Birger mounts and Canon L series lenses.

Then the bombshell dropped. The event was postponed. The lead singer, Simon LeBon, had been suffering from laryngitis and still wasn’t well enough to sing. So the remainder of the evening was spent packing all the kit away and rebooking flights and schedules. The concert will now be held on the 8th of June, again in Berlin. I’m going to be flying back to London from Cinegear and a 3D event at Samy’s Camera on the 6th, passing through London (3 hours between flights) on the 7th, where I will pick up my F3 kit and then travel on to Berlin, where we will once again try to complete the shoot. Photos and more gear porn to follow.

Testing, testing…. Canon 800/1600 mm lens on F3

Canon 800mm lens on PMW-F3

In preparation for the big Duran Duran shoot in Berlin later in the week I was out with Den Lennie of F-Stop Academy, along with Duran Duran video producer Gavin Elder and James Tonkin of Hangman Studios, testing the Canon 800/1600mm f5.6 lens on my F3. This is an adapted DSLR lens fitted with a PL mount. What a lens! The bokeh was simply gorgeous and I’m really excited about putting it to use in Berlin on Thursday night. Stay tuned for more info on this BIG project shooting with F3’s, FS100’s and the SRW9000PL. We’re even throwing in a VG10 or two for good measure! Nine cameras in total; ultra shallow DoF is the goal. It’s going to be hard to do, but it should look awesome.

PMW-F3 and FS100 Pixel Count Revealed.

This came up over on DVInfo.

An F3 user was given access to the service manual to remove a stuck pixel on their F3. The service manual reveals that you can address pixels manually to mask them. Pixel positions run from 1 to 2468 horizontally and 1 to 1398 vertically. This ties in nicely with the published specification of the F3 at 3.45 million pixels.
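For anyone who wants to check the arithmetic, it’s a one-liner (Python, purely illustrative):

```python
# Sanity check: the addressable pixel range found in the F3 service manual
# (1-2468 horizontal, 1-1398 vertical) versus Sony's published 3.45 MP figure.
h, v = 2468, 1398
total = h * v
print(f"{h} x {v} = {total:,} pixels (~{total / 1e6:.2f} million)")
# -> 2468 x 1398 = 3,450,264 pixels (~3.45 million)
```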

At the LLB (Sound, Light and Vision) trade fair in Stockholm this week we had an SRW9000PL and a PMW-F3 side by side on the stand, both connected to matching monitors. After changing a couple of basic Picture Profile settings on the F3 (Cinegamma 1, Cinema Matrix), just looking at the monitors it was impossible to tell which was which.

Sonnet QIO Review – Really, really fast!

Sonnet QIO

I had heard about the QIO some time ago, so I approached Sonnet to see if I could borrow a unit to review, and I was given the loan of one at NAB. I have been playing with it since then and you know what, it’s a great device. So what exactly is it? It is an extension box that allows you to connect a range of peripherals and flash memory cards to your computer via the high speed PCIe bus. The reason I wanted to borrow one was because the QIO is one of the few devices (the only device?) that allows you to connect SxS, Compact Flash and P2 cards to a computer over PCIe with hot-swap functionality. Hot swap means you can eject and remove cards without having to re-boot the computer or do anything else, something that some of the other adapters on the market force you to do.

PCI-E extension board.

Installation was very straightforward. On my Mac Pro workstation I had to plug a small PCIe card into one of the vacant slots inside the rear of the machine. This is easy to do and should not put anyone off buying the device; it took me all of 5 minutes to plug the card in and install the drivers. Then a short cable runs from the back of the Mac Pro to the QIO, and a separate power supply is plugged into the QIO for power.

 

 

 

On my Mac Book Pro I simply slotted the Sonnet express card PCI bus expansion adapter into the express card slot and then connected this to the main QIO unit via the extension cable and installed the drivers, again a 5 minute job, very simple.

PCI-E Express Card Slot adapter

If you do want to use it with a MacBook Pro, you will need a model that has the express card slot. At the time of writing the device only works with Macs, but Windows support should be coming very soon. There are two versions of the QIO: the desktop version, supplied with the desktop adapter, and the laptop version with the express card slot adapter. The functionality is the same for both; it’s just a case of which adapter you need. Should you want both, you can buy the alternate adapter as an accessory.

So, I have it installed, how is it to use?

It’s really extremely straightforward. You simply pop your media into the slot and away you go. When you’re done with that card you eject it as you would any other removable media and stick in the next card. On the workstation this was so much better than plugging in my XDCAM camcorder via USB.

Of course convenience is one thing, but how about performance? The QIO is fast, very fast. I was able to offload a full 16GB SxS card to the internal drive of the Mac Pro in about 150 seconds, less than 3 minutes. That equates to an hour’s worth of XDCAM EX material in around 3 minutes, or roughly 20x real time. The performance with Compact Flash cards doesn’t disappoint either, at around 15 seconds per GB, so clearly the transfer speed is limited by the speed of the CF card and not the connection, as it would be with USB or FireWire. If you want to use the QIO for SD cards you can use the supplied adapter. Again the performance is very good, though not as good as SxS and CF, due mainly to the lower speeds of the SD cards themselves.

Laptop Performance and Expansion.

One of the issues with laptops is how you expand them. It’s all very well being able to put an SxS card into the express card slot for fast offload, but where do you then put the material? On a MacBook Pro you do have FireWire 800, but this is still nowhere near as fast as the SxS card, and as the SxS card is occupying the express card slot you can’t use it to add an eSATA drive, so you’re a little stuck. But not with the QIO. The QIO has a built-in eSATA controller and 4 eSATA connectors on its rear. This means you can plug one or more eSATA drives into the QIO and transfer directly from the SxS card to those drives. So even on my MacBook Pro I can make multiple eSATA copies of my media at speeds of up to 200MB/s (total). Once again the speed is usually limited by the card and not the interface.

Torture Test:

For a real torture test I put two full 16GB SxS cards into the QIO and offloaded both cards at the same time to the Mac Pro’s raid drive. Where one card had taken a little under 3 minutes, two cards took about 190 seconds, just a little over 3 minutes. Transferred this way, two cards at a time, you could offload 2 hours of XDCAM EX material in around 4 minutes, an incredible 30x real time. I tried the same test with CF cards and again there was little difference in transfer speed between one card and two.
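Just as a back-of-envelope check on those numbers (Python, taking 1GB as 1024MB, so approximate):

```python
# Rough throughput figures from the transfer times quoted above.
# 1 GB is taken as 1024 MB; these are estimates, not measured values.
single = 16 * 1024 / 150      # one 16 GB SxS card offloaded in ~150 s
dual   = 32 * 1024 / 190      # two 16 GB cards offloaded together in ~190 s
print(f"single card: ~{single:.0f} MB/s")   # ~109 MB/s
print(f"dual cards:  ~{dual:.0f} MB/s")     # ~172 MB/s aggregate

# Two hours of XDCAM EX offloaded in about 4 minutes:
print(f"~{120 / 4:.0f}x real time")         # ~30x
```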

Conclusions:

This is one fast device. If you have lots of media to offload and back up it’s going to save you a lot of time. If you are a production company that works with large volumes of solid state media it will pay for itself very quickly in saved man-hours. If you’re working in the field with a MacBook Pro, the ability to connect both the media and eSATA devices at the same time makes the QIO a very interesting proposition. It is well constructed and simple to install and use. What more could you ask for?

Value for money?

That’s a little harder to answer; it depends on how much material you work with. It’s a fairly pricey device at around $800 US or £700 GBP for a card reader, but the time savings are substantial, especially if you are asking people to back up material at the end of a day’s shoot. The faster it can be done, the more likely it is that it will be done straight away rather than put off until later. It’s also a lot more than just a card reader; the eSATA ports make it far more useful for connecting drives or even a raid array to a laptop. Overall I think it is well worth the investment for the time savings alone. 8/10 (it would have been 9/10 if it didn’t require the power adapter). Great product.

 

I approached Sonnet and requested a loan QIO for this review, which Sonnet provided. I was not paid to write this and the views expressed are entirely my own. Speed tests were conducted using my own SxS (blue) cards with the QIO attached to a 1.1 first generation Mac Pro with an internal 4 drive raid array, or with a 15″ Mac Book Pro.

Why rendering from 8 bit to 8 bit can be a bad thing to do.

When you transcode from 8 bit to 8 bit you will almost always have some issues with banding if there are any changes in the gamma or gain within the image. As you are starting with 8 bits, or 240 shades of grey (levels 16 to 255, assuming recording to 109%), and encoding to 240 shades, the smallest step you can ever have is 1/240th. If whatever you are encoding or rendering determines that, let’s say, level 128 should now be level 128.5, this can’t be done; we can only record whole levels, so it’s rounded up or down to the closest whole level. This rounding leads to a reduction in the number of shades recorded overall and can lead to banding.
DISCLAIMER: The numbers are for example only and may not be entirely correct or accurate, I’m just trying to demonstrate the principle.
Consider these original levels, a nice smooth graduation:

128,    129,   130,   131,   132,   133.

Imagine you are doing some grading and your plugin has calculated that these are the new desired values:

128.5, 129, 129.4, 131.5, 132, 133.5
But we can’t record half levels, only whole ones, so for 8 bit these get rounded to the nearest whole level:

129,   129,   129,   132,   132,   134

You can see how easily banding will occur: our smooth gradation now has some marked steps.

If you render to 10 bit instead, you retain more in-between steps. When level 128 is determined to be 128.5 by the plugin, this can now actually be encoded as the closest 10 bit equivalent, because for every 1 step in 8 bit there are roughly 3.9 steps in 10 bit. So, approximately translating to 10 bit, level 128 would be 499 and 128.5 would be 501:
128.5 = 501

129 = 503

129.4 = 505

131.5 = 513

132 = 515

133.5 = 521

So you can see that we now retain in-between steps which are not present when we render to 8 bit, so our gradation remains much smoother.
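If you want to see this effect for yourself, here is a small Python sketch of the idea. The 0.96 gain and the 3.9 steps-per-level figure are purely illustrative, in keeping with the disclaimer above:

```python
# Toy demonstration of the rounding loss described above. A smooth 8 bit
# ramp is given a small gain change (x0.96, purely illustrative) and then
# rounded back to whole levels, first at 8 bit then at 10 bit precision
# (using the rough 3.9 10-bit-steps-per-8-bit-step figure from the text).
levels = list(range(16, 236))            # legal range 8 bit ramp, 220 levels
graded = [v * 0.96 for v in levels]      # hypothetical grade / gamma tweak

distinct_8bit = len(set(round(v) for v in graded))
distinct_10bit = len(set(round(v * 3.9) for v in graded))

print(len(levels), "input levels")                # 220
print(distinct_8bit, "survive 8 bit rounding")    # 212 - merged levels mean banding
print(distinct_10bit, "survive 10 bit rounding")  # 220 - the in-between steps remain
```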

Whites, Super Whites and other Bits and bobs.

Do you know how your NLE is handling your video? Are your whites white, or whiter than white, or does this sound like a washing powder ad?

In the analog world you shot within the legal range of black to 100% white. It was simple, easy to understand and pretty straightforward. White was white at 100% and that was that. With digital video it all gets a lot more complicated, especially as we move to greater and greater bit depths and the use of extended range recording with fancy gamma curves becomes more common. In addition, computers are used more and more not just for editing but also as the final viewing device for many videos, and this brings additional issues of its own.

First let’s look at some key numbers:

8 bit data gives you 256 possible values, 0 to 255.

10 bit data gives you 1024 possible values, 0 to 1023.

Computers use level 0 to represent black and level 255 (or 1023 in 10 bit) to represent peak white.

But video is quite different and this is where things get messy:

With 8 bit video the first 16 levels are reserved for sync and other data. Zero or black is always level 16 and peak white or 100% white is always level 235, so the traditional legal black to white range is 16 to 235, only 219 levels of data. Now, in order to get a better looking image with more recording range, many cameras take advantage of the levels above 235. Anything above 235 is “super white”, or whiter than white in video terms, more than 100%. Cinegammas and Hypergammas take advantage of this extra range, but it’s not without its issues; there’s no free lunch.

10 bit video normally uses level 64 as black and level 940 as peak white. With SMPTE 10 bit extended range you can go down to level 4 for undershoots and up to level 1019 for overshoots, but the legal range is still 64-940. So black is always level 64 and peak white always level 940. Anything below 64 is super black, or blacker than black, and anything above 940 is brighter than peak white, or super white.

At the moment the big problem with 10 bit extended range (SMPTE 274M 8.12), and also with 8 bit that uses the extra levels above 235, is that some codecs and most software still expect to see the original legal range, so anything recorded beyond that range, particularly below it, can get truncated or clipped. If it is converted to RGB, or you add an RGB filter or layer in your NLE, it will almost certainly get clipped, as the computer will take the 100% video range (16-235) and convert it to the 100% computer RGB range (0-255). So you run the risk of losing your super whites altogether. Encoding to another codec can also lead to clipping. FCP and most NLE’s will display super blacks and super whites, as these fall within the full 8 or 10 bit ranges used by computer graphics, but further encoding can be problematic, as you can’t always be sure whether the conversion will use the full recorded range or just the black to white range. Baselight, for example, will only unpack the legal range from a codec, so you need to bring the material into legal range before going into Baselight. So as we can see, it’s important to be sure that your workflow is not truncating or clipping your recorded range back to the nominal legal or 100% range.
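To make the clipping risk concrete, here is an illustrative Python sketch of a naive legal-to-full range conversion. No real NLE is this simple; it just shows where the super whites go:

```python
# Illustrative sketch (not any particular NLE's actual code) of what a naive
# legal-range to RGB conversion does to super whites: the 100% video range
# (16-235) is stretched to computer RGB (0-255) and anything above 235 clips.
def legal_to_full(v):
    """Map an 8 bit video level so that 16 -> 0 and 235 -> 255, clipping the rest."""
    full = round((v - 16) * 255 / 219)
    return max(0, min(255, full))

print(legal_to_full(16))    # black        -> 0
print(legal_to_full(235))   # 100% white   -> 255
print(legal_to_full(250))   # super white  -> also 255, the extra detail is gone
```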

On the other hand, if you are doing stuff for the web or computer display, where the full 0 to 255 (or 1023) range is used, then you often need to use the illegal video levels above 100% white to get whites looking white and not bright grey! A video legal white at 235 just does not look white on a computer screen, where whites are normally displayed using level 255. There are so many different standards across different platforms that it’s a complete nightmare. Arri, for example, won’t allow you to record extended range on the Alexa using ProRes because of these issues, while the Alexa’s HDSDI output will output extended range.

This is also an issue when using computer monitors for monitoring in the edit suite. When you look at this web page, or any computer graphics, white is set at level 255 (or 1023). But that would be a super white, or illegal white, for video. As a result, in-range or legal range videos often look dull when viewed on a computer monitor, as the whites are less bright than the computer’s own whites. The temptation therefore is to grade the video to make the whites look as bright as the computer’s whites, which leads to illegal levels, clipping, or simply an image that does not look right on a TV or video monitor. You really need to be very careful to ensure that if you shoot using extended range your workflow keeps that extended range intact, and then you need to remember to legalise your video back to within legal range if it’s going to be broadcast.

Comparison clips to download.

Here is a set of 3 clips in their native formats taken with a Sony VG10, Canon t2i (550D) and Sony F3.

CLICK HERE for the zip file containing the native files (Canon .mov, Sony .mts and Sony BPAV folder) or click here to watch on Vimeo. If you are going to watch on Vimeo I would strongly urge you to take a look at the full size frame grabs below before coming to any conclusions.

I used the same Nikon 50mm f1.8 lens on all 3 cameras (MTF F3 adapter, cheap E-Mount adapter and cheap Nikon to Canon adapter). I had the lens at f8-f11 for all three cameras and used the shutter to control exposure, or in the case of the F3 the ND filters. All were set to preset white, 5600K; the sky was visually white with flat hazy cloud. The VG10 was at factory default, the t2i was default except for Highlight Tone Priority which was ON, and the F3 was default with the exception of Cinegamma 1 being chosen.

PLEASE PLEASE PLEASE Don’t link directly to the download file, instead link to this page. Feel free to host the clips, just remember they are my copyright so include a link back here or a note in any text of where they originated.

PLEASE make a donation of whatever amount you feel appropriate if you find these clips helpful, to help cover my hosting fees. It’s a 340MB download. As of May 9th, 122 people have downloaded the clips, that’s 41GB of web bandwidth, yet not one person has made a donation. Come on guys and gals, if you want me to make clips available to download, help me out.

Below are some frame grabs from the 3 cameras. If you click on the pictures a couple of times they will open full size in a new window. All 3 cameras do a pretty decent job overall. However, both the VG10 and t2i have issues with aliasing on the brickwork of the far building. I know the idea with these cameras is to use a shallow DoF, so often the background will be soft, but not everything will be like that all the time. There are also more compression artefacts from both the t2i and in particular the VG10 (the barbed wire at the beginning of the pan looks pretty nasty). At least with the VG10 you can take the HDMI output and record that externally. Clearly the best pictures are from the F3, but then it is considerably more expensive than the others. It is interesting to note the distinctly yellow colorimetry of the F3. I do have matrix settings to reduce this, but I did not use them during this assessment.
Also note how much wider the FoV is with the Canon t2i, and even more so the F3. Clearly these cameras have larger sensors than the VG10, the largest being the F3’s Super 35 sized sensor. This was another surprise; I had assumed the Canon and F3 sensors to be much closer in size than this. Remember that all three used the same lens and the shots were taken from exactly the same place.
You can also view the clips on Vimeo http://vimeo.com/23315260
NEX-VG10
t2i -550D
PMW-F3

 

Focal length conversion factor should apply to the camera not the lens.

I was asked in some post comments whether a 50mm PL mount lens would give a wider picture than a 50mm DSLR lens. This confusion comes about, I believe, because of all the talk about focal length conversion factors. I don’t think this concept is well understood by some people, as the implication is that somehow the lens changes when it’s used on different cameras, when in fact it’s the camera that is different, not the lens.

It is important to understand that a 50mm lens will always be a 50mm lens. That is its focal length. It is determined by the shape of the glass elements, and no matter what camera you put it on it will still be a 50mm lens. A 50mm DSLR lens has the same focal length as a 50mm PL mount lens and as a 50mm 2/3″ broadcast lens. In addition, the lens focuses a set distance behind the rear element; again, the distance between the rear element and where it focuses does not change when the lens is put on different cameras, so an adapter or spacer must be used to keep the designed distance between the lens and sensor. This distance is called the “flange back”.

The key thing is that it’s not the lens or its focal length that changes when you swap between different cameras. It is the size of the sensor that changes.

Imagine a projector shining an image on a screen so that the picture fills the screen. The projector is our “lens”. Without changing anything on the projector, what happens if you move the screen closer to or further away from the projector? The image projected on the screen goes in and out of focus, and that’s no good, so we must keep the projector to screen distance constant, just as the lens to sensor distance (flange back) for any given lens remains constant.

What happens if we make the screen smaller? Well, the image remains the same size but we see less of it, as some of the image falls off the edge of the screen. If our projected picture was that of a wide landscape, then on the reduced screen what would now be seen would appear less wide, as we are only seeing the middle part of the picture. The width of the view has decreased; in other words the FIELD OF VIEW HAS NARROWED. The focal length has not changed.

This is what happens inside cameras with different sized sensors: the lens isn’t changing, just how much of the lens’s projected image falls on or off the sensor.

So the multiplication factor should more accurately be considered as applying to the camera, not the lens, and it changes the field of view, not the focal length.

So whether it is a PL mount lens, a Nikon or Canon DSLR lens or a Fujinon video lens, if it’s a 50mm lens then it’s a 50mm lens, and the focal length is the same for all. However, the field of view (the width and height of the viewed image) will depend on the size of the sensor. So a 50mm PL lens will give the same field of view as a 50mm DSLR lens (no matter what camera the lens was designed for) when used on the same video camera.
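If you want to put numbers on this, the horizontal angle of view works out as 2 × atan(sensor width / (2 × focal length)). Here is a quick Python sketch using nominal sensor widths; the exact figures vary a little between cameras:

```python
import math

# The same 50mm focal length on different sized sensors: only the field of
# view changes. Sensor widths below are nominal/approximate, for illustration.
def hfov_degrees(focal_mm, sensor_width_mm):
    # horizontal angle of view = 2 * atan(width / (2 * focal length))
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

for name, width_mm in [("Full frame 35mm", 36.0), ("Super 35mm", 24.9),
                       ("APS-C", 23.6), ('2/3 inch', 9.6)]:
    print(f"{name:>16}: {hfov_degrees(50, width_mm):5.1f} degrees")
```

The lens stays 50mm in every row; only the width of the view narrows as the sensor gets smaller.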

The only other thing to consider is that lenses are designed to work with certain sizes of sensor. A lens designed for a full frame 35mm sensor will completely cover that size of sensor, as well as any sensor smaller than that. On the other hand a 2/3″ broadcast lens will only cover a 2/3″ sensor, so if you try to use it on a larger sensor the image will not fill the frame.
The sensors in the Sony F3 and FS100 are “Super 35mm”. That is about the same size as APS-C. So lenses designed for full frame 35mm can be used, as well as lenses designed for 35mm cine film (35mm PL) and lenses designed for APS-C DSLR’s such as the Nikon DX series and Canon EF-S.

See also http://www.abelcine.com/fov/
