
Austin, Texas: 3 days of workshops at Omega Broadcast.

I’m running 3 days of intensive workshops at Omega Broadcast in Austin, Texas between December 7th and 10th. There are workshops for complete beginners through to experienced shooters, covering all kinds of topics from getting into professional video production through to advanced shooting techniques such as raw and log.

December 7th. From Hobbyist to Pro Shooter: learn what it takes to turn your hobby into a profession.

You love shooting video for fun and now you are thinking of turning that into a business. What do you need to do, and how do you start? Learn the techniques that help turn an amateur into a pro, how to shoot and prepare a showreel, and how to pitch for work.

10am – 10.15: Introductions and course outline.

10.15 – 10.30: What makes a Pro a Pro? A brief discussion on what it is that makes you a professional and what you should consider before turning your hobby into a business.

10:30 – 10:45: Professional Approach: How to deal with the needs of a customer.

Coffee

11.00 – 11.30: Planning and preparation. Projects run smoother when properly planned. Simple guidelines for production planning.

11.30 – 12.30: Equipment choices: Buy or rent? What to buy. A look at the different types of cameras available today, what’s best for your business.

Lunch

13.30 – 14.30: Post Production: Pros and cons of doing it all yourself, and what software or equipment to get.

14.30 – 15.00: Copyright and Licences. What are your rights and what do you own? Buying and licensing music and stock footage for use in commercial productions.

Coffee

15.15 – 16.00: Showreels: How to put together a great showreel.

16.00 – 16.30: Selling yourself: The hard part, how to pitch for work and how to get your business off the ground.

16.30 – 17.30: Budgets and Rates: How to calculate what to charge and what profit margins to expect. Other ways to make money in between projects.

Monday, December 9th

Advanced Shooting Techniques for the Modern Filmmaker.

Discover how to spice up your video productions using clever but surprisingly easy shooting methods including time-lapse, slow motion, green screen and motion control. In the past these techniques were expensive and difficult. Today they are within almost everyone’s grasp. In this workshop you will learn how to make the most of these exciting creative tools. We will spend time in the classroom and studio learning the principles behind these methods. In the evening we will put it all into practice with a shoot where we combine time-lapse, motion control and green screen to produce a clever composite scene.

Who is this workshop for? Anyone! You don’t need any previous filmmaking experience to learn a great deal during this workshop. However, some experience of basic video shooting or still photography is beneficial. We will provide a basic motion control rig and a selection of cameras, but if you have a DSLR or time-lapse capable camcorder you might want to bring it along.

This is going to be a very busy day with lots of exciting and interesting things to learn about. I hope it will be a highly enjoyable day, and the final shot we walk away with should amaze your friends while still being something you can do yourself with only basic tools.

10.00 – 10.30: Introductions and outline of the day ahead.

10.30 – 10.45: Understand how and when to use special effects shooting modes and when it might be better to do it in post.

10.45 – 11.15: How to set up a camera for a special effects shot. Picture profile and camera settings considerations.

11.15 – 11.30: Coffee.

11.30 – 12.45: Green screen – how to shoot and light perfect green screen, including shooting part of our final special effects composition.

12.45 – 13.30: Lunch.

13.30 – 14.45: Time-lapse – how to speed up time, different techniques with video cameras and DSLRs.

14.45 – 15.00: Coffee.

15.00 – 16.00: Slow motion – how to shoot slow mo with a high speed camera. What lights can you use and how many will you need?

16.00 – 17.00: Motion Control – A basic introduction to motion control and how you can combine it with time-lapse and green-screen.

17.00 – 19.00: Break for you to have dinner.

19.00 – 20.30: Putting it all together – combining time-lapse, green-screen and motion control into a very clever special effects shot.

Tuesday, December 10th

Modern Digital Cinematography Techniques.

This workshop is for videographers, digital imaging technicians and cinematographers who are interested in learning more about the latest camera technologies. Learn about shooting using log gamma, raw and 4K. What are the differences between conventional gammas, log and raw, and how do they affect the entire production? Discover how to work with LUTs (Look Up Tables) and “Looks”. Find out how to correctly use the ACES (Academy Color Encoding System) workflow to gain consistency between different cameras and standardize your workflow. Learn how to safely manage the large amounts of data that can be generated by a modern digital cinema camera and then how to grade the footage using DaVinci Resolve.

Who is this workshop for? Intermediate to advanced content creators!

10.00 – 10.30: Introductions and course outline.

10.30 – 11.00: What is gamma, why do we use it and what are its limitations? Standard gamma curves and the knee.

11.00 – 11.15: What’s the difference between latitude and dynamic range?

11.15 – 11.30: Coffee.

11.30 – 12.00: Advanced gamma curves, what do they do and why should I choose them?

12.00 – 12.30: Log gamma: what does it add, the pros and cons, and when and how to use it. Correct log exposure.

12.30 – 13.00: Lunch.

13.00 – 13.30: Display referenced and scene referenced, what does this mean and why is it important in modern workflows?

13.30 – 13.45: Raw: what is raw and what are its benefits?

13.45 – 14.00: How to expose when shooting raw. Understanding EI gain and latitude control.

14.00 – 14.30: Look Up Tables. 1D and 3D LUTs, how to use them and how to create them.

14.30 – 14.45: ACES, an introduction to the principles of the Academy ACES workflow.

14.45 – 15.00: Coffee.

15.00 – 15.45: DaVinci Resolve, an introduction to grading with DaVinci Resolve.

15.45 – 16.30: Hands on session putting it all together. Your chance to try different gammas or raw.

16.30 – 17.00: Q&A. Anything you didn’t quite get or want to know more about? Now’s your chance to ask.


Important note: the above is an outline of how the day should run. However, depending on the group, the timing and running order may change to fit the skills and abilities of the attendees. Nothing will be missed; every effort will be made to tailor the day to the attendees’ needs rather than watching the clock and sticking to a rigid schedule. It is not unusual for the running order to change a little as students explore some of the topics in more depth and others a little less.

Please contact Omega to book a place. I look forward to working with you!


Advanced Media Dubai FS700 Workshop, 22nd–23rd November.

Just a reminder that I’m running a workshop in Dubai on the FS700 on the 22nd/23rd of November. The outline agenda for each day is as follows:

FS700 4K Raw Workshop:

• Introduction to 4K: what is 4K and what benefits can it bring, even for HD production.

• The difference between conventional shooting and using raw.

• Introduction to the NEX-FS700RH.

• FS700 4K raw setup.

• Raw workflow considerations and overview.

• Managing and monitoring high dynamic range images.

• Using Picture Profiles and Look Up Tables (LUTs) for monitoring.

• Practical: Correct S-Log2 exposure.

• Practical: Pushing the camera to its limits, discovering how far you can push your exposure.

• Practical: Shooting in 4K, re-framing in post-production.

• The future of TV, Internet and web delivery and the importance of 4K acquisition.

• Introduction to the PXW-Z100.

• Q&A.

For more information please contact Advanced Media in Dubai: http://www.amt.tv/event/FS700-Workshop/

DaVinci Resolve 10 released. Lite includes UHD resolution.

It’s been in beta for a while, but now the release version of Blackmagic Design’s colour grading tool is available to download in both the full paid and the free Lite versions. The Lite version now allows you to export at up to UHD resolution, so even those shooting 4K will be able to deliver at better than HD resolution. One of the best new features in Resolve 10 is a more refined and comprehensive set of editing tools, including a title generator. For full details take a look at the Blackmagic Design web site.

It’s all in the grade!

So I spent much of last week shooting a short film and commercial (more about the shoot in a separate article). It was shot in raw using my Sony F5, but could have been shot with a wide range of cameras. The “look” for this production is very specific: much of it is set late in the day or in the evening, requiring a gentle, romantic look.

In the past much of this look would have been created in camera. Shooting with a soft filter for the romantic look, shifting the white balance with warming cards or a dialled in white balance for a warm golden hour evening look. Perhaps a custom picture profile or scene file to alter the look of the image coming from the camera. These methods are still very valid, but thanks to better recording codecs and lower cost grading and post production tools, these days it’s often easier to create the look in post production.

When you look around on YouTube or Vimeo, most of the showreels and demo reels from people like me will almost always have been graded. Grading is a huge part of the modern finishing process and it makes a big difference to the final look of a production. So don’t automatically assume everything you see online looked like that when it was shot. It probably didn’t, and a very big part of the look tends to be created in post these days.

One further way to work is to go halfway to your finished look in camera and then finish the look in post. For some productions this is a valid approach, but it comes with some risks: some things, once burnt into the recording, are hard to change in post. For example, any in-camera sharpening is difficult to remove, as are crushed blacks or a skewed or offset white balance.

Also understand that there is a big difference between grading with the color correction tools in an edit suite and using a dedicated grading package. For many, many years I graded using my editing software, simply because that was what I had. Plug-ins such as Magic Bullet Looks are great and offer a quick and effective way to get a range of looks, but while you can do a lot with a typical edit color corrector, it pales into insignificance compared to what can be done with a dedicated grading tool, for example not only creating a look but then adjusting individual elements of the image.

When it comes to grading tools, DaVinci Resolve is probably the one most people have heard of. Resolve Lite is free, yet still incredibly capable (provided you have a computer that will run it). There are lots of other options too, like Adobe SpeedGrade, but the key thing is that if you change your workflow to include lots of grading, then you need to change the way you shoot too. If you have never used a proper grading tool then I urge you to learn how to use one. As processing power improves and these tools become more and more powerful, they will play an ever greater role in video production.

So how should you shoot for a production that will be graded? I’m sure you will have come across the term “shoot flat”, and this is often said to be the way you should shoot when you’re going to grade. Well, yes and no. It depends on the camera you are using, the codec, noise levels and many other factors. If you are the DP, cinematographer or DIT, then it’s your job to know how footage from your camera will behave in post production so that you can provide the best possible blank canvas for the colourist.

What is shooting flat exactly? Let’s say your monitor is a typical LCD monitor. It will be able to show 6 or 7 stops of dynamic range. Black at stop 0 will appear to be black and whites at stop 7 will appear bright white. If your camera has a 7 stop range then the blacks and whites from the camera will be mapped 1:1 with the monitor and the picture will have normal contrast. But what happens when you have a camera that can capture double that range, say 12 to 14 stops? The bright whites captured by the camera will be significantly brighter than before. If you then take that image and try to show it on the same LCD monitor you have a problem, because the LCD cannot go any brighter than before, so the much brighter whites from the high dynamic range shot are shown at the same brightness as the original low dynamic range shot. Not only that, but the larger tonal range is now squashed into the monitor’s limited range. This reduces the contrast in the viewed image and as a result it looks flat.
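
The squeeze can be made concrete with a tiny Python sketch. This is purely illustrative (a simple linear mapping, not any real camera or display transfer function): it just shows that when twice as many captured stops have to share the same display range, every tonal step gets half the contrast.

```python
def display_position(stop, camera_stops, display_stops=7):
    """Linearly squeeze `camera_stops` of captured range into a display
    that can only show `display_stops`; returns the display stop that a
    given captured stop lands on. Illustrative only."""
    return stop * display_stops / camera_stops

# 7 stop camera on a 7 stop display: a 1:1 mapping, normal contrast.
print(display_position(7, 7))    # 7.0 - camera white hits display white

# 14 stop camera on the same display: the far brighter whites can go
# no brighter, and the mid tones land much lower, so the image looks flat.
print(display_position(14, 14))  # 7.0 - still just display white
print(display_position(7, 14))   # 3.5 - each stop now gets half the range
```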

That’s a real “shoot flat” image (a wide dynamic range shown on a typical dynamic range monitor), but you have to be careful, because you can also create a flat looking image by raising the camera’s black level or black gamma, or by reducing the white level. Doing this reduces the contrast in the shadows and mid tones and will make the pictures look low contrast and flat. But raising the black level or black gamma, or reducing the white point, rarely increases the dynamic range of a camera; most cameras’ dynamic range is limited by the way they handle highlights and over exposure, not by the shadows or black level. So beware: not all flat looking images bring real post production advantages. I’ve seen many examples of special “flat” picture profiles or scene files that don’t actually add anything to the captured image. It’s all about dynamic range, not contrast range. See this article for more in-depth info on shooting flat.

If you’re shooting for grading, shooting flat with a camera with a genuinely large dynamic range is often beneficial, as you provide the colourist with a broader dynamic range image that he/she/you can then manipulate so that it looks good on typically small dynamic range TVs and monitors. But excessively raising the black level or black gamma rarely helps the colourist, as this just introduces an area that will need to be corrected to restore good contrast rather than adding anything new or useful to the image.

You also need to consider that it’s all very well shooting with a camera that can capture a massive dynamic range, but as there is no way to ever show that full range, compromises must be made in the grade so that the picture looks nice. An example of this would be a very bright sky. In order to show the clouds in the sky, the rest of the scene may need to be darkened, as the sky is always brighter than everything else in the real world. This might mean the mid tones have to be rather dark in order to preserve the sky. The other option would be to blow the sky out in the grade to get a brighter mid range. Either way, we don’t have a way of showing the 14 stop range available from cameras like the F5/F55 with current display technologies, so a compromise has to be made in post, and this should be in the back of your mind when shooting scenes with large dynamic ranges.

With a low dynamic range camera, you the camera operator would choose whether to let the highlights over expose to preserve the mid range, or whether to protect the highlights and put up with a darker mid range. With today’s high dynamic range cameras that decision is largely moved to post production, but you should still be looking at your mid tones and, if needed, adding a bit of extra illumination so that the mids are not fighting the highlights.

In addition to shooting flat there is a lot of talk about using log gamma curves: S-Log, S-Log2, LogC etc. Again, IF the camera and recording codec are optimised for log then this can be an extremely good approach. Remember that if you choose to use a log gamma curve then you will also need to adjust the way you expose, to place skin tones etc in the correct part of the log curve. It’s no longer about exposing for what looks good on the monitor or in the viewfinder, but about exposing the appropriate shades in the correct part of the log curve. I’ve written many articles on this so I’m not going to go into it here, other than to say log is not a magic fix for great results, and log needs a 10 bit codec if you’re going to use it properly. See these articles on log: S-Log and 8 bit, or Correct Exposure with Log. Using log does allow you to capture the camera’s full range, it will give you a flat looking image, and when used correctly it will give the colourist a large blank canvas to play with. When using log it is vital that you use a proper grading tool that will apply log based corrections to your footage, as adding linear corrections to log footage in a typical edit application will not give the best results.
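
As a rough sketch of what a log curve does, here is a generic logarithmic encoding in Python. To be clear, this is not S-Log, S-Log2 or LogC (those use the manufacturers’ published, more complex formulas); it simply shows how a log curve records the mids and shadows well up the recording range, leaving room to grade:

```python
import math

def log_encode(x, black=0.01, white=1.0):
    """Generic log encoding: 0.0 at `black` scene level, 1.0 at `white`.
    Illustrative only - not any manufacturer's actual curve."""
    if x <= black:
        return 0.0
    return math.log(x / black) / math.log(white / black)

# Middle grey (18% scene reflectance) is recorded well up the curve,
# leaving plenty of code values for the shadows below it.
print(round(log_encode(0.18), 3))  # 0.628
```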

So what if your camera doesn’t have log? What can you do to help improve the way the image looks after post production? First of all, get your exposure right. Don’t over expose: anything that clips cannot be recovered in post. Something that’s a little too dark can easily be brightened a bit, but if it’s clipped it’s gone for good. So watch those highlights. Don’t under expose either; just expose correctly. If you’re having a problem with a bright sky, don’t be tempted to add a strong graduated filter to the camera to darken the sky. If the colourist tries to adjust the contrast of the image, the grad may become more extreme and objectionable. It’s better to use a reflector or some lights to raise the foreground rather than a graduated filter to lower the highlight.

One thing that can cause grading problems is knee compression. Most video cameras by default use something called the “knee” to compress highlights. This gives the camera the ability to capture a greater dynamic range, but it does so by aggressively compressing the highlights together, and it’s either on or off. If the light changes during the shot and the camera’s knee is set to auto (as most are by default) then the highlight compression will change mid shot, and this can be a nightmare to grade. So instead of using the camera’s default knee settings, use a scene file or picture profile to set the knee to manual, or use an extended range gamma curve like a Hypergamma or Cinegamma that does not have a knee and instead uses a progressive type of highlight compression.
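
A manual knee can be sketched as a simple piecewise function. The knee point and slope below are illustrative values, not any particular camera’s defaults:

```python
def apply_knee(x, knee_point=0.85, slope=0.15):
    """Sketch of a video knee: linear below the knee point, aggressively
    compressed above it. With auto knee the point and slope shift as the
    light changes, which is what makes such footage hard to grade."""
    if x <= knee_point:
        return x
    return knee_point + (x - knee_point) * slope

print(apply_knee(0.5))             # 0.5 - mid tones pass through untouched
print(round(apply_knee(1.5), 4))   # 0.9475 - a bright highlight squeezed below 1.0
```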

Another thing that can become an issue in the grading suite is image sharpening. In-camera sharpening such as detail correction works by boosting contrast around edges. So if you take an already sharpened image into the grading suite and then boost the contrast in post, the sharpening will become more visible and the pictures may take on more of a video look or become over sharpened. It’s just about impossible to remove image sharpening in post, but adding a bit of sharpening is quite easy. So, if you’re shooting for post, consider either turning off the detail correction circuits altogether or at the very least reducing the levels applied by a decent amount.

Color and white balance: one thing that helps keep things simple in the grade is a consistent image. The last thing you want is the white balance changing halfway through a shot, so as a minimum use a fixed or preset white balance. I find it better to shoot with preset white when shooting for a post heavy workflow, as even if the light changes a little from scene to scene or shot to shot, the RGB gain levels remain the same, so any corrections applied have a similar effect; the colourist then just tweaks the shots for any white balance differences. It’s also normally easier to swing the white balance in post if preset is used, as there won’t be any odd shifts added, as can sometimes happen if you have used a grey/white card to white balance.

Just as the brightness or luma of an image can clip if over exposed, so too can the colour. If you’re shooting colourful scenes, especially shows or events with coloured lights, it will help if you reduce the saturation in the colour matrix by around 20%; this allows you to record stronger colours before they clip. Colour can then be added back in the grade if needed.

Noise and grain: this is very important. The one thing above all others that will limit how far you can push your image in post is noise and grain. There are two sources of this: camera noise and compression noise. Camera noise is dependent on the camera’s gain and chosen gamma curve. Always strive to use as little gain as possible; remember that if the image is just a little dark you can always add gain in post, so don’t go adding unnecessary gain in camera. A proper grading suite will have powerful noise reduction tools, and these normally work best if the original footage is noise free and gain is then added in post, rather than trying to de-noise grainy camera clips.

The other source of noise and grain is compression noise. Generally speaking, the more highly compressed the video stream, the greater the noise will be. Compression noise is often more problematic than camera noise, as in many cases it has a regular pattern or structure which makes it visually more distracting than random camera noise. More often than not, the banding seen across skies or flat surfaces is caused by compression artefacts rather than anything else, and during grading artefacts such as these can become more visible. So try to use as little compression as possible; this may mean using an external recorder, but these can be purchased or hired quite cheaply these days. As always, before a big production, test your workflow. Shoot some sample footage, grade it and see what it looks like. If you have a banding problem, suspect the codec or compression ratio first, not whether it’s 8 bit or 10 bit; in practice it’s not 8 bit that causes banding, but too much or poor quality compression (so even a camera with only an 8 bit output like the FS700 will benefit from recording to a better quality external recorder).

Raw: of course, the best way of providing the colourist (even if that’s yourself) with the best blank canvas is to shoot with a camera that can record the raw sensor data. By shooting raw you do not add any in-camera sharpening or gamma curves that may then need to be removed in post. In addition, raw normally means capturing the camera’s full dynamic range. But that’s not possible for everyone, and it generally involves working with very large amounts of data. If you follow my guidelines above you should at least have material that allows a good range of adjustment and fine tuning in post. This isn’t “fix it in post”; we are not putting right something that is wrong. We are shooting in a way that allows us to make use of the incredible processing power available in a modern computer to produce great looking images. You are making those last adjustments that make a picture look great using a nice big (hopefully calibrated) monitor, in a normally more relaxed environment than on most shoots.

The way videos are produced is changing. Heavy duty grading used to be reserved for high end productions, drama and movies. But now it is commonplace, faster and easier than ever. Of course there are still many applications where there isn’t time for grading, such as TV news, but grading is going to play an ever greater part in more and more productions, so it’s worth learning how to do it properly and how to adjust your shooting setup and style to maximise the quality of the finished production.

Workshops and events in November/December.

Here’s a list of workshops and events that I’m involved in over the coming months:

Bucharest, Romania: O-Video are opening a new centre and will launch it with open days, including seminars on the FS700, F5 and F55, on the 5th/6th/7th of November.
Advanced Media Dubai: FS700 and hopefully the Convergent Design Odyssey 7Q, 22nd/23rd November.

Tallinn, Estonia: FS700 workshop, 28th November, location TBA.

New York: F5/F55 2 hour evening seminar, Sony CineAlta Forum, NYC, 4th December.
Beginner, intermediate and advanced video production skills workshops at Omega Broadcast, Austin, Texas, 7th, 9th and 10th December. A great opportunity to come and improve your video skills whatever your experience level. This was a fantastic event last year and this year should be even better. Fun, educational, inspirational.

Anyone else in the USA looking for training in early December? I’ll be in the US, so if you or your company would like me to put on some training or an event please let me know asap.

Understanding the difference between Display Referenced and Scene Referenced.

This is really useful! Understand this and it will help you understand a lot more about gamma curves, log curves and raw. Even if you don’t shoot raw, understanding this can be very helpful in working out the differences between how we see the world, the way the world really is, and how a video camera sees the world.

So first of all, what is “Display Referenced”? As the name implies, this is all about how an image is displayed. The vast majority of gamma curves are display referenced. Most cameras are set up based on what the pictures look like on a monitor or TV; this is display referenced. It’s all about producing a picture that looks nice when it is displayed. Most cameras and monitors produce pictures that look nice by mimicking the way our own visual system works; that’s why the pictures look good.

Kodak Grey Card Plus.

If you’ve never used a grey card it really is worth getting one as well as a black and white card. One of the most commonly available grey cards is the Kodak 18% grey card. Look at the image of the Kodak Grey Card Plus shown here. You can see a white bar at the top, a grey middle and a black bar at the bottom.

What do you see? If your monitor is correctly calibrated the grey patch should look like it’s half way between white and black. But this “middle” grey is also known as 18% grey because it only actually reflects 18% of the light falling on it. A white card will reflect 90% of the light falling on it. If we assume black is black then you would think that a card reflecting only 18% of the light falling on it would look closer to black than white, but it doesn’t, it looks half way between the two. This is because our own visual system is tuned to shadows and the mid range and tends to ignore highlights and brighter parts of the scenes we are looking at. As a result we perceive shadows and dark objects as brighter than they actually are. Maybe this is because in the past the things that used to want to eat us lurked in the shadows, or simply because faces are more important to us than the sky and clouds.

To compensate for this, right now your monitor is only using 18% of its brightness range to show shades and hues that appear to be halfway between black and white. This is part of the gamma process that makes images on screens look natural, and this is “display referenced”.

When we expose a video camera using a display referenced gamma curve (Rec-709 is display referenced) and a grey card, we would normally set the exposure level of the grey card at around 40-45%. It’s not 50% because a white card only reflects 90% of the light falling on it, and halfway between black and the white card will be about 45%.

We do this for a couple of reasons. In older analog recording and broadcasting systems the signal is noisier closer to black; if we recorded 18% grey at 18% it could be very noisy. Most scenes contain lots of shadows and objects less bright than white, so recording these at a higher level gives a less noisy picture and allows us to use more bandwidth for those all important shadow areas. When the recording is then displayed on a TV or monitor, the levels are adjusted by the monitor’s gamma curve so that mid-tones appear as just that: mid tones.

So that middle grey recorded at 45% gets reduced back down so that the display outputs 18% of its available brightness range, and thus to us humans it appears to be halfway between black and white.
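
You can check those numbers with the Rec-709 camera transfer function (OETF); this piecewise formula is the one published in the ITU-R BT.709 standard:

```python
def rec709_oetf(L):
    """Rec-709 camera transfer function (OETF) from ITU-R BT.709:
    scene light L (0.0-1.0) in, display referred signal (0.0-1.0) out."""
    if L < 0.018:
        return 4.5 * L
    return 1.099 * L ** 0.45 - 0.099

# 18% grey encodes to about 41% - right in the 40-45% range quoted above.
print(round(rec709_oetf(0.18), 3))  # 0.409
```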

So are you still with me? All of the above is “Display Referenced”; it’s all about how it looks.

So what is “Scene Referenced”?

Think about our middle grey card again. It reflects only 18% of the light that falls on it, yet appears to be halfway between black and white. How do we know this? Because someone has used a light meter to measure it. A light meter is a device that captures photons of light and from that produces an electrical signal to drive a meter. What is a video camera? Every pixel in a video camera is a microscopic light meter that turns photons of light into an electrical signal. So a video camera is in effect a very sophisticated light meter.

Ungraded raw shot of a bike in Singapore; this is scene referred as it shows the scene as it actually is.

If we remove the camera’s gamma curve and just record the data coming off the sensor, we are recording a measurement of the true light coming from the scene, just as it is. Sony’s F5, F55 and F65 cameras record the raw sensor data with no gamma curve; this is linear raw data, so it’s a true representation of the actual light levels in the scene. This is “Scene Referred”. It’s not about how the picture looks, but about recording the actual light levels in the scene. So a camera shooting “Scene Referred” will record the light coming off an 18% grey card at 18%.

If we do nothing else to that scene referred image and then show it on a monitor with a conventional gamma curve, that 18% grey level would be taken down by the display’s gamma and as a result look almost totally black (remember, in display referenced we record middle grey at 45% and the gamma correction then brings the monitor output down to the correct brightness, so that we perceive it as halfway between black and white).
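
A quick sanity check of that claim, assuming the display behaves as a pure 2.4 power law (a common simplification for a reference display; real monitor responses vary):

```python
def display_eotf(V, gamma=2.4):
    """Simplified monitor response: light out = signal ** gamma."""
    return V ** gamma

# Feed linear, scene referred 18% grey straight to the display:
print(round(display_eotf(0.18), 3))  # 0.016 - under 2% of full brightness,
                                     # which is why it looks almost black
```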

This means that we cannot simply take a scene referenced shot and show it on a display referenced monitor. To get from scene referenced to display referenced we have to add a gamma curve to the scene referenced footage. When you’re working with linear raw this is normally done on the fly in the editing or grading software, so it’s very rare to actually see the scene referenced footage as it really is. The big advantage of using scene referenced material is that, because we have recorded the scene as it actually is, any grading we do will not have to deal with the distortions that a gamma curve adds. Grading corrections behave in a much more natural and realistic manner. The downside is that, as we don’t have a gamma curve to help shift our recording levels into a more manageable range, we need to use a lot more data to record the scene accurately.

The Academy ACES workflow is based around using scene referenced material rather than display referenced. One of the ideas behind this is that scene referenced cameras from different manufacturers should all look the same. There is no artistic interpretation of the scene via a gamma curve; a scene referenced camera should be “measuring” and recording the scene as it actually is, so it shouldn’t matter who makes it, they should all be recording the same thing. Of course, in reality life is not that simple. Differences in the colour filters, pixel design etc mean that there are differences, but by using scene referred material you eliminate the gamma curve, and as a result a grade you apply to one camera will look very similar when applied to another, making it easier to mix multiple cameras within your workflow.

 

Shooting the Shwe Dagon Pagoda in 4K raw (or how to edit and grade on a laptop).

I was recently invited to talk about 4K at a Sony event in Myanmar. Rather than just standing up and talking I always like to use practical demonstrations of the things I am talking about. So for this particular workshop I decided to go to one of the local landmarks the day before the event, shoot it in 4K, then edit and grade that footage to produce a short 4K film. The object was to prove that Sony’s 4K raw is not something to be afraid of; it’s actually quite manageable to work with, even with just a laptop.

Having just flown in to Myanmar from a workshop in Vietnam, I was travelling light in order to keep my excess baggage charges to a minimum and to avoid too much aggravation at customs. In total I had about 35kg of luggage including enough clothes for two weeks on the road.

My very minimal equipment for this mini project comprised a Sony PMW-F5 camera with AXS-R5 recorder. I used an MTF FZ to Nikon lens adapter and a Sigma 24-70mm f2.8 DSLR lens. The tripod was the excellent Miller Solo with a Compass 15 head. Power came from a couple of Lith 150Wh batteries. A really basic shooting kit, but one that can produce remarkably good results. The weakest part of the kit is the lens: I really could have done with something wider, and the Sigma is prone to flare, so I’m open to suggestions for a better budget zoom lens.

The shoot was surprisingly easy. I’ve heard many stories of Myanmar (Burma) being a closed country, but I had no issues shooting at the temple or around the city of Yangon other than curious onlookers as a large camera like the F5 is a rare sight for the locals.

I shot in 4K raw as I love the post production flexibility that raw footage brings. To keep image noise to a minimum and to keep exposure simple I used an MLUT (LUT 2) and 640 EI. I know that when I shoot at 640 EI and use 100% zebras I can expose nice and bright in the viewfinder and just watch for zebras starting to appear. With the F5 at 640 EI, 100% zebras show up just a little before clipping, so as long as you only have the very tiniest amount of zebra on your brightest highlights your exposure will be fine: nice and bright but not clipped.

During the day I spent a couple of hours at the temple and then another hour at the temple in the evening. In the YouTube video you will also see a time-lapse shot. This was done after the workshop and was not included in the original edit.

Once back at the hotel the first stage was to transfer everything from the AXS card to a hard drive. For my travel shoots I use Seagate 2TB USB 3 drives. These are 3.5″ drives so they require mains power; 2.5″ drives are not really fast enough for 4K raw editing. My hotel bedroom had one power socket on one side of the room and another on the opposite side. So with the AXS-CR1 card reader plugged into one and the hard drive into the other, I had to sit on the floor in the middle of the room with my laptop running on its battery while I transferred the files, about an hour’s worth of clips. This took about 40 minutes.
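
As a quick sanity check on that transfer (assuming the hour of clips came to roughly Sony’s quoted 500GB per hour for 4K raw; the helper function is mine, just for illustration):

```python
# Back-of-envelope transfer rate: ~500 GB of 4K raw copied in 40 minutes.
# 500 GB/hour is the approximate 4K raw data rate at 24/25fps; the exact
# size of this particular card's contents is an assumption.

def transfer_mb_per_s(gigabytes, minutes):
    """Average transfer rate in MB/s (10^6 bytes per second)."""
    return gigabytes * 1e3 / (minutes * 60)

rate = transfer_mb_per_s(500, 40)
print(f"~{rate:.0f} MB/s")  # around 208 MB/s
```

Around 200MB/s is about what a fast 3.5″ drive on USB 3 can sustain, which is why the smaller 2.5″ drives struggle with 4K raw.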

Once the clips were on the hard drive I could begin the edit. I could have used the XDCAM HD files from the camera as proxies for the edit, but I find it just as easy to use the raw files. My laptop is an off-the-shelf 15″ Retina MacBook Pro with 8GB of RAM. I use Adobe Premiere CC with Sony’s raw plugin for the edit. One thing I have found necessary is to re-boot the computer before editing the raw files, as I find Premiere more stable if I do this.

Set playback resolution to 1/4

To edit Sony’s 4K raw I use one of the Sony 4K raw presets that are installed with the Sony raw plugin. The other thing I have to do is drop the resolution of the clip viewer and timeline viewer windows to 1/4. This really isn’t a big deal as 1/4 of 4K is HD, and when using just the laptop’s screen I’m not viewing the small viewer windows at a very high resolution anyway. Editing the 4K raw is smooth and painless. Dissolves and effects can be a little jumpy as you try to pull two streams of 4K off a single hard drive, but for cuts only or a simple edit it’s really not a problem.

Once I’m happy with the picture cut I export an AAF file from Adobe Premiere. I then close Premiere and start DaVinci Resolve. I use the full paid version as I often want to export in 4K. The free Lite version will happily edit and grade Sony’s 4K raw, but you can only export at up to HD resolution.

Initially I set my project settings to HD as this gives smoother playback. I then open the AAF file that I saved from Premiere. Resolve will ask for a location to search for the clips, so just navigate to the parent folder of the directory where your clips are stored and click “search”. After a short wait your Premiere edit will open in a timeline in Resolve. Now you can go to the “Color” room in Resolve to grade your footage. If you’re using a low power system like a laptop you may want to go to the project settings and, under the raw settings, choose “Sony Raw” and set the de-Bayer resolution to half or quarter. This will make playback smoother and faster but sacrifices a little image quality. Don’t worry though, we can force Resolve to do a full resolution de-Bayer when we are ready to export the graded clips.

I’m not going to teach you how to grade here; I’m not a colourist. Fortunately Resolve is pretty straightforward and I can now quickly create a look, save it and apply it to multiple clips, then go back and tweak and refine the grade where needed, perhaps adding secondary corrections here and there. For the Shwe Dagon video there were only a couple of shots where I used secondaries, mainly shots with dark interiors. The overall grade was pretty straightforward.

Once I was happy with the look of the shots I went to the project settings and changed the project resolution back to 4K. I then used the “deliver” room in Resolve to export the clips. To keep life simple I exported the grade as individual clips with the same file names as the original clips, using 4K ProRes HQ, to a new folder on my USB 3 hard drive. I also checked the “force full resolution debayer” box to make sure that the quality of the renders is as good as it can be. Rendering the files from Resolve on my MacBook is not a real time process. I get around 5 frames per second, so a minute of footage takes about 5 or 6 minutes. The Shwe Dagon video is a little over 4 minutes, so rendering out the graded shots took about half an hour.
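
The render-time arithmetic is easy to check (the frame rates are the figures quoted above; the helper function is mine, just for illustration):

```python
# Render time for a non real-time export: 25fps footage
# rendered at roughly 5 frames per second.

def render_minutes(footage_minutes, shoot_fps=25, render_fps=5):
    """Minutes of rendering needed for the given minutes of footage."""
    total_frames = footage_minutes * 60 * shoot_fps
    return total_frames / render_fps / 60

print(render_minutes(1))    # 1 minute of footage -> 5.0 minutes of rendering
print(render_minutes(4.5))  # ~4.5 minutes of footage -> 22.5 minutes
```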

Once the render in Resolve is completed I exit Resolve, re-boot the laptop, and go back to Premiere. Back in the original edit project I simply import the Resolve render files and swap the raw clips in the timeline for the graded clips. I then add any titles or other effects in Premiere before finally exporting the finished piece in the codecs I need using Adobe’s Media Encoder. For YouTube I export the clips as 4K .mp4 files with a bit rate of 50-75Mb/s.

It really is possible to edit and grade Sony’s 4K raw on a laptop, and it’s not particularly painful to do. I wouldn’t want to do a long or complex project this way, but for short, simple projects it’s really not a big deal. If you get a Blackmagic Thunderbolt Mini Monitor box you can use any HDMI equipped TV as an external monitor. Sony’s 4K raw is easy to work with; the biggest headache is simply the size of the files. At 500GB per hour at 24/25fps there’s a lot of data to manage, but that’s no more than uncompressed RGB HD. In the office I have a workstation with a pair of NVIDIA GTX 570 graphics cards, which give me enough video processing power to work with 4K raw at full resolution in real time.
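
To put that 500GB per hour figure in context, here is a rough comparison with uncompressed RGB HD (the 10-bit depth is my assumption for the HD case, and all figures are approximate):

```python
# Rough data-rate comparison: Sony 4K raw vs uncompressed RGB HD.
# Assumes 10-bit RGB for the HD case (30 bits per pixel); figures are
# approximate, using 10^9 bytes per GB.

def gb_per_hour(width, height, bits_per_pixel, fps):
    """Storage for one hour of video, in gigabytes."""
    bytes_per_frame = width * height * bits_per_pixel / 8
    return bytes_per_frame * fps * 3600 / 1e9

hd_rgb = gb_per_hour(1920, 1080, 30, 25)  # uncompressed 10-bit RGB HD

# Sony's quoted ~500 GB/hour for 4K raw at 25fps works out to roughly
# 5.5 MB per frame.
raw_mb_per_frame = 500e9 / 3600 / 25 / 1e6

print(f"uncompressed 10-bit RGB HD: {hd_rgb:.0f} GB/hour")
print(f"4K raw: ~{raw_mb_per_frame:.1f} MB per frame")
```

Uncompressed 10-bit RGB HD comes out around 700GB per hour, so 4K raw at roughly 500GB per hour really is no heavier to handle.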


PMW-F5 and F55 firmware released. 4K shooting in Myanmar, Saigon Film School.

I’m currently sitting in the airport lounge at Bangkok airport, on my way to Taiwan for a training event tomorrow (teaching local camera operators to become trainers). So I thought I’d take a few minutes to catch up on things.

The big news is the release of firmware version 2.0 for Sony’s F5 and F55 cameras. All I can say is wow! A huge number of new features, way too many to list here. Of course the biggie is 240fps super slow-mo 2K raw. We also get XAVC high speed at up to 120fps (eventually this will go up to 180fps). There’s the ability to use XQD media, which is a fair bit cheaper than SxS, and a great new focus tool. The focus assist mode provides you with a “sharpness” bar graph that you can use to check the focus of objects in the center of the field of view. It’s much more precise than peaking and a really great tool to have on a 4K camera. For exposure there is now a very clear waveform monitor display as well as a histogram. If you have the OLED EVF then there is also the addition of false color (although the EVF has to go back to Sony for an update to enable this).

The Audio control button for the side LCD now works and gives you easy, direct access to all the major audio functions. In addition you can now change the EI gain from the side LCD. All 4 HDSDI outs now work together, giving you two clean HDSDI outs plus two with overlays. Furthermore, S-Log2 has been added as a new Look Up Table when shooting in EI mode (don’t forget you also get S-Log2 out of the AUX HDSDI on the R5 anyway, which is one way of having 709 + S-Log together). You can also now get standard definition out of SDI 3/4 and the Test out.

For the DITs out there, there is now a user gamma page where you can roll your own gamma curve, although I have not had time to play with this function yet.

All in all this is a massive update that really transforms these cameras (not that they were bad beforehand). All I need now is the new 2K Optical Low Pass Filter. You can download the update files from here:

PMW-F55 V2 Update.

PMW-F5 V2 Update.

AXS-R5 V2 Update.

Do note that you must do an “All Reset” immediately after the update and this update cannot be rolled back!

One of the many temples at the Shwe Dagon Pagoda, Myanmar. This is a 4K frame grab.

Last week I was in Yangon, Myanmar running some short half day workshops on Sony cameras. We had about 200 people through the workshops over a couple of days. In between I was able to go out and shoot the Shwe Dagon Pagoda in 4K. It’s a beautiful place, covered in gold that sparkles in the sun and studded with diamonds that refract the lights at night into a myriad of colors. I did a quick edit in Myanmar to show at the workshops, but as soon as I get home I’ll finish it off and get it up on to YouTube in 4K. It looks fantastic. A big thank you from me to the team at TMW Enterprises for looking after me so well.

Before that I was in Vietnam running a 3 day workshop for Saigon Film School. What a great bunch. Had a really good time and the 3 short films that the students produced over the course of the workshop all look great. Nice to see such enthusiasm and nice to see many of the techniques I taught put to good use in the films. I hope to go back in the new year to do something even bigger, but only if the faculty staff promise not to get me quite so drunk with “bottoms up” giant glasses of wine at the after school party.

Coming up: I’m preparing some articles on the difference between “Scene Referred” and “Display Referred” shooting and workflows. This is very relevant to anyone shooting raw or looking at implementing an ACES workflow. Also coming is a back to basics tutorial on getting the very best from a lens, whether that’s a built in zoom or a prime lens. These should go online in the next week. After that I have a big TV commercial shoot.

PMW-F5 and PMW-F55 Version 2 firmware update.

Just a brief update from IBC. Firmware version 2.0 will be released at the end of September and will include the high speed raw shooting modes, 2K at up to 240fps. In addition there will be waveform, vectorscope and histogram displays.

The Audio button and File button on the side panel will become active. The audio button adds two pages of audio control functions, including quick switching of each channel between manual and auto as well as manual level control via the large menu knob. The file button gives quick access to load and save a number of “all files” so you can quickly switch between different camera setups.

Further new features include full support for the Fujinon Cabrio lens (the lens will need a firmware update too) with zoom control and rec start/stop, plus support for Sony’s new LA-FZB1 (approx. 5000 Euro) and LA-FZB2 (approx. 9000 Euro) lens adapters for B4 lenses.

This is a very significant update for the cameras and includes a lot of other smaller new features.

When shooting 2K the camera uses the full sensor, but it is read in a different way, one that creates larger “virtual” pixels (my words, not Sony’s). This means that as the sensor is now operating as a 2K sensor, the factory fitted 4K optical low pass filter (OLPF) is no longer optimum for controlling aliasing and moire. Sony will be offering (for sale) a drop-in replacement 2K OLPF. The 2K OLPF will control aliasing and moire at 2K as well as providing a softer look at 4K for those who want it. It is almost essential for the 2K high speed modes and gives a smoother look at 4K that works well for beauty, cosmetic, period drama and similar projects. I could not get a price for the 2K OLPF, but I have been assured that it will be affordable (even for me and my small budget).
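
The “virtual” pixel idea can be pictured as simple 2x2 binning, where groups of four photosites are combined into one larger sample, halving the resolution in each dimension. This is purely my illustration of the concept; Sony’s actual sensor readout scheme is not published:

```python
# Conceptual sketch of 2x2 binning: four neighbouring sensor values are
# averaged into one larger "virtual" pixel, halving width and height.
# An illustration of the idea only, not Sony's actual readout.

def bin_2x2(image):
    """Average each 2x2 block of a list-of-lists image."""
    out = []
    for y in range(0, len(image), 2):
        row = []
        for x in range(0, len(image[0]), 2):
            block = (image[y][x] + image[y][x + 1] +
                     image[y + 1][x] + image[y + 1][x + 1])
            row.append(block / 4)
        out.append(row)
    return out

sensor = [[0, 4, 8, 8],
          [4, 8, 8, 8],
          [1, 1, 2, 2],
          [1, 1, 2, 2]]
print(bin_2x2(sensor))  # -> [[4.0, 8.0], [1.0, 2.0]]
```

Because the effective sampling grid is now coarser, an optical low pass filter tuned for the full 4K grid passes detail fine enough to alias on the 2K grid, which is why a dedicated 2K OLPF helps.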

Version 3 firmware will be released at the end of the year, probably a little after Christmas. Version 3 will add the compressed XAVC high speed modes as well as a new 2K Super 16mm crop mode. The S16 crop mode will allow you to use S16 PL mount lenses, or B4 zoom lenses via the MTF FZ-B4 adapter without the 2x extender and with only a 0.3 stop light loss. Also included will be AES/EBU digital audio and other features not listed here.