It can get very confusing trying to compare different lenses on different-sized sensors, as the size of the sensor determines the field of view. Even if you know the multiplication factor you need to apply to a lens on a camera with one size of sensor to get the equivalent for another, it’s still hard to visualise. Thankfully Andy Shipsides over at AbelCine has come up with a great web page that lets you see how different scenes will look on different cameras at different focal lengths. It’s a really useful page to add to your bookmarks:
http://www.abelcine.com/fov/
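If you’re curious about the trigonometry the comparison rests on, here’s a minimal Python sketch. The sensor widths are rough illustrative figures, not taken from any particular camera’s spec sheet:

```python
import math

def horizontal_fov_deg(sensor_width_mm, focal_length_mm):
    """Horizontal angle of view for a given sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Rough, illustrative sensor widths in mm.
sensors = {"Super 35": 24.9, "2/3 inch": 9.6}

# The same 50mm lens gives a much narrower view on the smaller sensor.
for name, width in sensors.items():
    print(f"50mm on {name}: {horizontal_fov_deg(width, 50):.1f} degrees")

# The "multiplication factor" is just the ratio of sensor widths: to match
# the Super 35 field of view on a 2/3 inch sensor you need a shorter lens.
crop = sensors["Super 35"] / sensors["2/3 inch"]
print(f"50mm on Super 35 is roughly {50 / crop:.1f}mm on a 2/3 inch sensor")
```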
I’m Back…. well a little bit
Hello and welcome back!
Click Here to go to the forum.
As you may be aware, this domain and most of my other web sites have been unavailable for some time. This was caused by the total failure of the server my domains resided on, which resulted in the loss of all my data, including my backups. The hosting company was supposed to have been backing up my domains as part of the hosting package; however, they had moved my sites to a special server for large domains and I was not aware of this. As a result no backups were kept by the hosting company, again something I was unaware of. I even had a mirror of this domain for safety, but that was lost as well.
Anyway.. long story short.. I’ve lost 3 years of hard work. The moral of this sorry tale is: don’t take your web host’s claims of “free backups” for granted. Do your own backups and keep them safe.
Over the next few weeks I will try to salvage what I can. Even this is taking time, as the hosting company has left my domains in such a mess that unpicking them is proving difficult, and I seem to be spending more time on the support line than anything else.
Thanks for your support. I hope to get all the key information that used to be on the site back online some time in the future.
This will mean a new layout to the site with feature pages and documentation, which I hope will make it easier to find the things you are looking for. If there are any particular articles that you would like me to try to find or re-write, please post a comment on my blog and I’ll try to find them!
If you were a forum member, you will have to re-register. Please, please post away in the forum. Let’s get things back up to speed.
When should I use a Cinegamma or Hypergamma?
Cinegammas are designed to be graded. The shape of the curve, with steadily increasing compression from around 65-70% upwards, tends to lead to a flat looking image but maximises the camera’s latitude (although something similar can be achieved with a standard gamma and careful knee setting). The beauty of the cinegammas is that the gentle onset of the highlight compression means that grading will be able to extract a more natural image from the highlights. Note that Cinegamma 2 is broadcast safe and has slightly less latitude than Cinegammas 1, 3 and 4.
Standard gammas will give a more natural looking picture right up to the point where the knee kicks in. From there up, the signal is heavily compressed, so trying to extract subtle textures from highlights in post is difficult. The issue with standard gammas and the knee is that the image is either heavily compressed or not compressed at all; there’s no middle ground.
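To illustrate the difference, here’s a toy Python sketch of the two approaches. These are not Sony’s actual curves — the knee point, knee slope and rolloff shape are invented purely to show the hard transition of a knee against the gradual onset of a Cinegamma-style rolloff:

```python
import math

def standard_with_knee(x, knee_point=0.85, knee_slope=0.15):
    """Linear up to the knee point, then abruptly and heavily compressed."""
    if x <= knee_point:
        return x
    return knee_point + (x - knee_point) * knee_slope

def gentle_rolloff(x, onset=0.65, headroom=0.35):
    """Linear below the onset, then compression that increases smoothly -
    loosely in the spirit of a Cinegamma highlight rolloff."""
    if x <= onset:
        return x
    return onset + headroom * (1 - math.exp(-(x - onset) / headroom))

# Scene levels: 1.0 is nominal white, higher values are over-range highlights.
for x in (0.5, 0.8, 0.9, 1.0, 1.2):
    print(f"scene {x:.2f} -> knee {standard_with_knee(x):.3f}, "
          f"rolloff {gentle_rolloff(x):.3f}")
```

The knee curve is untouched below its knee point and then flattens instantly, while the rolloff starts bending gently much lower down — which is why graded highlights from the gentler curve come out looking more natural.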
In a perfect world you would control your lighting (turning down the sun if necessary ;-o) so that you could use standard gamma 3 (ITU 709 standard HD gamma) with no knee. Everything would be linear and nothing blown out. This would equate to a roughly 7 stop range. This nice linear signal would grade very well and give you a fantastic result. Careful use of graduated filters or studio lighting might still allow you to do this, but the real world is rarely restricted to a 7 stop brightness range. So we must use the knee or Cinegamma to prevent our highlights from looking ugly.
If you are committed to a workflow that will include grading, then Cinegammas are best. If you use them, be very careful with your exposure: you don’t want to overexpose, especially where faces are involved. Getting the exposure just right with cinegammas is harder than with standard gammas. If anything, err on the side of caution and come down half a stop.
If your workflow might not include grading then stick to the standard gammas. They are a little more tolerant of slight overexposure, because skin and foliage won’t get compressed until they get up to the 80% mark (depending on your knee setting). Plus the image looks nicer straight out of the camera, as the camera’s gamma should be a close match to the monitor’s gamma.
Understanding Gamma, Cinegamma, Hypergamma and S-Log
The graph to the left shows an idealised, normal gamma curve for a video production chain. The main thing to observe is that the curve is in fact pretty close to a straight line (actual gamma curves are very gentle, slight curves). This is important because it means that when the filmed scene gets twice as bright, the output shown on the display also appears twice as bright, so the image we see on the display looks natural and normal. This is the type of gamma curve that would often be referred to as a standard gamma, and it is very much what you see is what you get. In reality there are small variations of these standard gamma curves designed to suit different television standards, but those slight variations make only a small difference to the final viewed image. Standard gammas are typically restricted to around a 7 stop exposure range. These days this limited range is not so much down to the latitude of the camera as to the inability of most monitors and TV display systems to accurately reproduce more than a 7 stop range, and the need to ensure that all viewers, whether they have a 20 year old TV or an ultra modern display, get a sensible looking picture. This means that we have a problem. Modern cameras can capture great brightness ranges, helping the video maker or cinematographer capture high contrast scenes, but simply taking a 12 stop scene and showing it on a 7 stop display isn’t going to work. This is where modified gamma curves come into play.
The second graph here shows a modified type of gamma curve, similar to the hypergamma or cinegamma curves found on many professional camcorders. What does the graph tell us? First of all we can see that the range of brightness, or latitude, is greater: the curve extends out towards a range of 10 T stops compared to the 7 stops the standard gamma offers, and each additional stop is a doubling of latitude. This means that a camera set up with this type of gamma curve can capture a far greater contrast range, but it’s not quite as simple as that.
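Since each stop is a doubling, the difference between a 7 and a 10 stop curve is bigger than it sounds — a quick bit of arithmetic:

```python
# Each extra stop of latitude doubles the brightness range captured.
for stops in (7, 10, 12):
    print(f"{stops} stops = {2 ** stops:,}:1 scene contrast range")
# 128:1 vs 1,024:1 - the 10 stop curve captures 8x the contrast range.
```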
Un-natural image response area
Look at the area shaded red on the graph. This is the area where the camera’s capture gamma curve deviates from the standard gamma curve used not just for image capture but also for image display. What this means is that the area of the image shaded in red will not look natural, because where something in that part of the filmed scene gets 100% brighter it might only be displayed as getting 50% brighter, for example. In practice, while you are capturing a greater brightness range, you will also need to grade or correct this range somewhat in post production to make the image look natural. Generally, scenes shot using hypergammas or cinegammas can look a little washed out or flat. Cinegammas and Hypergammas keep the important central exposure range nice and linear, so the region from black up to around 75% is much like a standard gamma curve. Faces, skin, flora and fauna tend to have a natural contrast range; it is only really highlights such as the sky that get compressed, and we don’t tend to notice this much in the end picture. This is because our visual system is very good at discerning fine detail in shadows and mid tones but less accurate in highlights, so we tend not to find this highlight compression objectionable.
S-Log Gamma Curve
Taking things a step further, this even more extreme gamma curve is similar to Sony’s S-Log gamma curve. As you can see it deviates greatly from the standard gamma curve. Now the entire linear output of the sensor is sampled using a logarithmic scale. This allows more of the data to be allocated to the shadows and midtones where the eye is most sensitive. The end result is a huge improvement in the recorded dynamic range (greater than 12 stops), with less data being used for highlights and more being used where it counts. However, the image, when viewed on a standard monitor with no correction, looks very washed out, lacks contrast and generally looks incredibly flat and uninteresting.
Red area indicates where image will not look natural with S-Log without LUT
In fact the uncorrected image is so flat and washed out that it can make judging the optimum exposure difficult, and crews using S-Log will often use traditional light meters to set the exposure rather than a monitor, or rely on zebras and known references such as grey cards. For on-set monitoring with S-Log you need to apply a LUT (Look Up Table) to the camera’s output. A LUT is in effect a reverse gamma curve that cancels out the S-Log curve, so that the image you see on the monitor is closer to a standard gamma image or your desired final pictures. The problem with this is that the monitor is no longer showing the full contrast range being captured and recorded, so accurate exposure assessment can be tricky, as you may want to bias your exposure towards light or dark depending on how you will grade the final production. In addition, because you absolutely must adjust the image quite heavily in post production to get an acceptable and pleasing image, it is vital that the recording method is up to the job. Highly compressed 8 bit codecs are not good enough for S-Log. That’s why S-Log is normally recorded using 10 bit 4:4:4 with very low compression ratios. Any compression artefacts can become exaggerated when the image is pushed and pulled in the grade. You could use 10 bit 4:2:2 at a push, but the chroma subsampling may lead to banding in highly saturated areas; really, Hypergammas and Cinegammas are better suited to 4:2:2, and S-Log is best reserved for 4:4:4.
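To make the washed-out look concrete, here’s a small Python sketch of a log-style encode and the inverse curve a monitoring LUT has to apply. The constants are those commonly quoted for Sony’s original S-Log curve, but treat them here as illustrative rather than authoritative:

```python
import math

# S-Log style encode/decode (constants as commonly quoted for Sony's
# original S-Log curve; treat them as illustrative only).
def slog_encode(t):
    """Scene linear value t (0.0 = black, 1.0 = 90% white) -> code value."""
    return 0.432699 * math.log10(t + 0.037584) + 0.616596 + 0.03

def slog_decode(y):
    """The inverse curve - in effect what a monitoring LUT has to apply."""
    return 10 ** ((y - 0.616596 - 0.03) / 0.432699) - 0.037584

# Mid grey and white land at low, close-together code values, which is
# exactly why the uncorrected picture looks washed out and flat.
for t in (0.0, 0.18, 0.45, 1.0):
    y = slog_encode(t)
    print(f"linear {t:.2f} -> log {y:.3f} -> decoded {slog_decode(y):.2f}")
```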
When is 4k really 4k, Bayer Sensors and resolution.
First let’s clarify a couple of terms. Resolution can be expressed two ways: as pixel resolution, i.e. how many individual pixels there are on the sensor, or as TV lines (TVL/ph), i.e. how many individual lines you can actually see. If you point a camera at a resolution chart, what you’re asking is at what point you can no longer discern one black line from the next. TVL/ph is the resolution normalised for the picture height, so aspect ratio does not confuse the equation. TVL/ph is a measure of the actual resolution of the camera system. With video cameras TVL/ph is the normally quoted term, while pixel resolution or pixel count is often quoted for film replacement cameras. I believe the TVL/ph term to be preferable as it is a true measure of the visible resolution of the camera.
The term 4k started in film with the use of 4k digital intermediate files for post production and compositing. The exposed film is scanned using a single row scanner that is 4,096 pixels wide. Each line of the film is scanned 3 times, once each through a red, green and blue filter, so each line is made up of three 4k pixel scans, roughly 12k samples per line. Then the next line is scanned in the same manner, all the way to the bottom of the frame. For a 35mm 1.33 aspect ratio film frame (4×3) that equates to roughly 4k x 3k. So the end result is that each 35mm film frame is sampled using 3 (RGB) x 4k x 3k, or roughly 36 million samples. That is what 4k originally meant: a 4k x 3k x 3 intermediate file.
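The arithmetic is worth spelling out (a trivial sketch; the exact line count of a real scanner will vary):

```python
# The sampling arithmetic behind a "4k" film scan (line count approximate).
width, lines, channels = 4096, 3072, 3          # 4k scanner, ~3k lines, RGB
print(f"{width * channels:,} samples per line")           # 12,288
print(f"{width * lines * channels:,} samples per frame")  # 37,748,736
# With the rounder 4k x 3k figures: 3 * 4000 * 3000 = 36,000,000 samples.
```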
Putting that into Red One perspective, it has a sensor with 8 million pixels, so the highest possible sample count would be 8 million samples; for the Red Epic, 13.8 million. But it doesn’t stop there, because Red (like the F3) uses a Bayer sensor, where the pixels have to sample the 3 primary colours. As the human eye is most sensitive to resolution in the middle of the colour spectrum, twice as many of these pixels are used for green as for red and blue. So you have an array made up of blocks of 4 pixels: BG above GR.
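A toy Python sketch makes the split obvious — it tiles the 2×2 block described above across an illustrative 4k-wide grid (the grid dimensions are made up, not any particular camera’s) and counts the photosites per channel:

```python
# Tile the 2x2 Bayer block described above (BG over GR) across a grid
# and count photosites per channel. Grid size is illustrative only.
width, height = 4096, 2160
block = [["B", "G"],
         ["G", "R"]]

counts = {"R": 0, "G": 0, "B": 0}
for y in range(height):
    for x in range(width):
        counts[block[y % 2][x % 2]] += 1

total = width * height
for colour, n in counts.items():
    print(f"{colour}: {n:,} photosites ({100 * n // total}% of the sensor)")
# Green gets half the photosites, red and blue a quarter each, so no
# single colour is sampled at anything like the full photosite count.
```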
Now all video cameras (at least all correctly designed ones) include a low pass filter in the optical path, right in front of the sensor. This is there to prevent moire that would be created by the fixed pattern of the pixels or samples. To work correctly and completely eliminate moire and aliasing you have to reduce the resolution of the image falling on the sensor below that of the pixel sample rate. You don’t want fine details that the sensor cannot resolve falling on to the sensor, because the missing picture information will create strange patterns called moire and aliasing.
It is impossible to produce an Optical Low Pass Filter that has an instant cut-off point, and we don’t want any picture detail that cannot be resolved falling on the sensor, so the filter cut-off must start below the sensor resolution. Next we have to consider that a 4k Bayer sensor is in effect a 2k horizontal pixel green sensor combined with a 1k red and 1k blue sensor, so where do you put the low pass cut-off? As information from the four pixels in the Bayer pattern is interpolated left/right/up/down, there is some room to set the low pass cut-off above the 2k pixels of the green channel, but this can lead to problems when shooting objects that contain lots of primary colours. If you set the low pass filter to satisfy the green channel you will get strong aliasing in the R and B channels. If you set it so that there is no aliasing in the R and B channels, the image will be very soft indeed. So camera manufacturers put the low pass cut-off somewhere between the two, leading to trade-offs between resolution and aliasing. This is why with Bayer cameras you often see those little coloured blue and red sparkles around edges in highly saturated parts of the image: it’s aliasing in the R and B channels. This problem is governed by the laws of physics and optics, and there is very little that the camera manufacturers can do about it.
In the real world this means that a 4k Bayer sensor cannot resolve more than about 1.5k to 1.8k TVL/ph without serious aliasing issues. Compare this with a 3 chip design with separate RGB sensors. With three 1920×1080 pixel sensors, even with a sharp cut-off low pass filter to eliminate any aliasing in all the channels, you should still get 1k TVL/ph. That’s one reason why Bayer sensors, despite being around since the 70s and being cheaper to manufacture than 3 chip designs (which have their own issues created by big thick prisms), have struggled to make serious inroads into professional equipment. This is starting to change now as it becomes cheaper to make high quality, high pixel count sensors, allowing you to add ever more pixels to get higher resolution, like the F35 with its (non-Bayer) 14.4 million pixels.
This is a simplified look at what’s going on with these sensors, but it highlights the fact that 4k does not mean 4k; in fact it doesn’t even mean 2k TVL/ph, the laws of physics prevent that. In reality even the very best 4k pixel Bayer sensor should NOT be resolving more than 2.5k TVL/ph. If it is, it will have serious aliasing issues.
After all that, those of you that I have not lost yet are probably thinking: hang on a minute, what about that film scan? Why doesn’t that alias, as there is no low pass filter there? Well, two things are going on. One is that the structure of all those particles used to create a film image is different from frame to frame, which reduces the fixed pattern effects of the sampling; any aliasing is totally different from frame to frame, so it is far less noticeable. The other is that those particles are of a finite size, so the film itself acts as the low pass filter, because its resolution is typically lower than that of the 4k scanner.
JVC GS-TD1 3D camcorder launched at CES.
JVC GS-TD1 3D Camcorder
Everyone is at it! Hot on the heels of the Sony TD10 comes the JVC TD1. With such similar names and numbers this is going to get confusing fast! Anyway this is another dual stream full 1920×1080 3D camcorder with some impressive specifications. This taken from the JVC press release:
The new GS-TD1 uses two camera lenses and two 3.32 megapixel CMOS sensors – one for each lens – to capture three-dimensional images much the same way that human eyes work. JVC’s new high-speed imaging engine simultaneously processes the two Full HD images – left and right images at 1920 x 1080i – within that single chip. The newly developed “LR Independent Format” makes the GS-TD1 the world’s first consumer-oriented camcorder capable of 3D shooting in Full HD. JVC’s new camcorder offers other shooting modes as well, including the widely used “Side-by-Side Format” for AVCHD (3D) and conventional AVCHD (2D) shooting.
Side by side recording is going to be very useful for going direct to consumer TVs or for YouTube uploads, so this is a nice feature indeed. It appears to only have a 5x optical zoom in 3D compared to the Sony’s 10x; like the Sony, it features image stabilisation. It’s certainly an impressive looking unit. The flip out LCD screen once again uses some kind of parallax barrier for 3D viewing without glasses. The consumer 3D market is certainly growing at a rapid rate and I’m really excited about these new cameras. Sony.. JVC.. Anyone want to lend me one for my 3D shoot in Iceland in March???
The GS-TD1 should be available in March for $1995. More details on the JVC web site: http://newsroom.jvc.com/2011/01/jvc-full-hd-3d-consumer-camcorder-is-world’s-first/
My Product of the Year 2010.
Well we are now into 2011, so it’s time to look back at 2010 and some of the products that became available. Last year my award went to the excellent Convergent Design NanoFlash. As with last year there is no real meaning to the award; it’s just an excuse for me to highlight my favourite product from 2010.
So what was new in 2010? There were some significant announcements of new products like the Sony PMW-F3 and the as yet un-named NXCAM, but these won’t be available until 2011. Sony did release the PMW-320, a 1/2″ shoulder mount camcorder to complement the PMW-350. I was at first a little sceptical about this camera, but it does produce a good image, and the price is attractive where you need the looks and ergonomics of a shoulder mount camera but don’t need high end 2/3″ sensors and lenses. So the 320 gets good points for value and ergonomics, but it’s not a stand out product. Later in the year we saw the release of the PMW-500. This was the logical combination of a high end CCD camera with Sony’s solid state SxS recording system. The PMW-500 is a fantastic camcorder that will be excellent for news and documentary production. I’m sure it will do very well indeed, and users will appreciate the light weight and low power consumption. However, again, for me it isn’t a stand out product; it’s very nice, but you have to pay a significant premium for those CCDs and 50Mb/s recording, and really it is a completely logical extension of the Sony XDCAM product family.
Jumping out of the Sony camp there is Panasonic’s new AF100/AF101 with its Four Thirds sized sensor. Canon and their video enabled DSLRs showed what could be achieved with a big sensor; however the DSLRs were, first and foremost, high resolution stills cameras with 12 megapixel (or more) sensors. The video was an afterthought and suffered from various artefacts as a result, but they really had a huge impact on the whole industry, forcing the big guns of the video world to seriously re-think. Not to be left behind, Panasonic and Sony had to jump on the big sensor bandwagon. The first to market was the Sony NEX-VG10, which is basically a stills camera pretending to be a video camera. It’s not bad and can produce a good image, but it’s not really a professional product. The next to market was the Panasonic AF100. This is a serious attempt at producing a low cost, big sensor video camera. The sensor is Four Thirds sized, so it’s not as big as would be found in a 35mm film camera, but it does allow for the use of a very wide range of DSLR lenses, and the depth of field is pleasing when you use a fast lens. Sadly Panasonic chose to use AVCHD for the codec, so for best results you really want to record using an external high quality recorder. This camera would have been sooo much better if it used AVC-Intra. Despite the codec (and its looks) the AF100 was certainly a stand out product and gets added to the shortlist for my award.
On the camcorder front there was of course the Canon XF305. This is a very good camcorder, of that there is no doubt. I’m still a little sceptical of the sensor performance; the images look a little noisy to me. However it has certainly raised the bar when it comes to 1/3″ sensor performance. The incorporation of a 50 Mb/s 4:2:2 codec into a compact camcorder is something that Sony EX users have been clamouring for ever since the launch of the EX1. In addition the extra zoom range from the 20x lens is nice to have. The Canon XF305 certainly stands out from the crowd with its excellent 50Mb/s codec, so it’s definitely on my shortlist.
One product that I really like is the Blackmagic HDLink 3D. This clever little box allows you to combine the output of any pair of HDSDI equipped cameras on a 3D rig and gives a huge range of outputs compatible with most off the shelf 3D consumer TVs and PC monitors. This one product has made 3D monitoring so much cheaper and easier than ever before. What’s more, it’s remarkably low cost at around US$499. So this too deserves to be shortlisted, but it’s overshadowed by another computer adapter that’s slowly gathering quite a following:
The Matrox MXO2 is a range of input and output adapters for Mac computers. These boxes, depending on the exact model, give you HDSDI, HDMI and component inputs and outputs. They will work with a MacBook Pro laptop, connecting via the ExpressCard slot, or with Mac Pro workstations. There’s hardware up and down scaling, a range of encoding accelerators and 3D monitoring tools. They have so many applications, from providing HDSDI or HDMI monitoring for Avid or FCP to a way to record 10 bit HD on location via a laptop. They support XDCAM, RED, DVCPRO HD, ProRes and DNxHD workflows. An MXO2 could easily become the centrepiece of many a production facility, OB truck or one man band.
For its flexibility and cost effectiveness the Matrox MXO2 gets my award for product of the year 2010. It’s one of those boxes you will find useful for so many things that it’s impossible to list them all, and the best bit is that it’s highly affordable.
MTF Services to produce Nikon adapter for F3
Well, no surprises here to be honest, but Mike Tapa of MTF has already finalised the design of an adapter that will allow users of Sony’s still-to-be-released PMW-F3 to use low cost (compared to PL) Nikon DSLR lenses. This opens up a huge range of lens options, and I’m quite sure that with good high end lenses the results will be very good. It’s certainly the way I will be going.
http://www.lensadaptor.com/
Why do my pictures go soft when I pan? Camera Detail Correction in depth.
This article is my Christmas present to my readers. When you’re trying to set up a camera or brew up a picture profile, it really helps if you understand the ramifications of each of the settings. I hope this helps explain how detail correction works and how it affects your image.
I am often asked to explain why someone’s images go soft when they pan the camera or when there is a lot of movement in the scene. This can be down to many things, including poor compression or too low a bit rate for the recording, but the two main issues are shutter speed (which is tied in to your frame rate) and detail correction. I’ll cover frame rates and shutter speeds in the near future, but today I’m going to look at detail correction.
First of all, what is detail correction for? Originally it was used to compensate for the low resolution of domestic cathode ray tube TVs and the limited speed at which a CRT could go from light to dark. Modern LCD, plasma and OLED displays handle this much better, but detail correction remains important to this day as a way of adding the appearance of additional sharpness to a video image. You’ll often see extreme examples of it on SD TV shows as a dark halo around objects.
The image above is of an imaginary perfect greyscale chart. Looking at it you can see that each grey bar is quite distinct from the next and the edge between the two is sharp and clear. Your computer screen should be quite capable of showing an instant switch from one grey level to the next.
Now if we add the waveform that the “perfect” greyscale would give we can see that the transition from each bar to the next is represented by a nice crisp instant step down, the transition from one bar to the next happening over a single pixel.
The image above represents what a typical video camera might reproduce if it shot the greyscale chart without any form of detail correction or sharpening. Due to the need to avoid aliasing, lens performance and other factors it is impossible to get perfect optical performance so there is some inevitable blurring of the edges between the grey bars. Note that these images are for illustration only, so I have exaggerated the effect. I would expect a good HD camera to still produce a reasonably sharp image.
Looking at the camera’s waveform you can see that the nice square edges we saw on the perfect greyscale waveform have gone, and instead the transition from bar to bar is more rounded. Now there are two things that camera manufacturers commonly do to correct or compensate for this. One is called aperture correction, which is a high frequency signal boost (I’ll explain that another time), but the one we’re going to look at in this case is called detail correction, often simply referred to as “Detail”.
So what happens in the camera? Well, the camera constantly compares the video luminance (brightness) levels of the image over a set time period. This time period is incredibly short, and in the example given here it is the time it takes for the camera’s line scan to go left to right from point A to point B. If the difference in the brightness or luminance of the two samples is greater than the threshold set for the application of detail correction (known as crispening on Sony cameras), then the detail circuit kicks in and adds a light or dark enhancement to the brightness change.
With an HD video camera the light or dark edges added by the detail correction circuit are typically only a few pixels wide. On an SD camera they are often much wider. On a Sony camera the detail frequency setting will make the edges thicker (negative value) or thinner (positive value). The Black and White limit settings will limit how bright or how dark the added correction will be and the detail level control determines just how much correction is added to the image overall.
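Here’s a much simplified 1-D sketch of that mechanism in Python — sample pairs a few pixels apart, a crispening-style threshold, a level control and black/white limits. Every value is made up for illustration; real detail circuits are considerably more sophisticated:

```python
# Much simplified 1-D detail correction (every value is illustrative).
def add_detail(line, gap=2, threshold=0.15, level=0.5,
               white_limit=0.1, black_limit=0.1):
    """line: luminance samples 0-1; returns a copy with edges enhanced."""
    out = list(line)
    for i in range(len(line) - gap):
        diff = line[i + gap] - line[i]     # compare sample A with sample B
        if abs(diff) < threshold:          # crispening: small differences
            continue                       # (noise, flat areas) are ignored
        edge = max(-black_limit, min(white_limit, diff * level))
        out[i] -= edge / 2                 # dark undershoot on one side,
        out[i + gap] += edge / 2           # bright overshoot on the other
    return [round(v, 3) for v in out]

sharp = [0.3, 0.3, 0.3, 0.7, 0.7, 0.7]    # a crisp grey-bar edge
print(add_detail(sharp))                  # halo added either side of the edge
```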
One important thing to remember is that the amount of detail correction applied to the image is dependent on differences in image luminance measured over time, so you have to consider what happens when the scene is moving or the camera pans. Two things happen when you pan the camera: the image will blur a little due to things moving through the frame while the shutter is open, and from line to line objects will be in a slightly different position.
Looking at the waveform we can see that the slope from one grey bar to the next becomes shallower due to the blur induced by the motion of the camera. If we now sample this slightly blurred image using the same timescale as before, we can see that the difference in amplitude (brightness) between the new blue samples at A and B is significantly smaller than the difference between the original red sample points.
What this means in practice is that if the difference between the A and B samples drops below the threshold set for the application of detail correction, then no correction is applied. So as you pan (or there is motion in the scene), the slight image softening due to motion blur decreases the amount of detail correction being applied, and the picture appears to noticeably soften, especially if you are using a high detail correction level.
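Continuing the toy example from above (same made-up threshold), you can see the blurred edge fall below the crispening threshold:

```python
# Same threshold test on a sharp edge and the same edge smeared by a pan
# (numbers illustrative, threshold as in the sketch above).
threshold, gap = 0.15, 2
sharp   = [0.3, 0.3, 0.3, 0.3, 0.7, 0.7, 0.7, 0.7, 0.7]
blurred = [0.3, 0.35, 0.4, 0.45, 0.5, 0.55, 0.6, 0.65, 0.7]

for name, line in (("sharp", sharp), ("blurred", blurred)):
    diffs = [line[i + gap] - line[i] for i in range(len(line) - gap)]
    fired = sum(1 for d in diffs if abs(d) >= threshold)
    print(f"{name}: detail correction triggered at {fired} positions")
# The blurred edge never exceeds the threshold, so no correction is added
# and the picture visibly softens during the pan.
```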
Detail correction is applied both to horizontal image differences, as outlined above, and to vertical differences. As the vertical sampling is taken over 2 or 3 image lines, there is a much longer time gap between the samples. So when you pan, an object may have moved far enough by the time the scan has progressed 2 more lines that it is in a different position, so the vertical detail sampling will be wrong and detail may not be applied at all.
If you are finding that you are seeing an annoying amount of image softening when you pan or move your camera then you may want to consider backing off your detail settings as this will reduce the difference between the detail “on” look and detail “off” look during the pan or movement. If this softens your images too much for your liking then you can compensate by using Aperture Correction (if your camera has this) to boost the sharpness of your image. I’ll explain sharpness in more depth in a later article.
Merry Christmas!