The question is: can you see a difference between a camera with a 10 bit output and one with an 8 bit output? This is being asked a lot right now, particularly in relation to the Sony FS100 and the Sony F3. The FS100 has an 8 bit output while the F3 is 10 bit.
If you're looking at the raw camera output then you will find it just about impossible to see a difference with normal monitoring equipment. This is because internally the cameras process the images using more than 8 bits (probably at least 10 on the FS100, the EX3 is 12 bit) and then convert to 8 or 10 bit for output, so you should have a nice smooth mapping of gradations across the full 8 bit output.

Then consider that most LCD monitors are not able to display even 8 bits. The vast majority of monitors have a 6 bit panel, and even the rarer 8 bit monitors won't display a true 8 bits, because the gamma correction is done within those same 8 bits, which leaves fewer than 8 bits of usable display levels. 10 bit monitors are very rare, and again, as gamma correction is normally required there is rarely a 1:1 bit for bit mapping of the 10 bit signal, so even these don't show the full 10 bits of the input signal. So it becomes apparent that when you view the original material the differences will not normally be visible, and often the only way to determine what the output signal actually contains is with a data analyser that can decode the HD-SDI stream and tell you whether the 2 extra bits hold useful image data or are just padding.
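If you want to see why the difference hides on playback, here is a minimal sketch in Python/NumPy (my own illustration, not tied to any particular camera): quantise the same smooth ramp to 8 and 10 bits, then push both, ungraded, through an 8 bit display path and compare.

```python
import numpy as np

# An ideal smooth luminance ramp from 0 to 1, finely sampled.
ramp = np.linspace(0.0, 1.0, 10000)

# The same scene quantised to 8 bit and 10 bit code values.
as_8bit = np.round(ramp * 255).astype(int)
as_10bit = np.round(ramp * 1023).astype(int)

print(len(np.unique(as_8bit)))   # 256 levels in the 8 bit signal
print(len(np.unique(as_10bit)))  # 1024 levels in the 10 bit signal

# Now send both, with no grading in between, to a display path
# that can only resolve 8 bits.
disp_from_8 = as_8bit
disp_from_10 = np.round(as_10bit / 1023 * 255).astype(int)

# The two displayed ramps differ by at most one code value here and there,
# which is why raw 8 bit and 10 bit output look the same on normal monitors.
print(np.abs(disp_from_8 - disp_from_10).max())  # 0 or 1
```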
Where the 8 bit / 10 bit difference does become apparent is after grading and post production. I wrote a more in depth article here: Why rendering from 8 bit to 8 bit can be a bad thing to do. But basically, when you start manipulating an 8 bit image you will see banding issues a lot sooner than with 10 bit, due to the reduced number of luma/color shades in 8 bit (256 levels per channel rather than 1024). Stretch out or compress an 8 bit image and some of those shades get removed or shifted, and when the number of steps/shades is borderline to start with, throwing more away will give you visible problems.
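To put some rough numbers on that, here is a similar NumPy sketch (again just my own illustration, with a made-up contrast stretch standing in for a grade): apply the same adjustment to an 8 bit and a 10 bit version of the ramp, deliver both to 8 bit, and count how many of the 256 output levels survive.

```python
import numpy as np

ramp = np.linspace(0.0, 1.0, 10000)

def grade(x):
    # A made-up grade for illustration: stretch the mid tones,
    # mapping 0.25..0.75 out to the full 0..1 range.
    return np.clip((x - 0.25) * 2.0, 0.0, 1.0)

# Grade an 8 bit original, deliver to 8 bit.
src8 = np.round(ramp * 255) / 255
graded_from_8 = np.round(grade(src8) * 255).astype(int)

# Grade a 10 bit original, then deliver to 8 bit.
src10 = np.round(ramp * 1023) / 1023
graded_from_10 = np.round(grade(src10) * 255).astype(int)

print(len(np.unique(graded_from_8)))   # ~129: output codes now jump in steps of 2, i.e. banding
print(len(np.unique(graded_from_10)))  # 256: the delivered 8 bit range is still fully populated
```

The exact counts depend on the grade you apply, but the pattern is the point: stretch an 8 bit image and gaps open up between the remaining code values, while the same grade done from a 10 bit source still has enough shades to fill the final 8 bit delivery.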