Conversion of 16-bit depth to 12-bit depth
Hello,
I'm trying to measure image noise using a BFS-PGE-50S5M-C, but I'm getting noise lower than shot noise.
My current setup is as follows:
Format: Mono16
Size: 2448 x 2048
Gain: 0 dB / exposure time: varied (100 ms to 500 ms) / Gamma disabled / LUT disabled / Target Gray Value disabled
File format: .tiff / uncompressed
The reason I'm using Mono16 rather than Mono12Packed, even though our ADC is 12-bit, is the following:
1. According to the documentation, we can compensate by shifting right by 4 bits (>>4).
2. We can't actually acquire Mono12Packed: whenever we saved as Mono12 (using example code with modifications and the GUI), we got 8-bit files.
The way I calculated the noise: I loaded the samples into NumPy and cropped an ROI, then computed the mean signal and the noise over those samples.
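In case it helps, here is a simplified sketch of that calculation (tifffile and the file names below are placeholders, not my exact script):

```python
import numpy as np
import tifffile  # assumed TIFF reader; my actual loader may differ

# Load a stack of uncompressed Mono16 frames (file names are placeholders).
frames = np.stack([tifffile.imread(f"frame_{i:03d}.tiff") for i in range(50)])

# Undo the 12-bit -> 16-bit padding.
frames = frames >> 4

# Crop a flat region of interest.
roi = frames[:, 900:1100, 1100:1300].astype(np.float64)

signal = roi.mean()             # mean signal (DN)
noise = roi.std(axis=0).mean()  # per-pixel temporal std, averaged over the ROI (DN)
print(f"signal = {signal:.1f} DN, noise = {noise:.2f} DN")
```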
As I said earlier, I keep getting noise lower than shot noise, which should be impossible. One suspicious part is how Mono16 upscales the 12-bit ADC output.
When I saturate the image, shift right by 4 bits, and check the raw value, I get 4094 rather than 4096. Even though the document says the 12-bit to 16-bit conversion simply adds padding, is it possible that some information is left in the padding bits?
(I also noticed that when we read the values without bit shifting, they do not end in '0000'.)
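This is roughly how I checked the padding bits (again, the reader and file name are placeholders):

```python
import numpy as np
import tifffile  # assumed reader, as above

raw = tifffile.imread("saturated_frame.tiff")  # hypothetical saturated Mono16 frame

# If the conversion were pure zero padding, the low 4 bits would all be 0.
low_nibbles = raw & 0xF
print(np.unique(low_nibbles))     # I see nonzero values here
print(raw.max(), raw.max() >> 4)  # 16-bit max and its 12-bit equivalent
```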
Thanks,
Ohik
-
Hello Ohik,
How exactly do you determine that the noise is lower than shot noise?
Bit-shifting the 16-bit values is fine. You can ignore the 4 LSBs, even if they are not 0.
Could you please make sure your black level setting is > 0 and that you are not clipping the signal at 0? Clipping at 0 cuts off the lower tail of the noise distribution and shrinks the measured standard deviation.
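To illustrate why this matters, here is a toy NumPy sketch (the mean and sigma values are made up, not taken from your camera):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical near-zero dark signal: true mean 2 DN, noise sigma 3 DN.
true_mean, sigma = 2.0, 3.0
signal = rng.normal(true_mean, sigma, size=1_000_000)

# With black level = 0, negative excursions are clipped by the camera.
clipped = np.clip(signal, 0, None)

print(f"true sigma:    {signal.std():.2f}")   # ~3.00
print(f"clipped sigma: {clipped.std():.2f}")  # noticeably smaller
```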
Best regards,
Manuel