Which part of the capture pipeline is the Chunk Data Timestamp based on?
Hello,
We're performing latency testing on a BlackFly-S BFS-U3-16S2M. Our glass-to-glass testing shows a delay of about 80 milliseconds that is unaccounted for, and we're trying to identify where it comes from. We measure a total latency of about 120 milliseconds, but we have timing measurements from the hardware trigger to the image-rendering call and subtracted those from the 120 milliseconds. We can observe the delay visually with a blinking LED, both in the SpinView software and in our own implementation.

One thing we want to check is the time the image was captured by the camera versus the time the image was received, to see whether the delay occurs between capture and the image becoming available on the host computer. I saw that we can get a timestamp associated with an image through the chunk data, but I couldn't find any information on where in the capture pipeline that time is taken. Is it before exposure, right after exposure, during transmission of the image, once the image is already on the host computer, etc.? Are there any other camera-side timestamps associated with a captured image besides the one in the chunk data?

Also, if you have information on the observed latency, can you suggest any possible cause or solution? We did set the Stream Buffer Handling Mode to NewestOnly, but it had no observable effect.
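For concreteness, here is the budget implied by the numbers above (a sketch; the 40 ms figure is derived from the stated totals, not measured directly):

```python
# Latency budget implied by the numbers above (milliseconds).
glass_to_glass = 120.0                        # total, measured via the blinking LED
unaccounted = 80.0                            # delay with no identified source
instrumented = glass_to_glass - unaccounted   # hardware trigger to render call
print(instrumented)  # → 40.0
```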
Thank you,
Raymond
-
Official comment
Hello Raymond,
If you are seeing such long latency, it is most likely on the driver/software side: moving the buffer into user space, converting it to BGRU, and then displaying it.
The chunk data timestamp is the camera timestamp at the end of exposure. Note, though, that this is not on the same clock as the PC timestamp, so to see the "latency" I would read TimestampLatchValue right after you get your image to obtain the current camera time; that gives you an approximate delay from end of exposure to having the data.
More details on the TimestampLatchValue can be found in our manual at Device Control - BFS-U3-16S2 Version 1707.0.125.0 (flir.com).
If you are using NewestOnly and you are sitting in a blocking grab call, you should get your image as soon as it is complete in memory. I suggest reading the timestamp at that point to verify how much of the delay occurs after the image grab.
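A minimal sketch of that check, assuming both timestamps are in nanoseconds on the camera's clock (on GenICam cameras you typically execute the TimestampLatch command and then read TimestampLatchValue; the sketch below only shows the subtraction, with hypothetical values):

```python
def camera_side_latency_ms(chunk_timestamp_ns, latched_timestamp_ns):
    """Approximate delay from end of exposure to receiving the image.

    chunk_timestamp_ns   -- timestamp from the image's chunk data
    latched_timestamp_ns -- TimestampLatchValue read right after the grab
    Both values are on the camera's clock, so no camera/PC clock offset
    is involved in the difference.
    """
    return (latched_timestamp_ns - chunk_timestamp_ns) / 1e6

# Hypothetical values: exposure ended at 5.000 s, latch read at 5.012 s.
print(camera_side_latency_ms(5_000_000_000, 5_012_000_000))  # → 12.0
```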
Thank you,
Demos
-
Hello Raymond. I am just following up to see if the above feedback helped, or if you still have open questions about any unaccounted latency in your testing.
Thanks,
Demos
-
Hello Demos,
Yes, it did. Our measurements match your description. We believe the 80-millisecond delay we are seeing comes from the rendering side. Our setup runs Linux, and we use X11 and OpenGL to display the images. We have performance measurements on the OpenGL calls, but it's hard to measure how long an image takes to appear on screen after swapping buffers. Please let us know if you have any suggestions for lowering display latency.
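One way to bound the present time is to force a GPU sync right after the swap and time the pair; a sketch of the pattern, where `swap_buffers` and `gl_finish` stand in for the real glXSwapBuffers and glFinish calls (this still excludes the monitor's own scanout and response time):

```python
import time

def timed_swap(swap_buffers, gl_finish):
    """Time one present: swap, then block until the GPU has finished.

    swap_buffers -- placeholder for the real glXSwapBuffers call
    gl_finish    -- placeholder for the real glFinish call
    glFinish blocks until all queued GL commands have executed, so the
    elapsed time bounds how long the frame took to be presented.
    Returns elapsed milliseconds.
    """
    start = time.monotonic()
    swap_buffers()
    gl_finish()
    return (time.monotonic() - start) * 1000.0

# Stand-ins so the sketch runs without a GL context:
elapsed_ms = timed_swap(lambda: None, lambda: None)
print(elapsed_ms >= 0.0)  # → True
```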
Thank you,
Raymond