Posted by Ferhat Kurtulmuş
Posted in reply to Sergey
On Tuesday, 28 February 2023 at 12:29:05 UTC, Sergey wrote:
On Tuesday, 28 February 2023 at 12:08:14 UTC, Ferhat Kurtulmuş wrote:
On Wednesday, 15 February 2023 at 17:32:33 UTC, Ferhat Kurtulmuş wrote:
I heard you are not having enough fun with D today.
We have mir.ndslice and dcv, so we should be able to run, for instance, tinyYOLOv3 on video streams. I believe such applications will attract more people's attention to D.
Here is what it looks like, along with the source code:
Great job. Could we have any comparison of performance/memory usage versus the original solution in Python?
I have not conducted any comparisons yet, and there are many factors affecting performance. My old laptop lacks good CUDA support, so I disabled CUDA acceleration. I cannot give you a strongly backed benchmark, but I can say that preprocessing is not costly in my example: it takes only 2 or 3 ms. The FPS drop is primarily due to onnxruntime itself. Newer versions of onnxruntime offer various backend options for acceleration, such as CUDA, TensorRT, and DirectML (which uses DirectX).
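To make the "preprocessing is cheap, inference dominates" point concrete, here is a minimal sketch of a typical tinyYOLOv3 preprocessing step with a timer around it. It is written in Python with NumPy rather than D (the D code from the demo isn't reproduced in this thread), the nearest-neighbor resize and the 416x416 input size are illustrative assumptions, and `preprocess` is a hypothetical helper name:

```python
import time
import numpy as np

def preprocess(frame, size=416):
    """Resize (nearest-neighbor, for illustration), scale pixel values
    to [0, 1], and reorder HWC -> NCHW as YOLO-style models expect."""
    h, w = frame.shape[:2]
    # Index maps for a nearest-neighbor resize done purely with fancy indexing.
    ys = (np.arange(size) * (h / size)).astype(np.intp)
    xs = (np.arange(size) * (w / size)).astype(np.intp)
    resized = frame[ys[:, None], xs]              # (size, size, 3)
    blob = resized.astype(np.float32) / 255.0     # normalize to [0, 1]
    blob = np.transpose(blob, (2, 0, 1))[None]    # (1, 3, size, size)
    return blob

# Fake camera frame standing in for a real video capture.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
t0 = time.perf_counter()
blob = preprocess(frame)
elapsed_ms = (time.perf_counter() - t0) * 1000.0
print(f"preprocess: {elapsed_ms:.2f} ms, output shape {blob.shape}")

# With onnxruntime installed, backend (execution provider) selection
# looks like this -- the model path is hypothetical:
# import onnxruntime as ort
# sess = ort.InferenceSession(
#     "tiny-yolov3.onnx",
#     providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
```

The commented lines at the end show how onnxruntime picks an acceleration backend: providers are tried in order, so listing `CUDAExecutionProvider` first falls back to the CPU when CUDA is unavailable, which matches the situation described above.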