Watch all Nvidia Jetson Inference videos: • HOW TO INSTALL JETPACK OS IN EXTERNAL...
In this video, we look at how to run the detectnet program that comes pre-installed with the Hello AI World #jetson inference project. The detectnet tool is very handy because it can run inference on a video file, an RTSP camera stream, or a USB webcam. In this video, we use a video file as the input source.
The first time you run detectnet, it uses the SSD-Mobilenet-v2 model by default, which was downloaded during the build process. On this first run, detectnet converts the model into a TensorRT engine file, a process that can take up to 15 minutes. Once the engine file is generated, detectnet launches a display window where you can see the inference happening. In our case, the model detected all the vehicles in the video file, so vehicle detection was working perfectly fine.
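As a rough sketch of the kind of commands used (the video filename below is a placeholder, and --network=ssd-mobilenet-v2 is the default so it can be omitted):

detectnet /path/to/your-video.mp4
detectnet --network=ssd-mobilenet-v2 /path/to/your-video.mp4 result.mp4

The second form also writes the annotated output to a file; swap the input for something like /dev/video0 for a USB webcam or an rtsp:// URL for an IP camera.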
Next in the video, we cover how to write your own #python based detectnet script using the #jetson inference Python bindings. It is a very basic Python script that runs inference on your source file. In our case, it detected all the vehicles in the video file, so vehicle detection was working perfectly fine from the Python script as well.
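The script in the video follows the standard detectNet / videoSource / videoOutput pattern from the jetson-inference Python bindings; a minimal sketch is below (the video path and detection threshold are placeholders, adjust them for your setup):

import jetson.inference
import jetson.utils

# load the detection network (ssd-mobilenet-v2 is the default model)
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

# open the input video file and an on-screen display for the results
input = jetson.utils.videoSource("/path/to/your-video.mp4")
output = jetson.utils.videoOutput("display://0")

while True:
    img = input.Capture()              # grab the next frame
    detections = net.Detect(img)       # run inference and overlay the boxes
    output.Render(img)                 # show the annotated frame
    output.SetStatus("detectNet | {:.0f} FPS".format(net.GetNetworkFPS()))
    if not input.IsStreaming() or not output.IsStreaming():
        break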
For more information, please visit the original repository: https://github.com/dusty-nv/jetson-in...
For any issues please contact [email protected]