Dedrone releases next-gen computer vision models for lower airspace awareness

Dedrone has made its next generation of computer vision models available for broad use by its customers. The Pythagoras 1 release for lower airspace awareness and counter-UAS features improved precision on all object classes and improved recall on most object classes. This next-generation model now powers all of Dedrone’s products.

Dedrone trained its computer vision models on NVIDIA H100 GPUs using the PyTorch framework, which enabled the company to adopt a completely new neural network architecture. The company’s existing data sets were augmented both with simulated data and through active learning, which surfaces the most interesting cases for labeling.
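Dedrone does not describe how its active learning pipeline works, but the general idea is to rank unlabeled frames by how uncertain the current model is about them and send the most ambiguous ones for human annotation. The sketch below illustrates one common heuristic (distance of detection confidence from the decision boundary); all names and scores are hypothetical.

```python
# Minimal sketch of uncertainty-based active learning: from a pool of
# unlabeled frames, pick those the current model is least sure about,
# so they can be prioritized for human annotation. Illustrative only.

def frame_uncertainty(confidences):
    """Score a frame by how ambiguous its detections are.

    confidences: per-detection confidence scores in [0, 1].
    A score near 0.5 is maximally ambiguous; an empty frame scores 0.
    """
    if not confidences:
        return 0.0
    # Invert distance from the 0.5 decision boundary, so that
    # borderline detections yield high uncertainty.
    return max(1.0 - 2.0 * abs(c - 0.5) for c in confidences)

def select_for_labeling(frames, budget):
    """Return the `budget` most uncertain frames from a pool.

    frames: list of (frame_id, [confidence, ...]) tuples.
    """
    ranked = sorted(frames, key=lambda f: frame_uncertainty(f[1]),
                    reverse=True)
    return [frame_id for frame_id, _ in ranked[:budget]]

pool = [
    ("frame_001", [0.98, 0.95]),   # confident detections
    ("frame_002", [0.52]),         # borderline detection -> interesting
    ("frame_003", []),             # empty frame
    ("frame_004", [0.91, 0.47]),   # one ambiguous detection
]
print(select_for_labeling(pool, 2))  # -> ['frame_002', 'frame_004']
```

In practice the uncertainty signal might instead come from ensemble disagreement or tracker/detector mismatch, but the selection step looks the same: rank, then label the top of the queue.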

Dedrone employs data augmentation techniques using its AutoKat tool. AutoKat augments existing images by inpainting artificial objects into them, either with or without existing annotations. For this project, the company acquired various object models, including 21 helicopters, seven planes, and 11 drones (including quadcopters, fixed-wing drones, and three Group-3 drones), which can be scaled, oriented, and placed anywhere within an image. “This method allows us to create a diverse set of images, particularly useful for balancing our dataset by generating many helicopter and plane annotations,” Dedrone said in a blog post. “It also helps in addressing the object size distribution, which is crucial for advancing our models to work with 4K images.”
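AutoKat is Dedrone’s internal tool, so its implementation is not public. The core compositing idea, though, can be sketched in a few lines: paste a pre-rendered object crop into a background frame at a chosen scale and position, and emit the matching bounding-box annotation. The function and example data below are illustrative assumptions, not AutoKat’s actual API.

```python
# Hedged sketch of inpainting-style augmentation: composite an artificial
# object into a background image and generate its bounding-box label.
# Not Dedrone's actual tooling; a minimal illustration of the idea.
import numpy as np

def paste_object(background, obj, mask, top, left):
    """Composite `obj` onto `background` where `mask` is 1.

    background: HxWx3 uint8 image (a modified copy is returned).
    obj:        hxwx3 uint8 crop of the artificial object.
    mask:       hxw   {0,1} alpha mask of the object silhouette.
    Returns (augmented_image, bbox) with bbox = (x_min, y_min, x_max, y_max).
    """
    out = background.copy()
    h, w = mask.shape
    region = out[top:top + h, left:left + w]
    region[mask.astype(bool)] = obj[mask.astype(bool)]
    bbox = (left, top, left + w, top + h)
    return out, bbox

# Tiny synthetic example: a dark 4x4 "drone" pasted into a 32x32 grey sky.
sky = np.full((32, 32, 3), 180, dtype=np.uint8)
drone = np.zeros((4, 4, 3), dtype=np.uint8)
mask = np.ones((4, 4), dtype=np.uint8)
image, bbox = paste_object(sky, drone, mask, top=10, left=20)
print(bbox)  # -> (20, 10, 24, 14)
```

Because the insertion point and scale are under the augmenter’s control, the same object model can be used to generate many small, distant instances, which is what helps rebalance the object size distribution for 4K imagery.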

Pythagoras 1 has delivered an average 20% speed increase for Dedrone’s video tracker, in addition to improving accuracy and reducing both false positives and false negatives. “These improvements can be measured by improved Mean Average Precision (mAP) and Mean Average Recall (mAR),” Dedrone said. “Recall and Precision are the two key metrics used to assess detector performance. In practice, there is a trade-off between these metrics. Increasing the threshold for classifying an object in the airspace will result in fewer false positives, thus improving precision. However, this tends to allow for more false negatives in practice, so recall is now lower.”
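The trade-off Dedrone describes is easy to see numerically: raising the confidence threshold discards low-confidence detections, which tends to raise precision while lowering recall. The detection scores and labels below are made up purely to demonstrate this.

```python
# Illustration of the precision/recall trade-off: the same set of
# detections evaluated at two confidence thresholds. Data is synthetic.

def precision_recall(detections, threshold, num_ground_truth):
    """detections: list of (confidence, is_true_positive) tuples."""
    kept = [d for d in detections if d[0] >= threshold]
    tp = sum(1 for _, is_tp in kept if is_tp)
    fp = len(kept) - tp
    precision = tp / (tp + fp) if kept else 1.0
    recall = tp / num_ground_truth
    return precision, recall

dets = [(0.95, True), (0.90, True), (0.60, True), (0.55, False), (0.30, False)]
for t in (0.5, 0.8):
    p, r = precision_recall(dets, t, num_ground_truth=4)
    print(f"threshold={t}: precision={p:.2f} recall={r:.2f}")
# threshold=0.5: precision=0.75 recall=0.75
# threshold=0.8: precision=1.00 recall=0.50
```

mAP and mAR summarize this curve across all thresholds (and, in COCO-style evaluation, across IoU levels and object sizes), which is why they capture improvements on both sides of the trade-off at once.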

The company also reports a 14x improvement in average precision for extremely small drones. These improvements were observed across all object classes and at several spatial scales.

The new inference engine has reduced the time it takes the model to run on video, which in turn allows Dedrone to use more “neurons” in its neural network at a similar frames-per-second cost compared with the company’s previous model. The network can also handle more pixels of information without a large runtime hit compared with Dedrone’s previous deployment method. This change has enabled the company to quickly process and run inference on 4K video, and consequently to be ready for new airspace awareness challenges such as Drone as First Responder.

For more information: Dedrone
