Enhancing Object Detection in Distorted Environments: A Seamless Integration of Classical Image Processing with Deep Learning Models
Abstract
Computer vision tasks, and object detection in particular, are directly affected by the conditions under which images are acquired, and these conditions are often beyond our control. In this paper, we introduce a method that integrates seamlessly with any deep-learning-based computer vision model to improve its performance in distorted environments. Our method effectively mitigates the effects of various types of image distortion. It relies on classical image processing techniques that reduce distortion and enhance image quality in a general manner, without requiring knowledge of the specific distortion applied. The method is straightforward to integrate into any model at the preprocessing stage. In addition, we add new layers that analyze the enhanced image in a depthwise manner, running in parallel with the model backbone. We evaluated the method on the object detection task using the well-known You Only Look Once (YOLO) model, and the results show a significant improvement in mean Average Precision (mAP). The implementation code is available at: https://github.com/abbass-zain-eddine/Object-detectionunder-uncontrolled-acquisition-environment.git
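To illustrate the kind of distortion-agnostic preprocessing the abstract refers to, the following is a minimal sketch of a classical enhancement stage that could be placed in front of a detector. It is not the paper's actual pipeline: the function names (`denoise`, `equalize`, `enhance`) and the choice of a separable Gaussian blur followed by global histogram equalization are illustrative assumptions, chosen only because they reduce noise and restore contrast without knowing which distortion was applied.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 1-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    k = np.exp(-ax**2 / (2 * sigma**2))
    return k / k.sum()

def denoise(img, sigma=1.0):
    # separable Gaussian blur as a simple, distortion-agnostic noise reducer
    k = gaussian_kernel(5, sigma)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

def equalize(img):
    # global histogram equalization to restore contrast
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min()) * 255.0
    return cdf[img.astype(np.uint8)]

def enhance(img):
    # full preprocessing stage: denoise, then equalize (hypothetical composition)
    return equalize(denoise(img.astype(np.float64))).astype(np.uint8)
```

In a setup like this, `enhance` would be applied to each frame before it is fed to the detector, so the downstream model (e.g., YOLO) is unchanged and the enhancement stage can be swapped or tuned independently.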