Object detection in adverse weather

Posted by Hobgoblin11 on Tue, 01 Mar 2022 06:47:08 +0100

Reference: "Object detection in bad weather", Cloud+ Community, Tencent Cloud

1. Data-based approaches

(1) Benchmarking Robustness in Object Detection: Autonomous Driving when Winter is Coming

The ability to detect objects regardless of image distortions or weather conditions is crucial for real-world applications of deep learning such as autonomous driving. Here we provide an easy-to-use benchmark to assess how object detection models perform when image quality degrades. The three resulting benchmark datasets, termed PASCAL-C, COCO-C and Cityscapes-C, contain a large variety of image corruptions. We show that a range of standard object detection models suffers severe performance losses on corrupted images (down to 30-60% of the original performance). However, a simple data augmentation technique, stylizing the training images, leads to a substantial increase in robustness across corruption types, severities and datasets. We envision our comprehensive benchmark to track future progress toward building robust object detection models. Benchmark, code and data are publicly available.
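
As a flavor of how such a robustness benchmark is scored, the sketch below corrupts validation images with a crude fog approximation and computes the relative performance under corruption (mean corrupted mAP divided by clean mAP). The evaluate_map callable and the fog model are illustrative assumptions, not the benchmark's official implementation.

import numpy as np

def fog_corruption(img: np.ndarray, severity: int) -> np.ndarray:
    """Blend the image toward a white airlight; severity in 1..5.
    A crude stand-in for the benchmark's fog corruption."""
    t = 1.0 - 0.15 * severity  # global "transmission" shrinks with severity
    return (img.astype(np.float32) * t + 255.0 * (1.0 - t)).clip(0, 255).astype(np.uint8)

def relative_performance_under_corruption(evaluate_map, images, labels):
    """rPC = (mean mAP across severities) / (clean mAP).
    evaluate_map(images, labels) -> float is a hypothetical hook
    around your detector's evaluation loop."""
    clean = evaluate_map(images, labels)
    corrupted = [evaluate_map([fog_corruption(im, s) for im in images], labels)
                 for s in range(1, 6)]
    return float(np.mean(corrupted)) / clean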

@article{michaelis2019benchmarking,
  title={Benchmarking robustness in object detection: Autonomous driving when winter is coming},
  author={Michaelis, Claudio and Mitzkus, Benjamin and Geirhos, Robert and Rusak, Evgenia and Bringmann, Oliver and Ecker, Alexander S and Bethge, Matthias and Brendel, Wieland},
  journal={arXiv preprint arXiv:1907.07484},
  year={2019}
}

(2) A Deep Learning-based Radar and Camera Sensor Fusion Architecture for Object Detection

Object detection in camera images using deep learning has proven successful in recent years. Ever-increasing detection rates and computationally efficient network structures are pushing this technique toward deployment in production vehicles. However, the sensor quality of cameras is limited in bad weather conditions, and sensor noise increases in sparsely lit areas and at night. Our approach enhances current 2D object detection networks by fusing camera data and projected, sparse radar data at the network layers. The proposed CameraRadarFusionNet (CRF-Net) automatically learns at which network level the fusion of the sensor data is most beneficial for the detection result. Additionally, we introduce BlackIn, a training strategy inspired by Dropout that focuses the learning on a specific sensor type. We show that the fusion network outperforms existing image-only networks on two different datasets. The code for this research will be made available to the public at: https://github.com/TUMFTM/CameraRadarFusionNet
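
The BlackIn idea is easy to picture: during training, the camera input of some samples is blanked so the network cannot rely on it alone. Below is a minimal sketch, assuming a per-sample drop probability (the exact value and granularity are assumptions, not the paper's settings).

import torch

def blackin(camera_batch: torch.Tensor, p: float = 0.2, training: bool = True) -> torch.Tensor:
    """Zero out the camera input of each sample with probability p,
    pushing the fusion network to exploit the radar branch."""
    if not training or p <= 0.0:
        return camera_batch
    keep = (torch.rand(camera_batch.shape[0], device=camera_batch.device) >= p).float()
    return camera_batch * keep.view(-1, 1, 1, 1)  # broadcast over (C, H, W)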

@inproceedings{nobis2019deep,
  title={A deep learning-based radar and camera sensor fusion architecture for object detection},
  author={Nobis, Felix and Geisslinger, Maximilian and Weber, Markus and Betz, Johannes and Lienkamp, Markus},
  booktitle={2019 Sensor Data Fusion: Trends, Solutions, Applications (SDF)},
  pages={1--7},
  year={2019},
  organization={IEEE}
}

(3) Raindrops on the Windshield: Performance Assessment of Camera-based Object Detection

A camera system captures images of the surrounding environment and processes this data to detect and classify objects. A particular category of failure causes is adverse weather. Here it is well known that rain, for example, degrades sensor performance through absorption and scattering by falling raindrops. Moreover, object detection with in-vehicle cameras is affected by raindrops on the windshield, which disturb the view of the outside scene. To counteract this effect, windshield wipers are installed to restore the driver's visual information, which also benefits the image sensor. This paper studies image quality over the course of a wiping cycle and the corresponding correctness of object detection in rain. The results show that as raindrops accumulate on the windshield, both image quality and object detection performance decline. Furthermore, a trade-off between performance and time is revealed. Consequently, adaptive, weighted data fusion should be implemented so that decisions are based on the most reliable frames of sensor data.

@inproceedings{hasirlioglu2019raindrops,
  title={Raindrops on the windshield: Performance assessment of camera-based object detection},
  author={Hasirlioglu, Sinan and Reway, Fabio and Klingenberg, Tim and Riener, Andreas and Huber, Werner},
  booktitle={2019 IEEE International Conference on Vehicular Electronics and Safety (ICVES)},
  pages={1--7},
  year={2019},
  organization={IEEE}
}

(4) Vehicle Detection and Width Estimation in Rain by Fusing Radar and Vision

While much effort has gone into deep-learning-based object detection, object detection in adverse weather (such as rain, snow, or haze) has received comparatively little attention. In heavy rain, raindrops on the windshield make it difficult for an in-car camera to detect objects. The traditional remedy is to use radar as the primary detection sensor. However, radar is prone to false positives. Moreover, many entry-level radar sensors return only the centroid of each detected object, not its size or extent. Radar also lacks texture input and therefore cannot distinguish vehicles from non-vehicle targets such as roadside poles. This motivates us to detect vehicles by fusing radar and vision. In this paper, we first calibrate the radar and the camera with respect to the ground plane. Radar detections are then projected onto the camera image for object width estimation. Empirical evaluation on a large database shows a natural synergy between the two sensors: accurate radar detections greatly facilitate the image-based estimation.
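
The projection step described here is standard pinhole geometry: transform the radar detection into the camera frame with the calibrated extrinsics, then project with the intrinsics. A minimal sketch follows; the matrices below are placeholders, not calibration values from the paper.

import numpy as np

def project_radar_to_image(point_radar, K, R, t):
    """Map a 3D radar detection into pixel coordinates:
    camera frame p_cam = R @ p_radar + t, then pinhole projection with K."""
    p_cam = R @ np.asarray(point_radar, dtype=float) + t
    u, v, w = K @ p_cam
    return np.array([u / w, v / w])  # homogeneous -> pixel coordinates

# Example with placeholder calibration: identity extrinsics, simple intrinsics.
K = np.array([[800.0, 0.0, 640.0],
              [0.0, 800.0, 360.0],
              [0.0, 0.0, 1.0]])
pixel = project_radar_to_image([2.0, 0.5, 20.0], K, np.eye(3), np.zeros(3))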

@inproceedings{wang2018vehicle,
  title={Vehicle detection and width estimation in rain by fusing radar and vision},
  author={Wang, Jian-Gang and Chen, Simon Jian and Zhou, Lu-Bing and Wan, Kong-Wah and Yau, Wei-Yun},
  booktitle={2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV)},
  pages={1063--1068},
  year={2018},
  organization={IEEE}
}

(5) Seeing Through Fog Without Seeing Fog: Deep Multimodal Sensor Fusion in Unseen Adverse Weather

The fusion of multimodal sensor streams, such as camera, lidar, and radar measurements, plays a critical role in object detection for autonomous vehicles, which base their decision making on these inputs. While existing methods exploit redundant information under good environmental conditions, they fail in adverse weather, where the sensory streams can be distorted asymmetrically. These rare "edge-case" scenarios are not represented in available datasets, and existing fusion architectures are not designed to handle them. To address this challenge, we present a novel multimodal dataset acquired over more than 10,000 km of driving in northern Europe. Although it is the first large multimodal dataset in adverse weather, with 100k labels for lidar, camera, radar, and gated near-infrared sensors, it does not facilitate training because extreme weather is rare. To this end, we propose a deep fusion network for robust fusion that does not require a large corpus of labeled training data covering all asymmetric distortions. Departing from proposal-level fusion, we propose a single-shot model that adaptively fuses features, driven by measurement entropy. We validate the method, trained on clean data, on our extensive validation dataset. Code and data are available here: https://github.com/princeton-computational-imaging/SeeingThroughFog
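
Entropy-driven fusion can be pictured as weighting each sensor's features by how informative its measurements are, so a fog-washed camera stream contributes less than a clear radar return. The paper computes local entropy maps per stream; the sketch below collapses that to one scalar weight per stream per sample, purely for illustration.

import torch
import torch.nn.functional as F

def entropy_weighted_fusion(features):
    """Fuse a list of per-sensor feature maps (each [B, C, H, W]), weighting
    each stream by the softmaxed entropy of its activation distribution:
    degraded, low-information streams receive smaller weights."""
    entropies = []
    for f in features:
        p = F.softmax(f.flatten(1), dim=1)              # per-sample distribution
        entropies.append(-(p * (p + 1e-8).log()).sum(dim=1))
    w = F.softmax(torch.stack(entropies, dim=1), dim=1)  # [B, num_sensors]
    return sum(w[:, i].view(-1, 1, 1, 1) * f for i, f in enumerate(features))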

@inproceedings{bijelic2020seeing,
  title={Seeing through fog without seeing fog: Deep multimodal sensor fusion in unseen adverse weather},
  author={Bijelic, Mario and Gruber, Tobias and Mannan, Fahim and Kraus, Florian and Ritter, Werner and Dietmayer, Klaus and Heide, Felix},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={11682--11692},
  year={2020}
}

(6) Data Augmentation for Improving SSD Performance in Rainy Weather Conditions

This paper focuses on improving the performance of the SSD detector in rainy weather using data augmentation. The augmentation is applied by generating synthetic raindrops over a set of clean images to enlarge the training dataset. Experimental results show that the SSD model trained on the proposed augmented synthetic dataset improves performance by 16.82%.
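
A rough version of such raindrop synthesis is overlaying small translucent bright blobs onto clean images. Real pipelines model drop optics more carefully, so the following is only a toy sketch with assumed parameters.

import numpy as np

def add_synthetic_raindrops(img, n_drops=40, rng=None):
    """Overlay simple translucent blobs as a crude raindrop model on an
    HxWx3 uint8 image; drop count, sizes, and opacity are assumptions."""
    rng = np.random.default_rng(rng)
    out = img.astype(np.float32).copy()
    h, w = img.shape[:2]
    for _ in range(n_drops):
        cx, cy = rng.integers(0, w), rng.integers(0, h)
        r = int(rng.integers(2, 6))
        alpha = rng.uniform(0.2, 0.5)
        y0, y1 = max(cy - r, 0), min(cy + r, h)
        x0, x1 = max(cx - r, 0), min(cx + r, w)
        out[y0:y1, x0:x1] = (1 - alpha) * out[y0:y1, x0:x1] + alpha * 230.0
    return out.clip(0, 255).astype(np.uint8)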

@inproceedings{hoang2020data,
  title={Data Augmentation for Improving SSD Performance in Rainy Weather Conditions},
  author={Hoang, Quoc-Viet and Le, Trung-Hieu and Huang, Shih-Chia},
  booktitle={2020 IEEE International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan)},
  pages={1--2},
  year={2020},
  organization={IEEE}
}

(7) Accelerated Fog Removal From Real Images for Car Detection

Image defogging improves the visual quality of images for computer vision applications such as object detection and tracking. As part of an effort to count vehicles with existing street cameras for traffic management, an accelerated image enhancement technique for vehicle detection is proposed. Two aspects of vehicle detection are addressed: 1) an existing image defogging technique is accelerated by replacing its time-consuming image filter with a faster one while keeping image degradation negligible; 2) a fast, practical algorithm for detecting vehicles in defogged images is proposed and applied to a database of about 100 vehicle images. Besides vehicle detection accuracy, acceleration is the main goal of this study. The improved fog removal estimates the transmission map with the proposed adaptive filter (PAF) to restore the scene depth of the foggy image. After filtering, a simple, accurate, and effective vehicle detection algorithm is run to confirm whether vehicles are present in the processed image. The system is quite robust: although all images were obtained from existing sources, the proposed algorithm is expected to perform equally well on any side-view image of a vehicle under dense fog and real-world conditions.
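
The transmission-map step follows the standard atmospheric scattering model I = J*t + A*(1 - t). Once t has been estimated (the paper uses its proposed adaptive filter for that), the scene is recovered by inverting the model; here is a minimal sketch with an assumed global airlight value.

import numpy as np

def recover_scene(I, t, A=230.0, t0=0.1):
    """Invert I = J*t + A*(1-t) to get J = (I - A) / max(t, t0) + A.
    t is a per-pixel transmission map in [0, 1]; clamping by t0 avoids
    amplifying noise where the fog is densest. A (airlight) is assumed."""
    I = I.astype(np.float32)
    t = np.maximum(t, t0)[..., None]  # broadcast over the color channels
    return ((I - A) / t + A).clip(0, 255).astype(np.uint8)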

@inproceedings{younis2017accelerated,
  title={Accelerated fog removal from real images for car detection},
  author={Younis, Rawan and Bastaki, Nabil},
  booktitle={2017 9th IEEE-GCC Conference and Exhibition (GCCCE)},
  pages={1--6},
  year={2017},
  organization={IEEE}
}

(8) Object Detection in Fog Degraded Images

Image processing extracts valuable data from a given image for different purposes, such as improving visualization or measuring structures and features from the extracted data. High-quality images and videos are easy to predict and classify, while detecting objects in hazy or blurred images is a troublesome problem. This paper proposes an effective method that combines several image processing techniques, such as the discrete wavelet transform and a convolutional neural network, to preprocess and defog images. The proposed technique considerably improves the relevant standard performance metrics, such as PSNR, MSE, and IIE.
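
For reference, two of the quality metrics named here are straightforward to compute between a defogged image and its clean reference (IIE is less standardized, so it is omitted from this sketch).

import numpy as np

def mse(a: np.ndarray, b: np.ndarray) -> float:
    """Mean squared error between two images of equal shape."""
    return float(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2))

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(a, b)
    return float('inf') if m == 0.0 else 10.0 * np.log10(peak ** 2 / m)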

@article{singh2018object,
  title={Object Detection in Fog Degraded Images},
  author={Singh, Gurveer and Singh, Ashima},
  journal={International Journal of Computer Science and Information Security (IJCSIS)},
  volume={16},
  number={8},
  year={2018}
}

2. Approaches based on domain adaptation

(1) Prior-based Domain Adaptive Object Detection for Hazy and Rainy Conditions

Adverse weather conditions such as haze and rain corrupt the quality of captured images, causing detection networks trained on clean images to perform poorly. To address this issue, we propose an unsupervised, prior-based domain adversarial object detection framework for adapting detectors to hazy and rainy conditions. Specifically, we define a novel prior-adversarial loss that uses weather-specific prior knowledge obtained from image formation principles. This loss, used to train the adaptation process, aims to reduce weather-specific information in the features, thereby mitigating the effect of weather on detection performance. Additionally, we introduce a set of residual feature recovery blocks into the object detection pipeline to de-distort the feature space, yielding further improvements. Evaluations under different conditions (such as haze and rain) on different datasets (Foggy-Cityscapes, Rainy-Cityscapes, RTTS, and UFDD) demonstrate the effectiveness of the proposed approach.
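
One way to picture the prior-adversarial idea: a small head tries to predict the weather-specific prior (for haze, a transmission map) from detector features, while the backbone is trained against it so its features stop encoding the weather. The sketch below only fixes the shapes; the exact prior and the min-max scheme (gradient reversal vs. alternating updates) follow the paper, not this code.

import torch.nn as nn
import torch.nn.functional as F

class PriorEstimationHead(nn.Module):
    """Predicts a 1-channel weather prior (e.g., haze transmission) from
    backbone features. The head minimizes the loss below; the backbone is
    updated to maximize it, draining weather cues from its features."""
    def __init__(self, in_channels: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, feats):
        return self.net(feats)

def prior_adversarial_loss(pred_prior, target_prior):
    # target_prior comes from image-formation principles (e.g., an
    # estimated transmission map resized to the feature resolution).
    return F.mse_loss(pred_prior, target_prior)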

@inproceedings{sindagi2020prior,
  title={Prior-based domain adaptive object detection for hazy and rainy conditions},
  author={Sindagi, Vishwanath A and Oza, Poojan and Yasarla, Rajeev and Patel, Vishal M},
  booktitle={European Conference on Computer Vision},
  pages={763--780},
  year={2020},
  organization={Springer}
}

(2) Cross-domain Learning Using Optimized Pseudo Labels: Towards Adaptive Car Detection in Different Weather Conditions and Urban Cities

Object detection based on convolutional neural networks (CNNs) usually assumes that training data and test data share the same distribution, which does not always hold in practical applications. In autonomous driving, the driving scene (target domain) consists of unconstrained road environments that were never observed in the training data (source domain), leading to a sharp drop in detector accuracy. This paper proposes a pseudo-label-based domain adaptation framework to address this domain shift. First, pseudo labels for target-domain images are generated by a baseline detector (BD) and optimized by a data optimization module to correct errors; hard samples in individual images are then labeled according to the optimized pseudo labels. An adaptive sampling module samples target-domain data according to the number of hard samples per image to select more effective data. Finally, an improved knowledge distillation loss is applied in the retraining module, and two ways of assigning soft labels to target-domain training examples are studied for retraining the detector. We evaluate the mean average precision of the method on different source/target domain pairs and verify that the framework improves on the BD average precision by more than 10% in multi-domain adaptation scenarios on the Cityscapes, KITTI, and Apollo datasets.
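
The pseudo-label optimization step can be sketched as confidence-based splitting: high-confidence detections become pseudo labels, mid-confidence ones become the "hard samples" that drive adaptive sampling. The thresholds and the soft-label form below are assumptions for illustration only.

def split_pseudo_labels(detections, hi=0.8, lo=0.3):
    """detections: iterable of (box, score, cls) from the baseline detector
    on a target-domain image. Returns confident pseudo labels (one-hot)
    and uncertain hard samples (soft labels keep the detector score)."""
    pseudo, hard = [], []
    for box, score, cls in detections:
        if score >= hi:
            pseudo.append((box, cls, 1.0))
        elif score >= lo:
            hard.append((box, cls, score))
    return pseudo, hard

# Example: two confident cars and one uncertain detection.
dets = [((10, 20, 50, 80), 0.92, 'car'),
        ((60, 25, 95, 70), 0.85, 'car'),
        ((5, 5, 15, 18), 0.45, 'pole')]
pseudo, hard = split_pseudo_labels(dets)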

@article{zhang2021cross,
  title={Cross-domain Learning Using Optimized Pseudo Labels: Towards Adaptive Car Detection in Different Weather Conditions and Urban Cities},
  author={Zhang, Lianhua and Xia, Qin and Pu, Liang and Chen, Junlan and others},
  year={2021}
}

3. Approaches that modify the detector architecture

(1) DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions

Object detection methods based on convolutional neural networks have been widely studied over the past five years and applied successfully to many computer vision applications. However, detecting objects in adverse weather remains a major challenge because of low visibility. In this paper, we introduce a novel dual-subnet network (DSNet) for object detection in fog. The network can be trained end-to-end and jointly learns three tasks: visibility enhancement, object classification, and object localization. DSNet attains its full performance improvement by including two subnetworks: a detection subnet and a restoration subnet. We employ RetinaNet as the backbone network (also serving as the detection subnet), which is responsible for learning to classify and locate objects. The restoration subnet is designed by sharing feature extraction layers with the detection subnet and using a feature recovery (FR) module to enhance visibility. Experimental results show that our DSNet reaches 50.84% mean average precision on a synthetic foggy dataset and 41.91% on a public natural foggy dataset, outperforming many combinations of state-of-the-art object detectors and dehazing-plus-detection models while maintaining high speed.
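
Structurally, the dual-subnet idea is a shared feature extractor feeding both a detection head and a restoration head, trained jointly. The schematic below uses placeholder layer sizes, not the paper's RetinaNet backbone or FR module.

import torch.nn as nn

class DualSubnet(nn.Module):
    """DSNet-style schematic: one shared backbone, two heads. The total
    loss would be det_loss + lambda * restoration_loss (lambda assumed)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())
        self.detect_head = nn.Conv2d(32, 5, 1)               # toy cls+box outputs
        self.restore_head = nn.Conv2d(32, 3, 3, padding=1)   # toy restored image

    def forward(self, x):
        feats = self.backbone(x)  # shared features serve both tasks
        return self.detect_head(feats), self.restore_head(feats)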

@article{huang2020dsnet,
  title={DSNet: Joint Semantic Learning for Object Detection in Inclement Weather Conditions},
  author={Huang, Shih-Chia and Le, Trung-Hieu and Jaw, Da-Wei},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2020},
  doi={10.1109/TPAMI.2020.2977911}
}

(2) A Real-Time Vehicle Detection System under Various Bad Weather Conditions Based on a Deep Learning Model without Retraining

Many vehicle detection methods have been proposed to obtain reliable traffic data for developing intelligent transportation systems. Most perform well in common scenarios such as sunny or cloudy days, but detection accuracy drops dramatically under adverse weather conditions such as rainy days or days with glare (typically around sunset). This study proposes a vehicle detection system with a visibility complementation module that improves detection accuracy under various bad weather conditions. Moreover, the proposed system works without retraining the deep learning object detection model for different weather conditions. Visibility complementation is achieved with the dark channel prior and a convolutional encoder-decoder network with double residual blocks, which handle the differing effects of the various bad weather conditions. Using the YOLO deep learning model to detect vehicles, we verified the system on several surveillance videos and demonstrated that its average computation time reaches 30 frames per second; moreover, accuracy improves by nearly 5% in low-contrast scenes and by as much as 50% in rainy scenes. The demonstrated results show that our method detects vehicles under various adverse weather conditions without retraining a new model.
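
The dark channel prior mentioned here is cheap to compute and indicates where visibility is poor: take the per-pixel minimum over the color channels, then a local minimum filter. A minimal sketch:

import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Dark channel prior for an HxWx3 image: min over RGB, then a
    patch-wise minimum filter. Haze-free regions are near zero, so large
    values flag the low-visibility areas a complementation module targets."""
    per_pixel_min = img.min(axis=2)
    return minimum_filter(per_pixel_min, size=patch)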

@article{chen2020real,
  title={A real-time vehicle detection system under various bad weather conditions based on a deep learning model without retraining},
  author={Chen, Xiu-Zhi and Chang, Chieh-Min and Yu, Chao-Wei and Chen, Yen-Lin},
  journal={Sensors},
  volume={20},
  number={20},
  pages={5731},
  year={2020},
  publisher={Multidisciplinary Digital Publishing Institute}
}

4. Datasets

(1) RD: A Novel Dataset for Object Detection in Rainy Weather Conditions

In recent years, object detection models based on convolutional neural networks have attracted extensive attention in autonomous driving systems and achieved remarkable results. In bad weather conditions, however, the performance of these models degrades considerably because relevant datasets for training are lacking. In this work, we address object detection under rain interference by introducing a new rainy driving dataset named RD. Our dataset emphasizes diverse data sources, containing 1,100 real rainy-day images depicting various driving scenes, with ground-truth bounding box annotations for five common traffic object categories. RD is used to train three state-of-the-art object detection models: SSD512, RetinaNet, and YOLOv3. Experimental results show that the performance of the SSD512, RetinaNet, and YOLOv3 models improves by 5.64%, 8.97%, and 5.70%, respectively.

@article{hoang2020rd,
  title={RD: A NOVEL DATASET FOR OBJECT DETECTION IN RAINY WEATHER CONDITIONS},
  author={Hoang, Quoc-Viet and Le, Trung-Hieu and Nguyen, Minh-Quy},
  journal={UTEHY Journal of Science and Technology},
  volume={27},
  pages={86--90},
  year={2020}
}

Topics: Machine Learning Computer Vision Deep Learning