Datasets
CL-Drive | paper | link
-
Description: Dataset for studying driver cognitive load recorded in simulation
-
Data: scene video, EEG, ECG, EDA, eye-tracking
-
Annotations: cognitive load labels
@article{2024_T-ITS_Angkan, author = "Angkan, Prithila and Behinaein, Behnam and Mahmud, Zunayed and Bhatti, Anubhav and Rodenburg, Dirk and Hungler, Paul and Etemad, Ali", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Multimodal Brain--Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines", year = "2024" }
Used in papers:
Angkan et al., Multimodal Brain–Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines, Trans. ITS, 2024 | paper | code
-
Dataset(s): CL-Drive
@article{2024_T-ITS_Angkan, author = "Angkan, Prithila and Behinaein, Behnam and Mahmud, Zunayed and Bhatti, Anubhav and Rodenburg, Dirk and Hungler, Paul and Etemad, Ali", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Multimodal Brain--Computer Interface for In-Vehicle Driver Cognitive Load Measurement: Dataset and Baselines", year = "2024" }
HOIST | paper | link
-
Full name: Object Importance Estimation Using Counterfactual Reasoning
-
Description: Simulated driving scenarios with object importance annotations
-
Data: scene video (BEV)
-
Annotations: bounding boxes, object importance labels
@article{2024_R-AL_Gupta, author = "Gupta, Pranay and Biswas, Abhijat and Admoni, Henny and Held, David", journal = "IEEE Robotics and Automation Letters", publisher = "IEEE", title = "Object Importance Estimation using Counterfactual Reasoning for Intelligent Driving", year = "2024" }
Used in papers:
Gupta et al., Object Importance Estimation Using Counterfactual Reasoning for Intelligent Driving, R-AL, 2024 | paper | code
-
Dataset(s): HOIST
@article{2024_R-AL_Gupta, author = "Gupta, Pranay and Biswas, Abhijat and Admoni, Henny and Held, David", journal = "IEEE Robotics and Automation Letters", publisher = "IEEE", title = "Object Importance Estimation using Counterfactual Reasoning for Intelligent Driving", year = "2024" }
IVGaze | paper | link
-
Full name: In-Vehicle Gaze Dataset
-
Description: 44K images of 25 subjects looking at different areas inside the vehicle
-
Data: driver video, eye-tracking
-
Annotations: gaze area labels
@inproceedings{2024_CVPR_Cheng, author = "Cheng, Yihua and Zhu, Yaning and Wang, Zongji and Hao, Hongquan and Liu, Yongwei and Cheng, Shiqing and Wang, Xi and Chang, Hyung Jin", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "1556--1565", title = "What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation", year = "2024" }
Used in papers:
Cheng et al., What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation, CVPR, 2024 | paper | code
-
Dataset(s): IVGaze
@inproceedings{2024_CVPR_Cheng, author = "Cheng, Yihua and Zhu, Yaning and Wang, Zongji and Hao, Hongquan and Liu, Yongwei and Cheng, Shiqing and Wang, Xi and Chang, Hyung Jin", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "1556--1565", title = "What Do You See in Vehicle? Comprehensive Vision Solution for In-Vehicle Gaze Estimation", year = "2024" }
SCOUT | paper | link
-
Full name: Task and Context-Modulated Attention
-
Description: Extended annotations for four public datasets for studying drivers’ attention: DR(eye)VE, BDD-A, MAAD, LBW
-
Data: eye-tracking
-
Annotations: action labels, context labels, map information
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Used in papers:
Kotseruba et al., SCOUT+: Towards Practical Task-Driven Drivers’ Gaze Prediction, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_2, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "{SCOUT+: Towards Practical Task-Driven Drivers' Gaze Prediction}", year = "2024" }
Kotseruba et al., Data Limitations for Modeling Top-Down Effects on Drivers’ Attention, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Kotseruba et al., Understanding and Modeling the Effects of Task and Context on Drivers’ Gaze Allocation, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1337--1344", title = "Understanding and modeling the effects of task and context on drivers’ gaze allocation", year = "2024" }
100-Driver | paper | link
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video
-
Annotations: action labels
@article{2023_T-ITS_Wang, author = "Wang, Jing and Li, Wenjing and Li, Fang and Zhang, Jun and Wu, Zhongcheng and Zhong, Zhun and Sebe, Nicu", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "100-Driver: A Large-Scale, Diverse Dataset for Distracted Driver Classification", year = "2023" }
Used in papers:
Li et al., A Lightweight and Efficient Distracted Driver Detection Model Fusing Convolutional Neural Network and Vision Transformer, Trans. ITS, 2024 | paper | code
-
Dataset(s): SFDDD, 100-Driver
@article{2024_T-ITS_Li_1, author = "Li, Zhao and Zhao, Xia and Wu, Fuwei and Chen, Dan and Wang, Chang", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "A Lightweight and Efficient Distracted Driver Detection Model Fusing Convolutional Neural Network and Vision Transformer", year = "2024" }
AI CITY NDAR | paper | link
-
Full name: AI CITY Naturalistic Driving Action Recognition
-
Description: 594 video clips (90 hours) of 99 drivers performing 16 secondary tasks during driving
-
Data: driver video
@inproceedings{2023_CVPRW_Naphade, author = "Naphade, Milind and Wang, Shuo and Anastasiu, David C and Tang, Zheng and Chang, Ming-Ching and Yao, Yue and Zheng, Liang and Rahman, Mohammed Shaiqur and Arya, Meenakshi S and Sharma, Anuj and others", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "5538--5548", title = "The 7th AI City Challenge", year = "2023" }
Used in papers:
Chai et al., Rethinking the Evaluation of Driver Behavior Analysis Approaches, Trans. ITS, 2024 | paper
-
Dataset(s): SFDDD, AI CITY NDAR
@article{2024_T-ITS_Chai, author = "Chai, Weiheng and Wang, Jiyang and Chen, Jiajing and Velipasalar, Senem and Sharma, Anuj", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Rethinking the Evaluation of Driver Behavior Analysis Approaches", year = "2024" }
Zhou et al., Multi View Action Recognition for Distracted Driver Behavior Localization, CVPRW, 2023 | paper | code
-
Dataset(s): AI CITY NDAR
@inproceedings{2023_CVPRW_Zhou, author = "Zhou, Wei and Qian, Yinlong and Jie, Zequn and Ma, Lin", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "5375--5380", title = "Multi view action recognition for distracted driver behavior localization", year = "2023" }
AIDE | paper | link
-
Full name: Assistive Driving Perception Dataset
-
Description: Naturalistic dataset with multi-camera views of drivers performing normal driving and secondary tasks
-
Data: driver video, scene video
-
Annotations: distraction state, action labels
@inproceedings{2023_ICCV_Yang, author = "Yang, Dingkang and Huang, Shuai and Xu, Zhi and Li, Zhenpeng and Wang, Shunli and Li, Mingcheng and Wang, Yuzheng and Liu, Yang and Yang, Kun and Chen, Zhaoyu and others", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "20459--20470", title = "AIDE: A Vision-Driven Multi-View, Multi-Modal, Multi-Tasking Dataset for Assistive Driving Perception", year = "2023" }
DRAMA | paper | link
-
Full name: Driving Risk Assessment Mechanism with A captioning module
-
Description: Driving scenarios recorded in Tokyo, Japan with video and object-level importance labels and captions
-
Data: scene video
-
Annotations: bounding boxes, captions
@inproceedings{2023_WACV_Malla, author = "Malla, Srikanth and Choi, Chiho and Dwivedi, Isht and Choi, Joon Hee and Li, Jiachen", booktitle = "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision", pages = "1043--1052", title = "DRAMA: Joint Risk Localization and Captioning in Driving", year = "2023" }
DrFixD-night | paper | link
-
Full name: Driver Fixation Dataset at night
-
Description: 15 videos of night-time driving with eye-tracking data from 30 participants
-
Data: scene video, eye-tracking
@article{2023_T-ITS_Deng, author = "Deng, Tao and Jiang, Lianfang and Shi, Yi and Wu, Jiang and Wu, Zhangbi and Yan, Shun and Zhang, Xianshi and Yan, Hongmei", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driving Visual Saliency Prediction of Dynamic Night Scenes via a Spatio-Temporal Dual-Encoder Network", year = "2023" }
Used in papers:
Deng et al., Driving Visual Saliency Prediction of Dynamic Night Scenes via a Spatio-Temporal Dual-Encoder Network, Trans. ITS, 2023 | paper | code
-
Dataset(s): DrFixD-night, DR(eye)VE
@article{2023_T-ITS_Deng, author = "Deng, Tao and Jiang, Lianfang and Shi, Yi and Wu, Jiang and Wu, Zhangbi and Yan, Shun and Zhang, Xianshi and Yan, Hongmei", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driving Visual Saliency Prediction of Dynamic Night Scenes via a Spatio-Temporal Dual-Encoder Network", year = "2023" }
SAM-DD | paper | link
-
Full name: Singapore AutoMan@NTU Distracted Driving Dataset
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video, depth
-
Annotations: distraction state
@article{2023_T-ITS_Yang, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Quantitative Identification of Driver Distraction: A Weakly Supervised Contrastive Learning Approach", year = "2023" }
Used in papers:
Yang et al., Quantitative Identification of Driver Distraction: A Weakly Supervised Contrastive Learning Approach, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Yang, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Quantitative identification of driver distraction: A weakly supervised contrastive learning approach", year = "2024" }
Hasan et al., Vision-Language Models Can Identify Distracted Driver Behavior From Naturalistic Videos, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Hasan, author = "Hasan, Md Zahid and Chen, Jiajing and Wang, Jiyang and Rahman, Mohammed Shaiqur and Joshi, Ameya and Velipasalar, Senem and Hegde, Chinmay and Sharma, Anuj and Sarkar, Soumik", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Vision-language models can identify distracted driver behavior from naturalistic videos", year = "2024" }
SynDD1 | paper | link
-
Full name: Synthetic Distracted Driving Dataset
-
Description: Synthetic dataset for machine learning models to detect and analyze various distracted driving behaviors and different gaze zones.
-
Data: driver video
-
Annotations: gaze area labels, action labels, appearance labels
@article{2023_DiB_Rahman, author = "Rahman, Mohammed Shaiqur and Venkatachalapathy, Archana and Sharma, Anuj and Wang, Jiyang and Gursoy, Senem Velipasalar and Anastasiu, David and Wang, Shuo", journal = "Data in brief", pages = "108793", publisher = "Elsevier", title = "Synthetic distracted driving (syndd1) dataset for analyzing distracted behaviors and various gaze zones of a driver", volume = "46", year = "2023" }
Used in papers:
Hasan et al., Vision-Language Models Can Identify Distracted Driver Behavior From Naturalistic Videos, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Hasan, author = "Hasan, Md Zahid and Chen, Jiajing and Wang, Jiyang and Rahman, Mohammed Shaiqur and Joshi, Ameya and Velipasalar, Senem and Hegde, Chinmay and Sharma, Anuj and Sarkar, Soumik", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Vision-language models can identify distracted driver behavior from naturalistic videos", year = "2024" }
Doshi et al., Federated Learning-based Driver Activity Recognition for Edge Devices, CVPRW, 2022 | paper
-
Dataset(s): SynDD1
@inproceedings{2022_CVPRW_Doshi, author = "Doshi, Keval and Yilmaz, Yasin", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "3338--3346", title = "Federated learning-based driver activity recognition for edge devices", year = "2022" }
Ding et al., A Coarse-to-Fine Boundary Localization method for Naturalistic Driving Action Recognition, CVPRW, 2022 | paper
-
Dataset(s): SynDD1
@inproceedings{2022_CVPRW_Ding, author = "Ding et al.", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", title = "A Coarse-to-Fine Boundary Localization Method for Naturalistic Driving Action Recognition", year = "2022" }
Alyahya et al., Temporal Driver Action Localization using Action Classification Methods, CVPRW, 2022 | paper
-
Dataset(s): SynDD1
@inproceedings{2022_CVPRW_Alyahya, author = "Alyahya, Munirah and Alghannam, Shahad and Alhussan, Taghreed", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "3319--3326", title = "Temporal Driver Action Localization using Action Classification Methods", year = "2022" }
CoCAtt | paper | link
-
Full name: Cognitive-Conditioned Driver Attention Dataset
-
Description: Videos of drivers and driver scenes in automated and manual driving conditions with per-frame gaze and distraction annotations
-
Data: driver video, scene video, eye-tracking
-
Annotations: distraction state, car telemetry, intention labels
@inproceedings{2022_ITSC_Shen, author = "Shen, Yuan and Wijayaratne, Niviru and Sriram, Pranav and Hasan, Aamir and Du, Peter and Driggs-Campbell, Katherine", booktitle = "2022 IEEE 25th International Conference on Intelligent Transportation Systems (ITSC)", organization = "IEEE", pages = "32--39", title = "CoCAtt: A Cognitive-Conditioned Driver Attention Dataset", year = "2022" }
FatigueView | paper | link
-
Description: Multi-camera video dataset for vision-based drowsiness detection.
-
Data: driver video
-
Annotations: facial landmarks, face/hand bounding boxes, head pose, eye status, pose, drowsiness labels
@article{2022_T-ITS_Yang, author = "Yang, Cong and Yang, Zhenyu and Li, Weiyu and See, John", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "FatigueView: A Multi-Camera Video Dataset for Vision-Based Drowsiness Detection", year = "2022" }
LBW | paper | link
-
Full name: Look Both Ways
-
Description: Synchronized videos from scene and driver-facing cameras of drivers performing various maneuvers in traffic
-
Data: driver video, scene video, eye-tracking
@inproceedings{2022_ECCV_Kasahara, author = "Kasahara, Isaac and Stent, Simon and Park, Hyun Soo", booktitle = "Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XIII", organization = "Springer", pages = "126--142", title = "Look Both Ways: Self-supervising Driver Gaze Estimation and Road Scene Saliency", year = "2022" }
Used in papers:
Kotseruba et al., Data Limitations for Modeling Top-Down Effects on Drivers’ Attention, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Kasahara et al., Look Both Ways: Self-Supervising Driver Gaze Estimation and Road Scene Saliency, ECCV, 2022 | paper | code
-
Dataset(s): LBW
@inproceedings{2022_ECCV_Kasahara, author = "Kasahara, Isaac and Stent, Simon and Park, Hyun Soo", booktitle = "Computer Vision--ECCV 2022: 17th European Conference, Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part XIII", organization = "Springer", pages = "126--142", title = "Look Both Ways: Self-supervising Driver Gaze Estimation and Road Scene Saliency", year = "2022" }
55 Rides | paper | link
-
Description: Naturalistic dataset recorded by four drivers and annotated by three raters to determine distraction states
-
Data: driver video, eye-tracking
-
Annotations: distraction state, head pose
@inproceedings{2021_ETRA_Kubler, author = {K{\"u}bler, Thomas C and Fuhl, Wolfgang and Wagner, Elena and Kasneci, Enkelejda}, booktitle = "ACM Symposium on Eye Tracking Research and Applications", pages = "1--8", title = "55 Rides: attention annotated head and gaze data during naturalistic driving", year = "2021" }
DAD | paper | link
-
Full name: Driver Anomaly Detection
-
Description: Videos of normal and anomalous behaviors (manual/visual distractions) of drivers.
-
Data: driver video
-
Annotations: action labels
@inproceedings{2021_WACV_Kopuklu, author = "Kopuklu, Okan and Zheng, Jiapeng and Xu, Hang and Rigoll, Gerhard", booktitle = "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision", pages = "91--100", title = "Driver anomaly detection: A dataset and contrastive learning approach", year = "2021" }
Used in papers:
Chen et al., A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Chen, author = "Chen, Jiansong and Zhang, Qixiang and Chen, Jinxin and Wang, Jinxiang and Fang, Zhenwu and Liu, Yahui and Yin, Guodong", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior", year = "2024" }
Kopuklu et al., Driver Anomaly Detection: A Dataset and Contrastive Learning Approach, WACV, 2021 | paper
-
Dataset(s): DAD
@inproceedings{2021_WACV_Kopuklu, author = "Kopuklu, Okan and Zheng, Jiapeng and Xu, Hang and Rigoll, Gerhard", booktitle = "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision", pages = "91--100", title = "Driver anomaly detection: A dataset and contrastive learning approach", year = "2021" }
MAAD | paper | link
-
Full name: Attended Awareness in Driving
-
Description: A subset of videos from DR(eye)VE annotated with gaze collected in lab conditions.
-
Data: eye-tracking, scene video
-
Annotations: task labels
@inproceedings{2021_ICCVW_Gopinath, author = "Gopinath, Deepak and Rosman, Guy and Stent, Simon and Terahata, Katsuya and Fletcher, Luke and Argall, Brenna and Leonard, John", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "3426--3436", title = {MAAD: A Model and Dataset for ``Attended Awareness'' in Driving}, year = "2021" }
Used in papers:
Kotseruba et al., Data Limitations for Modeling Top-Down Effects on Drivers’ Attention, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Gopinath et al., MAAD: A Model and Dataset for “Attended Awareness” in Driving, ICCVW, 2021 | paper | code
-
Dataset(s): MAAD
@inproceedings{2021_ICCVW_Gopinath, author = "Gopinath, Deepak and Rosman, Guy and Stent, Simon and Terahata, Katsuya and Fletcher, Luke and Argall, Brenna and Leonard, John", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "3426--3436", title = {MAAD: A Model and Dataset for ``Attended Awareness'' in Driving}, year = "2021" }
BDD-OIA | paper | link
-
Full name: BDD Object Induced Action
-
Description: Extension of the BDD100K dataset with labels for driver actions and explanations for why the action was taken
-
Data: scene video
-
Annotations: bounding boxes, action labels, explanations
@inproceedings{2020_CVPR_Xu, author = "Xu, Yiran and Yang, Xiaoyin and Gong, Lihang and Lin, Hsuan-Chu and Wu, Tz-Ying and Li, Yunsheng and Vasconcelos, Nuno", booktitle = "Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition", pages = "9523--9532", title = "Explainable object-induced action decision for autonomous vehicles", year = "2020" }
Used in papers:
Araluce et al., Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Araluce, author = "Araluce, Javier and Bergasa, Luis M and Oca{\~n}a, Manuel and Llamazares, {\'A}ngel and L{\'o}pez-Guill{\'e}n, Elena", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images", year = "2024" }
CCD | paper | link
-
Full name: Car Crash Dataset
-
Description: A dataset of YouTube videos with traffic accidents
-
Data: scene video
-
Annotations: bounding boxes, accident labels
@inproceedings{2020_MM_Bao, author = "Bao, Wentao and Yu, Qi and Kong, Yu", booktitle = "Proceedings of the 28th ACM International Conference on Multimedia", pages = "2682--2690", title = "Uncertainty-based traffic accident anticipation with spatio-temporal relational learning", year = "2020" }
Used in papers:
Li et al., Cognitive Traffic Accident Anticipation, ITS Mag., 2024 | paper
@article{2024_ITSM_Li, author = "Li, Lei-Lei and Fang, Jianwu and Xue, Jianru", journal = "IEEE Intelligent Transportation Systems Magazine", publisher = "IEEE", title = "Cognitive Traffic Accident Anticipation", year = "2024" }
DGAZE | paper | link
-
Description: A dataset mapping drivers’ gaze to different areas in a static traffic scene in lab conditions
-
Data: driver video, scene video
-
Annotations: bounding boxes
@inproceedings{2020_IROS_Dua, author = "Dua, Isha and John, Thrupthi Ann and Gupta, Riya and Jawahar, CV", booktitle = "IROS", title = "DGAZE: Driver Gaze Mapping on Road", year = "2020" }
DGW | paper | link
-
Full name: Driver Gaze in the Wild
-
Description: Videos of drivers fixating on different areas in the vehicle without constraining their head and eye movements
-
Data: driver video
-
Annotations: gaze area labels
@inproceedings{2021_ICCVW_Ghosh, author = "Ghosh, Shreya and Dhall, Abhinav and Sharma, Garima and Gupta, Sarthak and Sebe, Nicu", booktitle = "ICCVW", title = "Speak2label: Using domain knowledge for creating a large scale driver gaze zone estimation dataset", year = "2021" }
Used in papers:
Stappen et al., X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild, ICMI, 2020 | paper | code
-
Dataset(s): DGW
@inproceedings{2020_ICMI_Stappen, author = {Stappen, Lukas and Rizos, Georgios and Schuller, Bj{\"o}rn}, booktitle = "ICMI", title = "X-AWARE: ConteXt-AWARE Human-Environment Attention Fusion for Driver Gaze Prediction in the Wild", year = "2020" }
DMD | paper | link
-
Full name: Driving Monitoring Dataset
-
Description: A diverse multi-modal dataset of drivers performing various secondary tasks, observing different regions inside the car, and showing signs of drowsiness, recorded on-road and in a simulation environment
-
Data: driver video, scene video, vehicle data
-
Annotations: bounding boxes, action labels
@inproceedings{2020_ECCVW_Ortega, author = "Ortega, Juan Diego and Kose, Neslihan and Ca{\~n}as, Paola and Chao, Min-An and Unnervik, Alexander and Nieto, Marcos and Otaegui, Oihana and Salgado, Luis", booktitle = "ECCVW", title = "DMD: A large-scale multi-modal driver monitoring dataset for attention and alertness analysis", year = "2020" }
Used in papers:
Hasan et al., Vision-Language Models Can Identify Distracted Driver Behavior From Naturalistic Videos, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Hasan, author = "Hasan, Md Zahid and Chen, Jiajing and Wang, Jiyang and Rahman, Mohammed Shaiqur and Joshi, Ameya and Velipasalar, Senem and Hegde, Chinmay and Sharma, Anuj and Sarkar, Soumik", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Vision-language models can identify distracted driver behavior from naturalistic videos", year = "2024" }
Palo et al., Holistic Driver Monitoring: A Multi-Task Approach for In-Cabin Driver Attention Evaluation through Multi-Camera Data, IV, 2024 | paper
-
Dataset(s): DMD
@inproceedings{2024_IV_Palo, author = "Palo, Patitapaban and Nayak, Satyajit and Modhugu, Durga Nagendra Raghava Kumar and Gupta, Kwanit and Uttarkabat, Satarupa", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1361--1366", title = "Holistic Driver Monitoring: A Multi-Task Approach for In-Cabin Driver Attention Evaluation through Multi-Camera Data", year = "2024" }
LISA v2 | paper | link
-
Full name: Laboratory for Intelligent and Safe Automobiles
-
Description: Videos of drivers with and without eyeglasses recorded under different lighting conditions
-
Data: driver video
@inproceedings{2020_IV_Rangesh, author = "Rangesh, Akshay and Zhang, Bowen and Trivedi, Mohan M", booktitle = "IV", title = "Driver gaze estimation in the real world: Overcoming the eyeglass challenge", year = "2020" }
NeuroIV | paper | link
-
Full name: Neuromorphic Vision Meets Intelligent Vehicle
-
Description: Videos of drivers performing secondary tasks, making hand gestures, and observing different regions inside the vehicle, recorded with a DAVIS event camera and a depth sensor
-
Data: driver video
@article{2020_T-ITS_Chen, author = {Chen, Guang and Wang, Fa and Li, Weijun and Hong, Lin and Conradt, J{\"o}rg and Chen, Jieneng and Zhang, Zhenyan and Lu, Yiwen and Knoll, Alois}, journal = "IEEE Transactions on Intelligent Transportation Systems", number = "2", pages = "1171--1183", publisher = "IEEE", title = "NeuroIV: Neuromorphic vision meets intelligent vehicle towards safe driving with a new database and baseline evaluations", volume = "23", year = "2020" }
TrafficSaliency | paper | link
-
Description: 16 videos of driving scenes with gaze data from 28 subjects recorded in the lab with an eye-tracker
-
Data: eye-tracking, scene video
@article{2020_T-ITS_Deng, author = "Deng, Tao and Yan, Hongmei and Qin, Long and Ngo, Thuyen and Manjunath, BS", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "5", pages = "2146--2154", publisher = "IEEE", title = "{How do drivers allocate their potential attention? Driving fixation prediction via convolutional neural networks}", volume = "21", year = "2020" }
Used in papers:
Li et al., Adaptive Short-Temporal Induced Aware Fusion Network for Predicting Attention Regions Like a Driver, Trans. ITS, 2022 | paper | code
-
Dataset(s): BDD-A, DADA-2000, TrafficSaliency
@article{2022_T-ITS_Li, author = "Li, Qiang and Liu, Chunsheng and Chang, Faliang and Li, Shuang and Liu, Hui and Liu, Zehao", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "10", pages = "18695--18706", publisher = "IEEE", title = "Adaptive short-temporal induced aware fusion network for predicting attention regions like a driver", volume = "23", year = "2022" }
Gan et al., Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes, Trans. ITS, 2022 | paper
-
Dataset(s): BDD-A, DADA-2000, DR(eye)VE, TrafficSaliency
@article{2022_T-ITS_Gan, author = "Gan, Shun and Pei, Xizhe and Ge, Yulong and Wang, Qingfan and Shang, Shi and Li, Shengbo Eben and Nie, Bingbing", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "20912--20925", publisher = "IEEE", title = "Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes", volume = "23", year = "2022" }
Fang et al., DADA: Driver Attention Prediction in Driving Accident Scenarios, Trans. ITS, 2022 | paper | code
-
Dataset(s): TrafficSaliency, DR(eye)VE, DADA-2000
@article{2022_T-ITS_Fang, author = "Fang, Jianwu and Yan, Dingxin and Qiao, Jiahuan and Xue, Jianru and Yu, Hongkai", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "6", pages = "4959--4971", publisher = "IEEE", title = "DADA: Driver attention prediction in driving accident scenarios", volume = "23", year = "2022" }
Deng et al., How Do Drivers Allocate Their Potential Attention? Driving Fixation Prediction via Convolutional Neural Networks, Trans. ITS, 2020 | paper | code
-
Dataset(s): TrafficSaliency
@article{2020_T-ITS_Deng, author = "Deng, Tao and Yan, Hongmei and Qin, Long and Ngo, Thuyen and Manjunath, BS", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "5", pages = "2146--2154", publisher = "IEEE", title = "{How do drivers allocate their potential attention? Driving fixation prediction via convolutional neural networks}", volume = "21", year = "2020" }
Huang et al., Driver Distraction Detection Based on the True Driver’s Focus of Attention, Trans. ITS, 2021 | paper
-
Dataset(s): DADA-2000, TrafficSaliency, BDD-A, DR(eye)VE, private
@article{2021_T-ITS_Huang, author = "Huang, Jianling and Long, Yan and Zhao, Xiaohua", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driver Distraction Detection Based on the True Driver’s Focus of Attention", year = "2021" }
3MDAD | paper | link
-
Full name: Multimodal Multiview and Multispectral Driver Action Dataset
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video
-
Annotations: action labels, bounding boxes
@inproceedings{2019_CAIP_Jegham, author = "Jegham, Imen and Ben Khalifa, Anouar and Alouani, Ihsen and Mahjoub, Mohamed Ali", booktitle = "Computer Analysis of Images and Patterns: 18th International Conference, CAIP 2019, Salerno, Italy, September 3--5, 2019, Proceedings, Part I", organization = "Springer", pages = "518--529", title = "MDAD: A multimodal and multiview in-vehicle driver action dataset", year = "2019" }
Used in papers:
Yang et al., Quantitative Identification of Driver Distraction: A Weakly Supervised Contrastive Learning Approach, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Yang, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Quantitative identification of driver distraction: A weakly supervised contrastive learning approach", year = "2024" }
Kuang et al., MIFI: MultI-Camera Feature Integration for Robust 3D Distracted Driver Activity Recognition, Trans. ITS, 2023 | paper
-
Dataset(s): 3MDAD
@article{2023_T-ITS_Kuang, author = "Kuang, Jian and Li, Wenjing and Li, Fang and Zhang, Jun and Wu, Zhongcheng", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "MIFI: MultI-Camera Feature Integration for Robust 3D Distracted Driver Activity Recognition", year = "2023" }
DADA-2000 | paper | link
-
Full name: Driver Attention in Driving Accident Scenarios
-
Description: 2000 traffic accident videos collected from video hosting websites, with eye-tracking data from 20 subjects recorded in the lab.
-
Data: eye-tracking, scene video
-
Annotations: bounding boxes, accident category labels
@inproceedings{2019_ITSC_Fang, author = "Fang, Jianwu and Yan, Dingxin and Qiao, Jiahuan and Xue, Jianru and Wang, He and Li, Sen", booktitle = "ITSC", title = "{DADA-2000: Can Driving Accident be Predicted by Driver Attention? Analyzed by A Benchmark}", year = "2019" }
Used in papers:
Araluce et al., Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Araluce, author = "Araluce, Javier and Bergasa, Luis M and Oca{\~n}a, Manuel and Llamazares, {\'A}ngel and L{\'o}pez-Guill{\'e}n, Elena", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images", year = "2024" }
Li et al., Cognitive Traffic Accident Anticipation, ITS Mag., 2024 | paper
@article{2024_ITSM_Li, author = "Li, Lei-Lei and Fang, Jianwu and Xue, Jianru", journal = "IEEE Intelligent Transportation Systems Magazine", publisher = "IEEE", title = "Cognitive Traffic Accident Anticipation", year = "2024" }
Zhao et al., Gated Driver Attention Predictor, ITSC, 2023 | paper | code
@inproceedings{2023_ITSC_Zhao, author = "Zhao, Tianci and Bai, Xue and Fang, Jianwu and Xue, Jianru", booktitle = "2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)", organization = "IEEE", pages = "270--276", title = "Gated Driver Attention Predictor", year = "2023" }
Zhu et al., Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding, ICCV, 2023 | paper | code
@inproceedings{2023_ICCV_Zhu, author = "Zhu, Pengfei and Qi, Mengshi and Li, Xia and Li, Weijian and Ma, Huadong", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "8558--8568", title = "Unsupervised self-driving attention prediction via uncertainty mining and knowledge embedding", year = "2023" }
Chen et al., FBLNet: FeedBack Loop Network for Driver Attention Prediction, ICCV, 2023 | paper
@inproceedings{2023_ICCV_Chen, author = "Chen, Yilong and Nan, Zhixiong and Xiang, Tao", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "13371--13380", title = "FBLNet: FeedBack Loop Network for Driver Attention Prediction", year = "2023" }
Li et al., Adaptive Short-Temporal Induced Aware Fusion Network for Predicting Attention Regions Like a Driver, Trans. ITS, 2022 | paper | code
-
Dataset(s): BDD-A, DADA-2000, TrafficSaliency
@article{2022_T-ITS_Li, author = "Li, Qiang and Liu, Chunsheng and Chang, Faliang and Li, Shuang and Liu, Hui and Liu, Zehao", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "10", pages = "18695--18706", publisher = "IEEE", title = "Adaptive short-temporal induced aware fusion network for predicting attention regions like a driver", volume = "23", year = "2022" }
Gan et al., Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes, Trans. ITS, 2022 | paper
-
Dataset(s): BDD-A, DADA-2000, DR(eye)VE, TrafficSaliency
@article{2022_T-ITS_Gan, author = "Gan, Shun and Pei, Xizhe and Ge, Yulong and Wang, Qingfan and Shang, Shi and Li, Shengbo Eben and Nie, Bingbing", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "20912--20925", publisher = "IEEE", title = "Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes", volume = "23", year = "2022" }
Fang et al., DADA: Driver Attention Prediction in Driving Accident Scenarios, Trans. ITS, 2022 | paper | code
-
Dataset(s): TrafficSaliency, DR(eye)VE, DADA-2000
@article{2022_T-ITS_Fang, author = "Fang, Jianwu and Yan, Dingxin and Qiao, Jiahuan and Xue, Jianru and Yu, Hongkai", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "6", pages = "4959--4971", publisher = "IEEE", title = "DADA: Driver attention prediction in driving accident scenarios", volume = "23", year = "2022" }
Araluce et al., ARAGAN: A dRiver Attention estimation model based on conditional Generative Adversarial Network, IV, 2022 | paper | code
@inproceedings{2022_IV_Araluce, author = "Araluce, Javier and Bergasa, Luis M and Oca{\~n}a, Manuel and Barea, Rafael and L{\'o}pez-Guill{\'e}n, Elena and Revenga, Pedro", booktitle = "2022 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1066--1072", title = "ARAGAN: A dRiver Attention estimation model based on conditional Generative Adversarial Network", year = "2022" }
Huang et al., Driver Distraction Detection Based on the True Driver’s Focus of Attention, Trans. ITS, 2021 | paper
-
Dataset(s): DADA-2000, TrafficSaliency, BDD-A, DR(eye)VE, private
@article{2021_T-ITS_Huang, author = "Huang, Jianling and Long, Yan and Zhao, Xiaohua", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driver Distraction Detection Based on the True Driver’s Focus of Attention", year = "2021" }
Drive&Act | paper | link
-
Description: Videos of drivers performing various driving- and non-driving-related tasks
-
Data: driver video
-
Annotations: semantic maps, action labels
@inproceedings{2019_ICCV_Martin, author = "Martin, Manuel and Roitberg, Alina and Haurilet, Monica and Horne, Matthias and Rei{\ss}, Simon and Voit, Michael and Stiefelhagen, Rainer", booktitle = "ICCV", title = "Drive\&Act: A multi-modal dataset for fine-grained driver behavior recognition in autonomous vehicles", year = "2019" }
Used in papers:
Pardo-Decimavilla et al., Do You Act Like You Talk? Exploring Pose-based Driver Action Classification with Speech Recognition Networks, IV, 2024 | paper | code
-
Dataset(s): Drive&Act
@inproceedings{2024_IV_Pardo-Decimavilla, author = "Pardo-Decimavilla, Pablo and Bergasa, Luis M and Montiel-Mar{\'\i}n, Santiago and Antunes, Miguel and Llamazares, {\'A}ngel", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1395--1400", title = "Do You Act Like You Talk? Exploring Pose-based Driver Action Classification with Speech Recognition Networks", year = "2024" }
Tanama et al., Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments, IROS, 2023 | paper | code
-
Dataset(s): Drive&Act
@inproceedings{2023_IROS_Tanama, author = "Tanama, Calvin and Peng, Kunyu and Marinov, Zdravko and Stiefelhagen, Rainer and Roitberg, Alina", booktitle = "2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)", organization = "IEEE", pages = "5479--5486", title = "Quantized Distillation: Optimizing Driver Activity Recognition Models for Resource-Constrained Environments", year = "2023" }
Peng et al., TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration, IROS, 2022 | paper | code
-
Dataset(s): Drive&Act
@inproceedings{2022_IROS_Peng, author = "Peng, Kunyu and Roitberg, Alina and Yang, Kailun and Zhang, Jiaming and Stiefelhagen, Rainer", booktitle = "2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)", organization = "IEEE", pages = "278--285", title = "TransDARC: Transformer-based Driver Activity Recognition with Latent Space Feature Calibration", year = "2022" }
Pizarro et al., DRVMon-VM: Distracted driver recognition using large pre-trained video transformers, IV, 2024 | paper
-
Dataset(s): Drive&Act
@inproceedings{2024_IV_Pizarro, author = "Pizarro, Ricardo and Bergasa, Luis M and Baumela, Luis and Buenaposada, Jos{\'e} M and Barea, Rafael", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1901--1906", title = "DRVMon-VM: Distracted driver recognition using large pre-trained video transformers", year = "2024" }
Morales-Alvarez et al., On Transferability of Driver Observation Models from Simulated to Real Environments in Autonomous Cars, ITSC, 2023 | paper
-
Dataset(s): Drive&Act, private
@inproceedings{2023_ITSC_Morales-Alvarez, author = "Morales-Alvarez, Walter and Certad, Novel and Roitberg, Alina and Stiefelhagen, Rainer and Olaverri-Monreal, Cristina", booktitle = "2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)", organization = "IEEE", pages = "3129--3134", title = "On Transferability of Driver Observation Models from Simulated to Real Environments in Autonomous Cars", year = "2023" }
EBDD | paper | link
-
Full name: EEE BUET Distracted Driving Dataset
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video
-
Annotations: action labels, bounding boxes
@article{2019_TCSVT_Billah, author = "Billah, Tashrif and Rahman, SM Mahbubur and Ahmad, M Omair and Swamy, MNS", journal = "IEEE Transactions on Circuits and Systems for Video Technology", number = "4", pages = "1048--1062", publisher = "IEEE", title = "Recognizing distractions for assistive driving by tracking body parts", volume = "29", year = "2019" }
H3D | paper | link
-
Full name: Honda 3D Dataset
-
Description: A subset of videos from the HDD dataset with 3D bounding boxes and object IDs for tracking
-
Data: driver video, vehicle data
-
Annotations: bounding boxes
@inproceedings{2019_ICRA_Patil, author = "Patil, Abhishek and Malla, Srikanth and Gang, Haiming and Chen, Yi-Ting", booktitle = "2019 International Conference on Robotics and Automation (ICRA)", organization = "IEEE", pages = "9552--9557", title = "The h3d dataset for full-surround 3d multi-object detection and tracking in crowded urban scenes", year = "2019" }
Used in papers:
Li et al., Important Object Identification with Semi-Supervised Learning for Autonomous Driving, ICRA, 2022 | paper
-
Dataset(s): H3D
@inproceedings{2022_ICRA_Li, author = "Li, Jiachen and Gang, Haiming and Ma, Hengbo and Tomizuka, Masayoshi and Choi, Chiho", booktitle = "2022 International Conference on Robotics and Automation (ICRA)", organization = "IEEE", pages = "2913--2919", title = "Important object identification with semi-supervised learning for autonomous driving", year = "2022" }
HAD | paper | link
-
Full name: HRI Advice Dataset
-
Description: A subset of videos from the HDD naturalistic dataset annotated with textual advice containing 1) goals (where the vehicle should move) and 2) attention (where the vehicle should look)
-
Data: scene video, vehicle data
-
Annotations: goal and attention labels
@inproceedings{2019_CVPR_Kim, author = "Kim, Jinkyu and Misu, Teruhisa and Chen, Yi-Ting and Tawari, Ashish and Canny, John", booktitle = "CVPR", title = "Grounding human-to-vehicle advice for self-driving vehicles", year = "2019" }
Used in papers:
Kim et al., Grounding Human-to-Vehicle Advice for Self-driving Vehicles, CVPR, 2019 | paper
-
Dataset(s): HAD
@inproceedings{2019_CVPR_Kim, author = "Kim, Jinkyu and Misu, Teruhisa and Chen, Yi-Ting and Tawari, Ashish and Canny, John", booktitle = "CVPR", title = "Grounding human-to-vehicle advice for self-driving vehicles", year = "2019" }
PRORETA 4 | paper | link
-
Description: Videos of traffic scenes recorded in an instrumented vehicle, together with the driver’s gaze data, for evaluating the accuracy of detecting the driver’s current object of fixation
-
Data: eye-tracking, driver video, scene video
@inproceedings{2019_IV_Schwehr, author = "Schwehr, Julian and Knaust, Moritz and Willert, Volker", booktitle = "IV", title = "How to evaluate object-of-fixation detection", year = "2019" }
RLDD | paper | link
-
Full name: Real-Life Drowsiness Dataset
-
Description: Crowdsourced videos of people in various states of drowsiness recorded in indoor environments
-
Data: driver video
-
Annotations: drowsiness labels
@inproceedings{2019_CVPRW_Ghoddoosian, author = "Ghoddoosian, Reza and Galib, Marnim and Athitsos, Vassilis", booktitle = "CVPRW", title = "A realistic dataset and baseline temporal model for early drowsiness detection", year = "2019" }
Used in papers:
Du et al., A Multimodal Fusion Fatigue Driving Detection Method Based on Heart Rate and PERCLOS, Trans. ITS, 2022 | paper
-
Dataset(s): RLDD
@article{2022_T-ITS_Du, author = "Du, Guanglong and Zhang, Linlin and Su, Kang and Wang, Xueqian and Teng, Shaohua and Liu, Peter X", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "21810--21820", publisher = "IEEE", title = "A multimodal fusion fatigue driving detection method based on heart rate and PERCLOS", volume = "23", year = "2022" }
Ghoddoosian et al., A Realistic Dataset and Baseline Temporal Model for Early Drowsiness Detection, CVPRW, 2019 | paper
-
Dataset(s): RLDD
@inproceedings{2019_CVPRW_Ghoddoosian, author = "Ghoddoosian, Reza and Galib, Marnim and Athitsos, Vassilis", booktitle = "CVPRW", title = "A realistic dataset and baseline temporal model for early drowsiness detection", year = "2019" }
BDD-A | paper | link
-
Full name: Berkeley Deep Drive-A (Attention) Dataset
-
Description: A set of short video clips extracted from the Berkeley Deep Drive (BDD) dataset with additional eye-tracking data collected in the lab from 45 subjects
-
Data: eye-tracking, scene video, vehicle data
@inproceedings{2018_ACCV_Xia, author = "Xia, Ye and Zhang, Danqing and Kim, Jinkyu and Nakayama, Ken and Zipser, Karl and Whitney, David", booktitle = "ACCV", title = "Predicting driver attention in critical situations", year = "2018" }
Used in papers:
Araluce et al., Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Araluce, author = "Araluce, Javier and Bergasa, Luis M and Oca{\~n}a, Manuel and Llamazares, {\'A}ngel and L{\'o}pez-Guill{\'e}n, Elena", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Leveraging Driver Attention for an End-to-End Explainable Decision-Making From Frontal Images", year = "2024" }
Hu et al., Context-Aware Driver Attention Estimation Using Multi-Hierarchy Saliency Fusion With Gaze Tracking, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Hu, author = "Hu, Zhongxu and Cai, Yuxin and Li, Qinghua and Su, Kui and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Context-Aware Driver Attention Estimation Using Multi-Hierarchy Saliency Fusion With Gaze Tracking", year = "2024" }
Kotseruba et al., SCOUT+: Towards Practical Task-Driven Drivers’ Gaze Prediction, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_2, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "{SCOUT+: Towards Practical Task-Driven Drivers' Gaze Prediction}", year = "2024" }
Kotseruba et al., Data Limitations for Modeling Top-Down Effects on Drivers’ Attention, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Adhikari et al., Comparative Study of Attention among Drivers with Varying Driving Experience, IV, 2024 | paper
-
Dataset(s): BDD-A, private
@inproceedings{2024_IV_Adhikari, author = "Adhikari, Bikram and Duri{\'c}, Zoran and Wijesekera, Duminda and Yu, Bo", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1353--1360", title = "Comparative Study of Attention among Drivers with Varying Driving Experience", year = "2024" }
Zhao et al., Gated Driver Attention Predictor, ITSC, 2023 | paper | code
@inproceedings{2023_ITSC_Zhao, author = "Zhao, Tianci and Bai, Xue and Fang, Jianwu and Xue, Jianru", booktitle = "2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC)", organization = "IEEE", pages = "270--276", title = "Gated Driver Attention Predictor", year = "2023" }
Zhu et al., Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding, ICCV, 2023 | paper | code
@inproceedings{2023_ICCV_Zhu, author = "Zhu, Pengfei and Qi, Mengshi and Li, Xia and Li, Weijian and Ma, Huadong", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "8558--8568", title = "Unsupervised self-driving attention prediction via uncertainty mining and knowledge embedding", year = "2023" }
Chen et al., FBLNet: FeedBack Loop Network for Driver Attention Prediction, ICCV, 2023 | paper
@inproceedings{2023_ICCV_Chen, author = "Chen, Yilong and Nan, Zhixiong and Xiang, Tao", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "13371--13380", title = "FBLNet: FeedBack Loop Network for Driver Attention Prediction", year = "2023" }
Li et al., Adaptive Short-Temporal Induced Aware Fusion Network for Predicting Attention Regions Like a Driver, Trans. ITS, 2022 | paper | code
-
Dataset(s): BDD-A, DADA-2000, TrafficSaliency
@article{2022_T-ITS_Li, author = "Li, Qiang and Liu, Chunsheng and Chang, Faliang and Li, Shuang and Liu, Hui and Liu, Zehao", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "10", pages = "18695--18706", publisher = "IEEE", title = "Adaptive short-temporal induced aware fusion network for predicting attention regions like a driver", volume = "23", year = "2022" }
Gan et al., Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes, Trans. ITS, 2022 | paper
-
Dataset(s): BDD-A, DADA-2000, DR(eye)VE, TrafficSaliency
@article{2022_T-ITS_Gan, author = "Gan, Shun and Pei, Xizhe and Ge, Yulong and Wang, Qingfan and Shang, Shi and Li, Shengbo Eben and Nie, Bingbing", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "20912--20925", publisher = "IEEE", title = "Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes", volume = "23", year = "2022" }
Araluce et al., ARAGAN: A dRiver Attention estimation model based on conditional Generative Adversarial Network, IV, 2022 | paper | code
@inproceedings{2022_IV_Araluce, author = "Araluce, Javier and Bergasa, Luis M and Oca{\~n}a, Manuel and Barea, Rafael and L{\'o}pez-Guill{\'e}n, Elena and Revenga, Pedro", booktitle = "2022 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1066--1072", title = "ARAGAN: A dRiver Attention estimation model based on conditional Generative Adversarial Network", year = "2022" }
Pal et al., “Looking at the right stuff” - Guided semantic-gaze for autonomous driving, CVPR, 2020 | paper | code
@inproceedings{2020_CVPR_Pal, author = "Pal, Anwesan and Mondal, Sayan and Christensen, Henrik I", booktitle = "CVPR", title = {{"Looking at the Right Stuff" -- Guided Semantic-Gaze for Autonomous Driving}}, year = "2020" }
Xia et al., Predicting Driver Attention in Critical Situations, ACCV, 2018 | paper | code
-
Dataset(s): BDD-A
@inproceedings{2018_ACCV_Xia, author = "Xia, Ye and Zhang, Danqing and Kim, Jinkyu and Nakayama, Ken and Zipser, Karl and Whitney, David", booktitle = "ACCV", title = "Predicting driver attention in critical situations", year = "2018" }
Huang et al., Driver Distraction Detection Based on the True Driver’s Focus of Attention, Trans. ITS, 2021 | paper
-
Dataset(s): DADA-2000, TrafficSaliency, BDD-A, DR(eye)VE, private
@article{2021_T-ITS_Huang, author = "Huang, Jianling and Long, Yan and Zhao, Xiaohua", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driver Distraction Detection Based on the True Driver's Focus of Attention", year = "2021" }
Xia et al., Periphery-Fovea Multi-Resolution Driving Model Guided by Human Attention, WACV, 2020 | paper | code
@inproceedings{2020_WACV_Xia, author = "Xia, Ye and Kim, Jinkyu and Canny, John and Zipser, Karl and Canas-Bajo, Teresa and Whitney, David", booktitle = "WACV", title = "Periphery-fovea multi-resolution driving model guided by human attention", year = "2020" }
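The attention-prediction papers above compare predicted gaze maps against BDD-A’s ground truth with standard saliency metrics, most commonly KL divergence and Pearson’s correlation coefficient. A minimal sketch of the two metrics under the usual saliency-benchmark definitions; normalization details differ slightly between papers, so treat this as one reasonable variant rather than the evaluation code of any specific work:

```python
import numpy as np

EPS = 1e-7

def kl_divergence(pred: np.ndarray, gt: np.ndarray) -> float:
    """KL divergence from the predicted to the ground-truth gaze map,
    with both maps first normalized to probability distributions."""
    p = pred / (pred.sum() + EPS)
    g = gt / (gt.sum() + EPS)
    return float((g * np.log(EPS + g / (p + EPS))).sum())

def correlation_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pearson's CC between the two maps after z-scoring each one."""
    p = (pred - pred.mean()) / (pred.std() + EPS)
    g = (gt - gt.mean()) / (gt.std() + EPS)
    return float((p * g).mean())

# Example with random same-sized maps:
pred, gt = np.random.rand(36, 64), np.random.rand(36, 64)
print(kl_divergence(pred, gt), correlation_coefficient(pred, gt))
```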
BDD-X | paper | link
-
Full name: Berkeley Deep Drive-X (eXplanation) Dataset
-
Description: A subset of videos from the BDD dataset annotated with textual descriptions of actions performed by the vehicle and explanations justifying those actions
-
Data: scene video, vehicle data
-
Annotations: action explanations
@inproceedings{2018_ECCV_Kim, author = "Kim, Jinkyu and Rohrbach, Anna and Darrell, Trevor and Canny, John and Akata, Zeynep", booktitle = "ECCV", title = "Textual explanations for self-driving vehicles", year = "2018" }
Used in papers:
Xia et al., Periphery-Fovea Multi-Resolution Driving Model Guided by Human Attention, WACV, 2020 | paper | code
@inproceedings{2020_WACV_Xia, author = "Xia, Ye and Kim, Jinkyu and Canny, John and Zipser, Karl and Canas-Bajo, Teresa and Whitney, David", booktitle = "WACV", title = "Periphery-fovea multi-resolution driving model guided by human attention", year = "2020" }
Kim et al., Advisable Learning for Self-driving Vehicles by Internalizing Observation-to-Action Rules, CVPR, 2020 | paper | code
-
Dataset(s): BDD-X, CARLA
@inproceedings{2020_CVPR_Kim, author = "Kim, Jinkyu and Moon, Suhong and Rohrbach, Anna and Darrell, Trevor and Canny, John", booktitle = "CVPR", title = "Advisable learning for self-driving vehicles by internalizing observation-to-action rules", year = "2020" }
Kim et al., Textual Explanations for Self-Driving Vehicles, ECCV, 2018 | paper | code
-
Dataset(s): BDD-X
@inproceedings{2018_ECCV_Kim, author = "Kim, Jinkyu and Rohrbach, Anna and Darrell, Trevor and Canny, John and Akata, Zeynep", booktitle = "ECCV", title = "Textual explanations for self-driving vehicles", year = "2018" }
DR(eye)VE | paper | link
-
Description: Driving videos recorded on-road with corresponding gaze data of the driver
-
Data: eye-tracking, scene video, vehicle data
-
Annotations: weather and road type labels
@article{2018_PAMI_Palazzi, author = "Palazzi, Andrea and Abati, Davide and Solera, Francesco and Cucchiara, Rita and others", journal = "IEEE TPAMI", number = "7", pages = "1720--1733", title = "{Predicting the Driver's Focus of Attention: the DR (eye) VE Project}", volume = "41", year = "2018" }
Used in papers:
Hu et al., Context-Aware Driver Attention Estimation Using Multi-Hierarchy Saliency Fusion With Gaze Tracking, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Hu, author = "Hu, Zhongxu and Cai, Yuxin and Li, Qinghua and Su, Kui and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Context-Aware Driver Attention Estimation Using Multi-Hierarchy Saliency Fusion With Gaze Tracking", year = "2024" }
Kotseruba et al., SCOUT+: Towards Practical Task-Driven Drivers’ Gaze Prediction, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_2, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "{SCOUT+: Towards Practical Task-Driven Drivers' Gaze Prediction}", year = "2024" }
Kotseruba et al., Data Limitations for Modeling Top-Down Effects on Drivers’ Attention, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba_1, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "Intelligent Vehicles Symposium (IV)", title = "Data Limitations for Modeling Top-Down Effects on Drivers' Attention", year = "2024" }
Kotseruba et al., Understanding and Modeling the Effects of Task and Context on Drivers’ Gaze Allocation, IV, 2024 | paper | code
@inproceedings{2024_IV_Kotseruba, author = "Kotseruba, Iuliia and Tsotsos, John K", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "1337--1344", title = "Understanding and modeling the effects of task and context on drivers’ gaze allocation", year = "2024" }
Deng et al., Driving Visual Saliency Prediction of Dynamic Night Scenes via a Spatio-Temporal Dual-Encoder Network, Trans. ITS, 2023 | paper | code
-
Dataset(s): DrFixD-night, DR(eye)VE
@article{2023_T-ITS_Deng, author = "Deng, Tao and Jiang, Lianfang and Shi, Yi and Wu, Jiang and Wu, Zhangbi and Yan, Shun and Zhang, Xianshi and Yan, Hongmei", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driving Visual Saliency Prediction of Dynamic Night Scenes via a Spatio-Temporal Dual-Encoder Network", year = "2023" }
Zhu et al., Unsupervised Self-Driving Attention Prediction via Uncertainty Mining and Knowledge Embedding, ICCV, 2023 | paper | code
@inproceedings{2023_ICCV_Zhu, author = "Zhu, Pengfei and Qi, Mengshi and Li, Xia and Li, Weijian and Ma, Huadong", booktitle = "Proceedings of the IEEE/CVF International Conference on Computer Vision", pages = "8558--8568", title = "Unsupervised self-driving attention prediction via uncertainty mining and knowledge embedding", year = "2023" }
Gan et al., Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes, Trans. ITS, 2022 | paper
-
Dataset(s): BDD-A, DADA-2000, DR(eye)VE, TrafficSaliency
@article{2022_T-ITS_Gan, author = "Gan, Shun and Pei, Xizhe and Ge, Yulong and Wang, Qingfan and Shang, Shi and Li, Shengbo Eben and Nie, Bingbing", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "20912--20925", publisher = "IEEE", title = "Multisource Adaption for Driver Attention Prediction in Arbitrary Driving Scenes", volume = "23", year = "2022" }
Fang et al., DADA: Driver Attention Prediction in Driving Accident Scenarios, Trans. ITS, 2022 | paper | code
-
Dataset(s): TrafficSaliency, DR(eye)VE, DADA-2000
@article{2022_T-ITS_Fang, author = "Fang, Jianwu and Yan, Dingxin and Qiao, Jiahuan and Xue, Jianru and Yu, Hongkai", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "6", pages = "4959--4971", publisher = "IEEE", title = "DADA: Driver attention prediction in driving accident scenarios", volume = "23", year = "2022" }
Pal et al., “Looking at the right stuff” - Guided semantic-gaze for autonomous driving, CVPR, 2020 | paper | code
@inproceedings{2020_CVPR_Pal, author = "Pal, Anwesan and Mondal, Sayan and Christensen, Henrik I", booktitle = "CVPR", title = {{"Looking at the Right Stuff" -- Guided Semantic-Gaze for Autonomous Driving}}, year = "2020" }
Ning et al., An Efficient Model for Driving Focus of Attention Prediction using Deep Learning, ITSC, 2019 | paper
-
Dataset(s): DR(eye)VE
@inproceedings{2019_ITSC_Ning, author = "Ning, Minghao and Lu, Chao and Gong, Jianwei", booktitle = "ITSC", title = "{An Efficient Model for Driving Focus of Attention Prediction using Deep Learning}", year = "2019" }
Palazzi et al., Predicting the Driver’s Focus of Attention: the DR(eye)VE Project, PAMI, 2018 | paper | code
-
Dataset(s): DR(eye)VE
@article{2018_PAMI_Palazzi, author = "Palazzi, Andrea and Abati, Davide and Solera, Francesco and Cucchiara, Rita and others", journal = "IEEE TPAMI", number = "7", pages = "1720--1733", title = "{Predicting the Driver's Focus of Attention: the DR (eye) VE Project}", volume = "41", year = "2018" }
Tawari et al., A Computational Framework for Driver’s Visual Attention Using A Fully Convolutional Architecture, IV, 2017 | paper
-
Dataset(s): DR(eye)VE
@inproceedings{2017_IV_Tawari, author = "Tawari, Ashish and Kang, Byeongkeun", booktitle = "IV", title = "A computational framework for driver's visual attention using a fully convolutional architecture", year = "2017" }
Palazzi et al., Learning Where to Attend Like a Human Driver, IV, 2017 | paper | code
-
Dataset(s): DR(eye)VE
@inproceedings{2017_IV_Palazzi, author = "Palazzi, Andrea and Solera, Francesco and Calderara, Simone and Alletto, Stefano and Cucchiara, Rita", booktitle = "IV", title = "Learning where to attend like a human driver", year = "2017" }
Huang et al., Driver Distraction Detection Based on the True Driver’s Focus of Attention, Trans. ITS, 2021 | paper
-
Dataset(s): DADA-2000, TrafficSaliency, BDD-A, DR(eye)VE, private
@article{2021_T-ITS_Huang, author = "Huang, Jianling and Long, Yan and Zhao, Xiaohua", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driver Distraction Detection Based on the True Driver's Focus of Attention", year = "2021" }
Xia et al., Periphery-Fovea Multi-Resolution Driving Model Guided by Human Attention, WACV, 2020 | paper | code
@inproceedings{2020_WACV_Xia, author = "Xia, Ye and Kim, Jinkyu and Canny, John and Zipser, Karl and Canas-Bajo, Teresa and Whitney, David", booktitle = "WACV", title = "Periphery-fovea multi-resolution driving model guided by human attention", year = "2020" }
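DR(eye)VE ships raw gaze recordings, and the prediction papers above typically convert per-frame fixation points into continuous ground-truth maps by accumulating them and smoothing with a Gaussian that approximates foveal extent. A minimal sketch, assuming fixations are already available as pixel coordinates; the sigma is illustrative, as each paper chooses its own:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_map(fixations, height: int, width: int, sigma: float = 30.0):
    """Build a normalized gaze-density map from (x, y) fixation points."""
    m = np.zeros((height, width), dtype=float)
    for x, y in fixations:
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < height and 0 <= xi < width:
            m[yi, xi] += 1.0
    m = gaussian_filter(m, sigma=sigma)  # approximate foveal spread
    return m / m.max() if m.max() > 0 else m

# Example on a 1080p frame with two nearby fixations:
gaze_map = fixations_to_map([(960.0, 540.0), (1000.0, 520.0)], 1080, 1920)
```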
HDD | paper | link
-
Full name: Honda Research Institute Driving Dataset
-
Description: A large naturalistic driving dataset with driving footage, vehicle telemetry and annotations for vehicle actions and their justifications
-
Data: scene video, vehicle data
-
Annotations: bounding boxes, action labels
@inproceedings{2018_CVPR_Ramanishka, author = "Ramanishka, Vasili and Chen, Yi-Ting and Misu, Teruhisa and Saenko, Kate", booktitle = "CVPR", title = "Toward driving scene understanding: A dataset for learning driver behavior and causal reasoning", year = "2018" }
AUCD2 | paper | link
-
Full name: American University in Cairo (AUC) Distracted Driver’s Dataset
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video
-
Annotations: action labels
@inproceedings{2017_NeurIPS_Abouelnaga, author = "Abouelnaga, Yehya and Eraqi, Hesham M. and Moustafa, Mohamed N.", booktitle = "NeurIPS Workshop on Machine Learning for Intelligent Transportation Systems", title = "Real-time Distracted Driver Posture Classification", year = "2017" }
Used in papers:
Li et al., Domain Adaptive Driver Distraction Detection Based on Partial Feature Alignment and Confusion-Minimized Classification, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Li_2, author = "Li, Guofa and Wang, Guanglei and Guo, Zizheng and Liu, Qing and Luo, Xiyuan and Yuan, Bangwei and Li, Mingrui and Yang, Lu", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Domain Adaptive Driver Distraction Detection Based on Partial Feature Alignment and Confusion-Minimized Classification", year = "2024" }
Yang et al., Quantitative Identification of Driver Distraction: A Weakly Supervised Contrastive Learning Approach, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Yang, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Quantitative identification of driver distraction: A weakly supervised contrastive learning approach", year = "2024" }
Ma et al., ViT-DD: Multi-Task Vision Transformer for Semi-Supervised Driver Distraction Detection, IV, 2024 | paper | code
@inproceedings{2024_IV_Ma, author = "Ma, Yunsheng and Wang, Ziran", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "417--423", title = "Vit-dd: Multi-task vision transformer for semi-supervised driver distraction detection", year = "2024" }
Mittal et al., CAT-CapsNet: A Convolutional and Attention Based Capsule Network to Detect the Driver’s Distraction, Trans. ITS, 2023 | paper
-
Dataset(s): AUCD2, Statefarm
@article{2023_T-ITS_Mittal, author = "Mittal, Himanshu and Verma, Bindu", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "CAT-CapsNet: A Convolutional and Attention Based Capsule Network to Detect the Driver’s Distraction", year = "2023" }
C42CN | paper | link
-
Description: A multimodal dataset acquired in a controlled experiment on a driving simulator under four conditions: no distraction, cognitive distraction, emotional distraction, and sensorimotor distraction
-
Data: eye-tracking, scene video, physiological signal
@article{2017_NatSciData_Taamneh, author = "Taamneh, Salah and Tsiamyrtzis, Panagiotis and Dcosta, Malcolm and Buddharaju, Pradeep and Khatri, Ashik and Manser, Michael and Ferris, Thomas and Wunderlich, Robert and Pavlidis, Ioannis", journal = "Scientific Data", pages = "170110", title = "A multimodal dataset for various forms of distracted driving", volume = "4", year = "2017" }
Used in papers:
Chen et al., Fine-Grained Detection of Driver Distraction Based on Neural Architecture Search, Trans. ITS, 2021 | paper
-
Dataset(s): C42CN
@article{2021_T-ITS_Chen, author = "Chen, Jie and Jiang, YaNan and Huang, ZhiXiang and Guo, XiaoHui and Wu, BoCai and Sun, Long and Wu, Tao", journal = "IEEE Transactions on Intelligent Transportation Systems", title = "Fine-Grained Detection of Driver Distraction Based on Neural Architecture Search", year = "2021" }
DDD | paper | link
-
Full name: Driver Drowsiness Detection Dataset
-
Description: Videos of human subjects simulating different levels of drowsiness while driving in a simulator
-
Data: driver video
-
Annotations: drowsiness labels
@inproceedings{2017_ACCV_Weng, author = "Weng, Ching-Hua and Lai, Ying-Hsiu and Lai, Shang-Hong", booktitle = "ACCV", title = "Driver drowsiness detection via a hierarchical temporal deep belief network", year = "2016" }
Used in papers:
Chiou et al., Driver Monitoring Using Sparse Representation With Part-Based Temporal Face Descriptors, Trans. ITS, 2019 | paper
@article{2019_T-ITS_Chiou, author = "Chiou, Chien-Yu and Wang, Wei-Cheng and Lu, Shueh-Chou and Huang, Chun-Rong and Chung, Pau-Choo and Lai, Yun-Yang", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "1", pages = "346--361", publisher = "IEEE", title = "Driver monitoring using sparse representation with part-based temporal face descriptors", volume = "21", year = "2019" }
Wu et al., Driver Drowsiness Detection Based on Joint Human Face and Facial Landmark Localization With Cheap Operations, Trans. ITS, 2024 | paper
-
Dataset(s): DDD
@article{2024_T-ITS_Wu, author = "Wu, Qingtian and Li, Nannan and Zhang, Liming and Yu, Fei Richard", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Driver Drowsiness Detection Based on Joint Human Face and Facial Landmark Localization With Cheap Operations", year = "2024" }
Yang et al., Video-Based Driver Drowsiness Detection With Optimised Utilization of Key Facial Features, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Yang_1, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Video-Based Driver Drowsiness Detection With Optimised Utilization of Key Facial Features", year = "2024" }
Tufekci et al., Detecting Driver Drowsiness as an Anomaly Using LSTM Autoencoders, ECCVW, 2022 | paper
-
Dataset(s): DDD
@inproceedings{2022_ECCVW_Tufekci, author = {T{\"u}fekci, G{\"u}lin and Kayaba{\c{s}}{\i}, Alper and Akag{\"u}nd{\"u}z, Erdem and Ulusoy, {\.I}lkay}, booktitle = "Computer Vision--ECCV 2022 Workshops: Tel Aviv, Israel, October 23--27, 2022, Proceedings, Part VI", organization = "Springer", pages = "549--559", title = "Detecting Driver Drowsiness as an Anomaly Using LSTM Autoencoders", year = "2023" }
Ahmed et al., Intelligent Driver Drowsiness Detection for Traffic Safety Based on Multi CNN Deep Model and Facial Subsampling, Trans. ITS, 2021 | paper
-
Dataset(s): DDD
@article{2021_T-ITS_Ahmed, author = "Ahmed, Muneeb and Masood, Sarfaraz and Ahmad, Musheer and Abd El-Latif, Ahmed A", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "10", pages = "19743--19752", publisher = "IEEE", title = "Intelligent driver drowsiness detection for traffic safety based on multi CNN deep model and facial subsampling", volume = "23", year = "2021" }
Huang et al., RF-DCM: Multi-Granularity Deep Convolutional Model Based on Feature Recalibration and Fusion for Driver Fatigue Detection, Trans. ITS, 2020 | paper
-
Dataset(s): DDD
@article{2020_T-ITS_Huang, author = "Huang, Rui and Wang, Yan and Li, Zijian and Lei, Zeyu and Xu, Yufan", journal = "IEEE Transactions on Intelligent Transportation Systems", title = "RF-DCM: Multi-Granularity Deep Convolutional Model Based on Feature Recalibration and Fusion for Driver Fatigue Detection", year = "2020" }
Vijay et al., Real-Time Driver Drowsiness Detection using Facial Action Units, ICPR, 2020 | paper
-
Dataset(s): DDD
@inproceedings{2020_ICPR_Vijay, author = "Vijay, Malaika and Vinayak, Nandagopal Netrakanti and Nunna, Maanvi and Natarajan, Subramanyam", booktitle = "ICPR", title = "Real-Time Driver Drowsiness Detection using Facial Action Units", year = "2021" }
Yu et al., Driver Drowsiness Detection Using Condition-Adaptive Representation Learning Framework, Trans. ITS, 2018 | paper
-
Dataset(s): DDD
@article{2018_T-ITS_Yu, author = "Yu, Jongmin and Park, Sangwoo and Lee, Sangwook and Jeon, Moongu", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "11", pages = "4206--4218", title = "Driver drowsiness detection using condition-adaptive representation learning framework", volume = "20", year = "2018" }
Yu et al., Representation Learning, Scene Understanding, and Feature Fusion for Drowsiness Detection, ACCVW, 2017 | paper
-
Dataset(s): DDD
@inproceedings{2017_ACCVW_Yu, author = "Yu, Jongmin and Park, Sangwoo and Lee, Sangwook and Jeon, Moongu", booktitle = "ACCV", title = "Representation learning, scene understanding, and feature fusion for drowsiness detection", year = "2016" }
Shih et al., MSTN: Multistage Spatial-Temporal Network for Driver Drowsiness Detection, ACCVW, 2017 | paper
-
Dataset(s): DDD
@inproceedings{2017_ACCVW_Shih, author = "Shih, Tun-Huai and Hsu, Chiou-Ting", booktitle = "ACCV", title = "MSTN: Multistage spatial-temporal network for driver drowsiness detection", year = "2016" }
Huynh et al., Detection of Driver Drowsiness Using 3D Deep Neural Network and Semi-Supervised Gradient Boosting Machine, ACCVW, 2017 | paper
-
Dataset(s): DDD
@inproceedings{2017_ACCVW_Huynh, author = "Huynh, Xuan-Phung and Park, Sang-Min and Kim, Yong-Guk", booktitle = "ACCV", title = "Detection of driver drowsiness using 3D deep neural network and semi-supervised gradient boosting machine", year = "2016" }
Weng et al., Driver Drowsiness Detection via a Hierarchical Temporal Deep Belief Network, ACCV, 2017 | paper
-
Dataset(s): DDD
@inproceedings{2017_ACCV_Weng, author = "Weng, Ching-Hua and Lai, Ying-Hsiu and Lai, Shang-Hong", booktitle = "ACCV", title = "Driver drowsiness detection via a hierarchical temporal deep belief network", year = "2016" }
Park et al., Driver drowsiness detection system based on feature representation learning using various deep networks, ACCV, 2016 | paper
-
Dataset(s): DDD
@inproceedings{2016_ACCV_Park, author = "Park, Sanghyuk and Pan, Fei and Kang, Sunghun and Yoo, Chang D", booktitle = "ACCV", title = "Driver drowsiness detection system based on feature representation learning using various deep networks", year = "2016" }
Dashcam dataset | link
-
Description: Driving videos with steering information recorded on road
-
Data: scene video
Used in papers:
Mund et al., Visualizing the Learning Progress of Self-Driving Cars, ITSC, 2018 | paper
-
Dataset(s): Dashcam dataset
@inproceedings{2018_ITSC_Mund, author = {Mund, Sandro and Frank, Rapha{\"e}l and Varisteas, Georgios and State, Radu}, booktitle = "ITSC", title = "{Visualizing the Learning Progress of Self-Driving Cars}", year = "2018" }
DriveAHead | paper | link
-
Description: Videos of drivers with frame-level head pose annotations obtained from a motion-capture system
-
Data: driver video
-
Annotations: occlusion, head pose, depth
@inproceedings{2017_CVPRW_Schwarz, author = "Schwarz, Anke and Haurilet, Monica and Martinez, Manuel and Stiefelhagen, Rainer", booktitle = "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops", pages = "1--10", title = "Driveahead-a large-scale driver head pose dataset", year = "2017" }
Brain4Cars | paper | link
-
Description: Synchronized videos from scene and driver-facing cameras of drivers performing various maneuvers in traffic
-
Data: driver video, scene video, vehicle data
-
Annotations: action labels
@inproceedings{2015_ICCV_Jain, author = "Jain, Ashesh and Koppula, Hema S and Raghavan, Bharad and Soh, Shane and Saxena, Ashutosh", booktitle = "ICCV", title = "Car that knows before you do: Anticipating maneuvers via learning temporal driving models", year = "2015" }
Used in papers:
Guo et al., Temporal Information Fusion Network for Driving Behavior Prediction, Trans. ITS, 2023 | paper
-
Dataset(s): Brain4Cars, private
@article{2023_T-ITS_Guo, author = "Guo, Chenghao and Liu, Haizhuang and Chen, Jiansheng and Ma, Huimin", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Temporal Information Fusion Network for Driving Behavior Prediction", year = "2023" }
Jain et al., Recurrent Neural Networks for Driver Activity Anticipation via Sensory-Fusion Architecture, ICRA, 2016 | paper | code
-
Dataset(s): Brain4Cars
@inproceedings{2016_ICRA_Jain, author = "Jain, Ashesh and Singh, Avi and Koppula, Hema S and Soh, Shane and Saxena, Ashutosh", booktitle = "ICRA", title = "Recurrent neural networks for driver activity anticipation via sensory-fusion architecture", year = "2016" }
Jain et al., Car that Knows Before You Do: Anticipating Maneuvers via Learning Temporal Driving Models, ICCV, 2015 | paper
-
Dataset(s): Brain4Cars
@inproceedings{2015_ICCV_Jain, author = "Jain, Ashesh and Koppula, Hema S and Raghavan, Bharad and Soh, Shane and Saxena, Ashutosh", booktitle = "ICCV", title = "Car that knows before you do: Anticipating maneuvers via learning temporal driving models", year = "2015" }
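The Brain4Cars line of work evaluates maneuver anticipation with precision, recall, and time-to-maneuver, i.e., how many seconds before the maneuver the correct prediction fired. A minimal sketch of those metrics, assuming each trial has been reduced to the tuple described in the comment; this event format is illustrative, not the papers’ actual data layout:

```python
import numpy as np

def anticipation_metrics(events):
    """events: list of (pred_label, true_label, pred_time_s, maneuver_time_s),
    where pred_label is None if the model never fired before the maneuver."""
    tp = sum(1 for p, t, _, _ in events if p is not None and p == t)
    fired = sum(1 for p, _, _, _ in events if p is not None)
    precision = tp / fired if fired else 0.0
    recall = tp / len(events) if events else 0.0
    # Time-to-maneuver is averaged over correct predictions only.
    ttm = float(np.mean([m - s for p, t, s, m in events
                         if p is not None and p == t])) if tp else 0.0
    return precision, recall, ttm

# Example: one correct early prediction and one missed maneuver.
print(anticipation_metrics([("left_turn", "left_turn", 2.0, 5.5),
                            (None, "right_turn", 0.0, 4.0)]))
```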
DAD | paper | link
-
Description: Videos of accidents recorded with dashboard cameras, sourced from video-hosting sites, with annotations for the accidents and the road users involved in them
-
Data: scene video
-
Annotations: bounding boxes, accident category labels
@inproceedings{2016_ACCV_Chan, author = "Chan, Fu-Hsiang and Chen, Yu-Ting and Xiang, Yu and Sun, Min", booktitle = "ACCV", title = "Anticipating accidents in dashcam videos", year = "2016" }
Used in papers:
Chen et al., A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Chen, author = "Chen, Jiansong and Zhang, Qixiang and Chen, Jinxin and Wang, Jinxiang and Fang, Zhenwu and Liu, Yahui and Yin, Guodong", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior", year = "2024" }
Kopuklu et al., Driver Anomaly Detection: A Dataset and Contrastive Learning Approach, WACV, 2021 | paper
-
Dataset(s): DAD
@inproceedings{2021_WACV_Kopuklu, author = "Kopuklu, Okan and Zheng, Jiapeng and Xu, Hang and Rigoll, Gerhard", booktitle = "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision", pages = "91--100", title = "Driver anomaly detection: A dataset and contrastive learning approach", year = "2021" }
DROZY | paper | link
-
Description: Videos and physiological data from subjects in different drowsiness states after prolonged waking
-
Data: driver video, physiological signal
-
Annotations: drowsiness labels
@inproceedings{2016_WACV_Massoz, author = "Massoz, Quentin and Langohr, Thomas and Fran{\c{c}}ois, Cl{\'e}mentine and Verly, Jacques G", booktitle = "WACV", title = "The ULg multimodality drowsiness database (called DROZY) and examples of use", year = "2016" }
SFDDD | link
-
Full name: State Farm Distracted Driver Detection
-
Description: Videos of drivers performing secondary tasks
-
Data: driver video
-
Annotations: action labels
Used in papers:
Chen et al., A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Chen, author = "Chen, Jiansong and Zhang, Qixiang and Chen, Jinxin and Wang, Jinxiang and Fang, Zhenwu and Liu, Yahui and Yin, Guodong", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "A Driving Risk Assessment Framework Considering Driver’s Fatigue State and Distraction Behavior", year = "2024" }
Li et al., Domain Adaptive Driver Distraction Detection Based on Partial Feature Alignment and Confusion-Minimized Classification, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Li_2, author = "Li, Guofa and Wang, Guanglei and Guo, Zizheng and Liu, Qing and Luo, Xiyuan and Yuan, Bangwei and Li, Mingrui and Yang, Lu", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Domain Adaptive Driver Distraction Detection Based on Partial Feature Alignment and Confusion-Minimized Classification", year = "2024" }
Li et al., A Lightweight and Efficient Distracted Driver Detection Model Fusing Convolutional Neural Network and Vision Transformer, Trans. ITS, 2024 | paper | code
-
Dataset(s): SFDDD, 100-Driver
@article{2024_T-ITS_Li_1, author = "Li, Zhao and Zhao, Xia and Wu, Fuwei and Chen, Dan and Wang, Chang", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "A Lightweight and Efficient Distracted Driver Detection Model Fusing Convolutional Neural Network and Vision Transformer", year = "2024" }
Hasan et al., Vision-Language Models Can Identify Distracted Driver Behavior From Naturalistic Videos, Trans. ITS, 2024 | paper | code
@article{2024_T-ITS_Hasan, author = "Hasan, Md Zahid and Chen, Jiajing and Wang, Jiyang and Rahman, Mohammed Shaiqur and Joshi, Ameya and Velipasalar, Senem and Hegde, Chinmay and Sharma, Anuj and Sarkar, Soumik", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Vision-language models can identify distracted driver behavior from naturalistic videos", year = "2024" }
Chai et al., Rethinking the Evaluation of Driver Behavior Analysis Approaches, Trans. ITS, 2024 | paper
-
Dataset(s): SFDDD, AI CITY NDAR
@article{2024_T-ITS_Chai, author = "Chai, Weiheng and Wang, Jiyang and Chen, Jiajing and Velipasalar, Senem and Sharma, Anuj", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Rethinking the Evaluation of Driver Behavior Analysis Approaches", year = "2024" }
Ma et al., ViT-DD: Multi-Task Vision Transformer for Semi-Supervised Driver Distraction Detection, IV, 2024 | paper | code
@inproceedings{2024_IV_Ma, author = "Ma, Yunsheng and Wang, Ziran", booktitle = "2024 IEEE Intelligent Vehicles Symposium (IV)", organization = "IEEE", pages = "417--423", title = "Vit-dd: Multi-task vision transformer for semi-supervised driver distraction detection", year = "2024" }
Mittal et al., CAT-CapsNet: A Convolutional and Attention Based Capsule Network to Detect the Driver’s Distraction, Trans. ITS, 2023 | paper
@article{2023_T-ITS_Mittal, author = "Mittal, Himanshu and Verma, Bindu", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "CAT-CapsNet: A Convolutional and Attention Based Capsule Network to Detect the Driver’s Distraction", year = "2023" }
TETD | paper | link
-
Full name: Traffic Eye Tracking Dataset
-
Description: A set of 100 images of traffic scenes with corresponding eye-tracking data from 20 subjects
-
Data: eye-tracking, scene images
@article{2016_T-ITS_Deng, author = "Deng, Tao and Yang, Kaifu and Li, Yongjie and Yan, Hongmei", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "7", pages = "2051--2062", publisher = "IEEE", title = "Where does the driver look? Top-down-based saliency detection in a traffic driving environment", volume = "17", year = "2016" }
Used in papers:
Deng et al., Learning to Boost Bottom-Up Fixation Prediction in Driving Environments via Random Forest, Trans. ITS, 2018 | paper
-
Dataset(s): TETD
@article{2018_T-ITS_Deng, author = "Deng, Tao and Yan, Hongmei and Li, Yong-Jie", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "9", pages = "3059--3067", title = "Learning to boost bottom-up fixation prediction in driving environments via random forest", volume = "19", year = "2018" }
DIPLECS Surrey | paper | link
-
Description: Driving videos with steering information recorded in different cars and environments
-
Data: scene video, vehicle data
@article{2015_TranVehTech_Pugeault, author = "Pugeault, Nicolas and Bowden, Richard", journal = "IEEE Transactions on Vehicular Technology", number = "12", pages = "5424--5438", publisher = "IEEE", title = "How much of driving is preattentive?", volume = "64", year = "2015" }
YawDD | paper | link
-
Full name: Yawning Detection Dataset
-
Description: Recordings of human subjects in parked vehicles simulating normal driving, singing and talking, and yawning
-
Data: driver video
-
Annotations: bounding boxes, action labels
@inproceedings{2014_ACM_Abtahi, author = "Abtahi, Shabnam and Omidyeganeh, Mona and Shirmohammadi, Shervin and Hariri, Behnoosh", booktitle = "Proceedings of the ACM Multimedia Systems Conference", title = "{YawDD: A yawning detection dataset}", year = "2014" }
Used in papers:
Chiou et al., Driver Monitoring Using Sparse Representation With Part-Based Temporal Face Descriptors, Trans. ITS, 2019 | paper
@article{2019_T-ITS_Chiou, author = "Chiou, Chien-Yu and Wang, Wei-Cheng and Lu, Shueh-Chou and Huang, Chun-Rong and Chung, Pau-Choo and Lai, Yun-Yang", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "1", pages = "346--361", publisher = "IEEE", title = "Driver monitoring using sparse representation with part-based temporal face descriptors", volume = "21", year = "2019" }
Yang et al., Video-Based Driver Drowsiness Detection With Optimised Utilization of Key Facial Features, Trans. ITS, 2024 | paper
@article{2024_T-ITS_Yang_1, author = "Yang, Haohan and Liu, Haochen and Hu, Zhongxu and Nguyen, Anh-Tu and Guerra, Thierry-Marie and Lv, Chen", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "Video-Based Driver Drowsiness Detection With Optimised Utilization of Key Facial Features", year = "2024" }
Lu et al., JHPFA-Net: Joint Head Pose and Facial Action Network for Driver Yawning Detection Across Arbitrary Poses in Videos, Trans. ITS, 2023 | paper
-
Dataset(s): YawDD
@article{2023_T-ITS_Lu, author = "Lu, Yansha and Liu, Chunsheng and Chang, Faliang and Liu, Hui and Huan, Hengqiang", journal = "IEEE Transactions on Intelligent Transportation Systems", publisher = "IEEE", title = "JHPFA-Net: Joint Head Pose and Facial Action Network for Driver Yawning Detection Across Arbitrary Poses in Videos", year = "2023" }
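Yawning detectors trained on YawDD commonly threshold a mouth-aspect-ratio (MAR) style openness measure computed from facial landmarks and require it to persist for several frames. A minimal sketch, assuming 68-point dlib-style landmarks in which indices 60-67 cover the inner lips; the threshold and duration values are illustrative:

```python
import numpy as np

def mouth_aspect_ratio(landmarks: np.ndarray) -> float:
    """MAR from 68-point landmarks: mean vertical inner-lip opening
    (pairs 61-67, 62-66, 63-65) divided by mouth width (60 to 64)."""
    vertical = (np.linalg.norm(landmarks[61] - landmarks[67]) +
                np.linalg.norm(landmarks[62] - landmarks[66]) +
                np.linalg.norm(landmarks[63] - landmarks[65])) / 3.0
    horizontal = np.linalg.norm(landmarks[60] - landmarks[64])
    return float(vertical / (horizontal + 1e-7))

def is_yawning(mar_sequence, thresh: float = 0.6, min_frames: int = 20) -> bool:
    """Flag a yawn when MAR stays above thresh for min_frames consecutive frames."""
    run = 0
    for mar in mar_sequence:
        run = run + 1 if mar > thresh else 0
        if run >= min_frames:
            return True
    return False
```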
3DDS | paper | link
-
Full name: 3D Driving School Dataset
-
Description: Videos and eye-tracking data of people playing 3D driving simulator game
-
Data: eye-tracking, scene video
@inproceedings{2011_BMVC_Borji, author = "Borji, Ali and Sihite, Dicky N and Itti, Laurent", booktitle = "BMVC", title = "Computational Modeling of Top-down Visual Attention in Interactive Environments", year = "2011" }
Used in papers:
Tavakoli et al., Digging Deeper into Egocentric Gaze Prediction, WACV, 2019 | paper
-
Dataset(s): 3DDS
@inproceedings{2019_WACV_Tavakoli, author = "Tavakoli, Hamed Rezazadegan and Rahtu, Esa and Kannala, Juho and Borji, Ali", booktitle = "WACV", title = "Digging deeper into egocentric gaze prediction", year = "2019" }
Borji et al., What/Where to Look Next? Modeling Top-Down Visual Attention in Complex Interactive Environments, Transactions on Systems, Man, and Cybernetics Systems, 2014 | paper
-
Dataset(s): 3DDS
@article{2014_TransSysManCybernetics_Borji, author = "Borji, Ali and Sihite, Dicky N and Itti, Laurent", journal = "IEEE Transactions on Systems, Man, and Cybernetics: Systems", number = "5", pages = "523--538", title = "{What/where to look next? Modeling top-down visual attention in complex interactive environments}", volume = "44", year = "2014" }
Borji et al., Probabilistic Learning of Task-Specific Visual Attention, CVPR, 2012 | paper | code
-
Dataset(s): 3DDS
@inproceedings{2012_CVPR_Borji, author = "Borji, Ali and Sihite, Dicky N and Itti, Laurent", booktitle = "CVPR", title = "Probabilistic learning of task-specific visual attention", year = "2012" }
Borji et al., Computational Modeling of Top-down Visual Attention in Interactive Environments, BMVC, 2011 | paper
-
Dataset(s): 3DDS
@inproceedings{2011_BMVC_Borji, author = "Borji, Ali and Sihite, Dicky N and Itti, Laurent", booktitle = "BMVC", title = "Computational Modeling of Top-down Visual Attention in Interactive Environments", year = "2011" }
DIPLECS Sweden | paper | link
-
Description: Driving videos with steering information recorded in different cars and environments
-
Data: scene video, vehicle data
@inproceedings{2010_ECCV_Pugeault, author = "Pugeault, Nicolas and Bowden, Richard", booktitle = "ECCV", title = "Learning pre-attentive driving behaviour from holistic visual features", year = "2010" }
BU HeadTracking | paper | link
-
Full name: Boston University Head Tracking Dataset
-
Description: Videos and head tracking information for multiple human subjects recorded in diverse conditions
-
Data: driver video
-
Annotations: head pose
@article{2000_PAMI_LaCascia, author = "La Cascia, Marco and Sclaroff, Stan and Athitsos, Vassilis", journal = "IEEE Transactions on Pattern Analysis and Machine Intelligence", number = "4", pages = "322--336", publisher = "IEEE", title = "Fast, reliable head tracking under varying illumination: An approach based on registration of texture-mapped 3D models", volume = "22", year = "2000" }
Used in papers:
Mbouna et al., Visual Analysis of Eye State and Head Pose for Driver Alertness Monitoring, Trans. ITS, 2013 | paper
-
Dataset(s): BU HeadTracking, private
@article{2013_T-ITS_Mbouna, author = "Mbouna, Ralph Oyini and Kong, Seong G and Chun, Myung-Geun", journal = "IEEE Transactions on Intelligent Transportation Systems", number = "3", pages = "1462--1469", title = "Visual analysis of eye state and head pose for driver alertness monitoring", volume = "14", year = "2013" }