3D Object Detection and Tracking
Course project of AER 1515 - Perception for Robotics
Vision sensor data (RGB and depth) collected from the semi-humanoid robot 'Pepper', provided by the IATSL laboratory, are used to perform 3D human detection and tracking in a household setting, enabling better assistance to elderly or sick adults in home care.
Implemented YOLOv3 for 2D detection and used the depth map to cluster the 3D region of the patient, which is then passed to an Extended Kalman Filter in the ROS environment to track the patient.
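For illustration, the tracking stage can be sketched as a minimal constant-velocity Kalman tracker in Python (the EKF reduces to this linear form when the motion and measurement models are linear). The state layout, noise values, and class name below are assumptions for the sketch, not the project's actual tuning:

```python
import numpy as np

class CVKalmanTracker:
    """Constant-velocity tracker sketch: state [x, y, z, vx, vy, vz],
    measurement is the 3D cluster centroid [x, y, z]. Noise values are
    illustrative assumptions."""

    def __init__(self, dt=0.1):
        n = 6
        self.x = np.zeros((n, 1))                 # state estimate
        self.P = np.eye(n)                        # state covariance
        self.F = np.eye(n)                        # constant-velocity transition
        self.F[:3, 3:] = dt * np.eye(3)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = 0.01 * np.eye(n)                 # process noise (assumed)
        self.R = 0.05 * np.eye(3)                 # measurement noise (assumed)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x

    def update(self, z):
        z = np.asarray(z, dtype=float).reshape(3, 1)
        y = z - self.H @ self.x                   # innovation
        S = self.H @ self.P @ self.H.T + self.R   # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)  # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P
        return self.x
```

In the pipeline described above, each frame's clustered 3D centroid would be fed to `update()` after a `predict()` step, so the track survives brief detection dropouts.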
Clone the workspace provided on GitHub and build it.
Reproduce the results using the launch command:
$ roslaunch det_and_tracking
If it throws an error, create a new workspace following the ROS tutorials, copy the contents of
/src into the new workspace's
src, and rebuild it.
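The fallback steps above can be sketched as follows; the workspace path and source location are hypothetical placeholders, so adjust them to where you cloned the repository:

```shell
# Create a fresh catkin workspace (path is an assumption for this sketch)
mkdir -p ~/catkin_ws/src
# Copy the provided workspace's src contents into the new workspace's src
cp -r ~/det_and_tracking_ws/src/. ~/catkin_ws/src/
# Rebuild and overlay the new workspace
cd ~/catkin_ws
catkin_make
source devel/setup.bash
```

After sourcing `devel/setup.bash`, the `roslaunch` command above should resolve the package in the new workspace.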
- Explainable AI for Visual Defect Inspection
- Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks
- Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring
- Semantic Input Sampling for Explanation (SISE) - A Technical Description
- Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation