Mahesh Sudhakar

Computer Vision Research Engineer

Musashi Americas


  • Computer Vision Research Engineer at Musashi Auto Parts Canada (Musashi AI).
  • Former Post-graduate Research Associate at the Bell Multimedia Laboratory, where I worked on eXplainable Artificial Intelligence (XAI) in collaboration with LG AI Research.
  • Master’s graduate in Computer Vision and Robotics from the Department of Electrical and Computer Engineering (ECE) at the University of Toronto (Class of 2020).
  • Detail-oriented Machine Learning (AI) Engineer with 3+ years of full-time work experience in backend software development, database management (SQL), and application support.
  • Our paper on the novel XAI algorithm Semantic Input Sampling for Explanation (SISE) was accepted and presented at the AAAI-21 conference.
  • Two new papers, Ada-SISE and Integrated Grad-CAM, were accepted and presented at the IEEE ICASSP-21 conference.


  • Computer Vision
  • Explainable AI (XAI)
  • Robotics and Control
  • Machine Learning


  • M.Eng in Electrical and Computer Engineering, 2020

    University of Toronto

  • B.E in Electrical and Electronics Engineering, 2016

    Anna University



AI Engineer

Musashi Auto Parts Canada

Jan 2021 – Present Waterloo
  • Building intelligent edge devices for industrial automation.
  • Owner of the full lifecycle of ‘Real Time Instance Segmentation’ at Musashi AI North America.
  • Automated the cumbersome data annotation process.
  • Working daily with the Machine Build team to optimize camera and sensor placement on FANUC industrial-series robotic manipulator arms.

Research Assistant

University of Toronto

May 2020 – Feb 2021 Toronto
  • Post-graduate researcher at Bell Multimedia Laboratory under the supervision of Prof. Konstantinos (Kostas) N. Plataniotis in collaboration with LG AI Research, working towards decoding deep residual ‘black-box’ Machine Learning classification and detection models.
  • Proposed and developed a novel eXplainable AI algorithm, Semantic Input Sampling for Explanation (SISE), that was integrated along with LG’s existing industrial code stack for fully automated supervision.
  • Read LG AI Research’s official blog here (Korean) or here (English).

Autonomous Systems Simulation Engineer


Jan 2020 – Jun 2020 Toronto
  • Worked in the Simulation and Experimentation team of aUToronto (University of Toronto’s self-driving car team) for SAE AutoDrive challenge.
  • Tested the existing pedestrian detection and tracking stack with virtual data.
  • Designed and built various driving scenarios for the ego vehicle in the simulation environment.

Systems Engineer

Infosys Limited

May 2016 – Jul 2018 Bangalore
  • Backend developer for the UK region of AIMIA (formerly Groupe Aeroplan), a data-driven loyalty analytics company based in Montreal.
  • Developed and delivered multiple MySQL stored procedures in the relational DB management system server.
  • Managed sensitive banking data for retail and logistics clients, and automated numerous routine IT processes (RPA) using the ActiveBatch tool.
  • Addressed client-specific functional requirements and quickly resolved critical production issues with minimal supervision.


Visit my GitHub profile for the complete list


3D Object Detection and Tracking

Vision sensor data (RGB and depth) collected from ‘Pepper’, a semi-humanoid robot provided by the IATSL laboratory, are used to perform 3D human detection and tracking in a household setting, enabling better assistance to elderly or ill adults in home care.

Autonomous System Simulation

Worked in the Simulation and Experimentation team of aUToronto (U of T’s self-driving car team). Zeus, the student-built self-driving car, won all four years of the SAE/GM AutoDrive Challenge.

Feature Detection and Matching

This project implements feature-point detection and matching between stereo image pairs from the KITTI dataset. For a given input RGB image from the left camera, features (image regions that are salient, local, repeatable, compact, and efficient) are identified, and their matches are inspected visually for reliability.
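The matching step can be sketched in a few lines: nearest-neighbour descriptor matching with Lowe's ratio test, which keeps a match only when the best candidate is clearly better than the runner-up. This is a minimal numpy illustration; the function name, descriptor shapes, and the 0.75 threshold are assumptions, not the project's actual code.

```python
import numpy as np

def match_descriptors(desc_left, desc_right, ratio=0.75):
    """Nearest-neighbour matching with Lowe's ratio test.

    desc_left: (N, D) descriptors from the left image.
    desc_right: (M, D) descriptors from the right image (M >= 2).
    Returns a list of (i, j) index pairs that pass the ratio test.
    """
    matches = []
    for i, d in enumerate(desc_left):
        # Euclidean distance from this descriptor to every right-image descriptor
        dists = np.linalg.norm(desc_right - d, axis=1)
        nn = np.argsort(dists)[:2]  # two closest candidates
        best, second = dists[nn[0]], dists[nn[1]]
        # Keep the match only if the best is clearly better than the runner-up
        if best < ratio * second:
            matches.append((i, int(nn[0])))
    return matches
```

Ambiguous features (e.g. repeated texture) fail the ratio test and are dropped, which is the usual first filter before visual inspection.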

IDC type Breast Cancer Classification

Breast cancer classification on Keras (based on the implementation of the CancerNet algorithm by Adrian Rosebrock [1]). Breast cancer is the most common form of cancer in women, and Invasive Ductal Carcinoma (IDC) is the most common form of breast cancer.

Lidar and IMU calibration

Worked on a 6-DOF nonlinear optimization problem to determine the pose vector relating an Inertial Measurement Unit (IMU) to a LiDAR sensor, based on data collected on the Zeus self-driving car during operation.
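The general shape of such a problem can be sketched as aligning corresponding points expressed in the two sensor frames, with the pose parameterized as three Euler angles plus a translation and solved by nonlinear least squares. This is a hedged toy sketch assuming SciPy is available; the parameterization, residual, and function names are illustrative, not the actual Zeus pipeline.

```python
import numpy as np
from scipy.optimize import least_squares

def euler_to_rot(rx, ry, rz):
    """Rotation matrix from Z-Y-X Euler angles (radians)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def residuals(pose, pts_imu, pts_lidar):
    """Residuals of mapping IMU-frame points into the LiDAR frame."""
    R = euler_to_rot(*pose[:3])
    t = pose[3:]
    return ((pts_imu @ R.T + t) - pts_lidar).ravel()

def calibrate(pts_imu, pts_lidar):
    """Solve for the 6-DOF pose (rx, ry, rz, tx, ty, tz) by nonlinear least squares."""
    sol = least_squares(residuals, np.zeros(6), args=(pts_imu, pts_lidar))
    return sol.x
```

With noise-free correspondences and modest rotations, the zero initialization sits inside the convergence basin and the true extrinsic is recovered to numerical precision.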

Object Detection and Instance Segmentation

This assignment focuses on 2D object detection and instance segmentation using depth data. Motivated by the KITTI vision challenge for object detection and tracking, disparity maps between corresponding images from the left (p2) and right (p3) cameras are provided.
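A disparity map relates to metric depth through depth = focal_length × baseline / disparity, which is the usual first step before reasoning about objects in 3D. A minimal numpy sketch, where the function name and the handling of invalid zero disparities are assumptions (typical KITTI values are roughly a 721 px focal length and a 0.54 m baseline):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) to metric depth (metres).

    depth = focal_length * baseline / disparity; invalid (near-zero)
    disparities are mapped to infinity.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth
```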

Explainable AI for Visual Defect Inspection

Developed and studied XAI algorithms (NEU_XAI) that generate saliency maps ranking each pixel of an input test image by its importance to the Machine Learning model’s prediction, with the aim of decoding complex black-box models.

Recent Posts

Semantic Input Sampling for Explanation (SISE) - A Technical Description

TL;DR

  • We propose SISE, a state-of-the-art, post-hoc, CNN-specific visual XAI algorithm.
  • Input: a test image and the trained model.
  • Output: a visual 2D heatmap.
  • Properties: noise-free, high-resolution, class-discriminative, and correlated with the model’s prediction.
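SISE itself is specified in the publications above; as a hedged sketch of the general input-sampling idea behind this family of methods (closer to RISE-style random masking than to SISE's attribution-based sampling), the input is perturbed with random binary masks and each pixel is credited with the average model score over the masks that kept it visible. The model callable and image shapes here are illustrative assumptions.

```python
import numpy as np

def sampling_saliency(model, image, n_masks=300, p=0.5, seed=0):
    """Generic input-sampling saliency (RISE-style illustration, not SISE).

    model: callable mapping an image (H, W) -> class score in [0, 1].
    Pixels whose visibility systematically raises the score get high saliency.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    saliency = np.zeros((h, w))
    weight = np.zeros((h, w))
    for _ in range(n_masks):
        # Random binary mask: each pixel kept with probability p
        mask = (rng.random((h, w)) < p).astype(float)
        score = model(image * mask)
        # Credit every visible pixel with this mask's score
        saliency += score * mask
        weight += mask
    return saliency / np.maximum(weight, 1e-8)
```

For a toy model that only reads one pixel, that pixel dominates the resulting heatmap, which is the behaviour a class-discriminative explanation map should exhibit.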


Ada-SISE: Adaptive Semantic Input Sampling for Efficient Explanation of Convolutional Neural Networks

Explainable AI (XAI) is an active research area to interpret a neural network’s decision by ensuring transparency and trust in …

Integrated Grad-CAM: Sensitivity-Aware Visual Explanation of Deep Convolutional Networks via Integrated Gradient-Based Scoring

Visualizing the features captured by Convolutional Neural Networks (CNNs) is one of the conventional approaches to interpret the …

Explaining Convolutional Neural Networks through Attribution-Based Input Sampling and Block-Wise Feature Aggregation

As an emerging field in Machine Learning, Explainable AI (XAI) has been offering remarkable performance in interpreting the decisions …



Student Mentor

  • Mentored the incoming international graduate students during their initial transition under the iConnect program of the Center for International Experience (CIE), and facilitated intercultural learning by fostering inclusivity and equity.

VP Communications

  • Managed the official social network pages of the ECE Graduate Students Society (ECEGSS) of the University of Toronto, by engaging group members with updates/posts of events regarding academic, social, and professional development activities.

Event Staff

  • Helped at the registration desk, and provided hospitality for the event’s speaker and attendees, at Johnson & Johnson Innovation JLABS - MaRS Centre, for various technical talk events.


  • Academic coordinator of the EEE department during undergrad, acting as a liaison between faculty and students, and leading a team of 50 members to the coveted inter-departmental championship shield.

  • Runners-up in the Texas Instruments India Analog Maker Competition, conducted by the TI University Program in association with Starcom Information Technology Limited and held over a period of one month.


  • Campus Ambassador at the Academic and Campus Events (ACE) and a part-time usher at the convocation hall.

  • Trained to be a Disaster Recovery Representative (DRR) at the workplace.

  • Won various robotics events at inter- and intra-college competitions during undergrad.

  • Was an active member of Serve 360, an NGO dedicated to serving underprivileged children in and around Chennai, India, during undergrad.

View my Cocurricular record here


Drop me an email to hear back from me!