I am a Robotics Master's student at Worcester Polytechnic Institute, graduating in May 2024. My focus lies at the intersection of perception, deep learning, and embedded systems, solving engineering and research problems for mobile and manipulator systems. At WPI, I am working on directed research advised by Prof. Berk Calli on an Amazon-funded project, where I decide the best grasping strategy for the robot, choosing among skills such as sliding and pushing for a given picking instance. During the summer of 2023, I completed an internship at Amazon Robotics, where my work revolved around optimizing sorting procedures within warehouse facilities to enhance productivity.
Previously, I interned at the Human-Centered Robotics Lab at the Indian Institute of Technology, Gandhinagar under the guidance of Dr. Vineet Vashista, where I worked on an admittance control strategy and force estimation for a quadcopter model. I was also part of the undergraduate robotics team, Team Automatons, where I was the programming lead for the 2021-2022 season. During my four years on the team, I handled the perception, control, and navigation of robotic systems for the ABU-ROBOCON themes (2019 theme, 2020 theme, 2021 theme, 2022 theme).
I am actively looking for full-time opportunities in the robotics and software industry starting Fall 2024. I believe that diversity brings new perspectives and sparks brilliant ideas. I look forward to learning with experts in academia and industry. Please feel free to reach out to me at ardangle@wpi.edu!
Topics of Interest
Object detection and segmentation, Multi-object tracking, Camera calibration, Multi-sensor Fusion, Neural architectures, State estimation, Perception algorithms, Image reconstruction, Robotic Manipulation.
Working Proficiency
Programming: C, C++, Python, HTML, CSS, JavaScript.
Software: MATLAB, Simulink, ROS, Arduino, V-REP, OpenCV, PyTorch, Keras, Apache Superset, Apache Spark, LaTeX.
Hardware: Jetson Nano, STM32 Discovery board, Arduino, Raspberry Pi, Franka Emika Panda arm.
Education
Master of Science in Robotics Engineering
Worcester Polytechnic Institute, Massachusetts, USA (Aug 2022 - May 2024)
- GPA 4.0/4.0
Bachelor of Engineering in Computer Engineering
Savitribai Phule Pune University, Pune, India (Aug 2018 - May 2022)
- CGPA 9.64/10.0 (Equivalent to 4.0/4.0)
Experience
Software Development Research Intern, Amazon Robotics
Westborough, Massachusetts (May 2023 - August 2023)
Keywords: PyTorch, Python, C++
During the internship, I developed perception and manipulation algorithms focused on optimizing sorting procedures within Amazon's warehouse facilities. I also established an automated annotation pipeline using proprietary image-matching algorithms, resulting in significant time savings and improved operational efficiency.
Directed Research supervised by Dr. Berk Calli
Worcester, MA (June 2022 - Present)
Keywords: GGCNN, Manipulation, RealSense D435i, Antipodal grasping
Dexterous Picking
I have been engaged in a project focused on overcoming the limitations of autonomous robotic systems in
contact-rich manipulation tasks. These tasks, such as picking objects from clutter or packing/unpacking items,
demand intricate hand dexterity and precise force modulation between fingers, objects, and the environment.
For instance, picking an object from a cluttered environment might involve pushing aside other objects (decluttering).
When dealing with challenging objects, strategies like pushing against surfaces or coordinated finger motions could be
advantageous. In cases of tightly packed objects, fingers need to navigate narrow openings to establish effective gripping surfaces.
These capabilities come naturally to humans, but translating them into robotic systems has mostly been limited to proof-of-concept studies.
The primary objectives of my involvement in this project include implementing algorithms capable of
identifying optimal dexterous manipulation strategies based on the specific target object and scene.
Furthermore, I am contributing to the development of manipulation algorithms that execute the selected strategies.
This project aims to bridge the gap between the innate capabilities of humans and the capabilities of autonomous
robots in handling complex manipulation scenarios.
Benchmarking for Robotic Manipulation: The main motive of this work is the systematic assessment of the manipulation performance of vision-based grasping algorithms, in order to draw meaningful comparisons. I work on implementing the vision-based algorithms GGCNN, GGCNN2, and a ResNet variant in simulation as well as in a real environment (Franka Emika Panda arm) under specified benchmarking protocols, comparing the results and analyzing performance across several different objects. The Generative Grasping CNN (GGCNN) is a grasping algorithm that predicts the quality and pose of antipodal grasps at every pixel of a given input image. The benchmarking results for this algorithm were evaluated on different object datasets. A further direction of this research is to compare various other visual grasping models on the same setup.
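As a rough illustration of how GGCNN-style outputs are consumed, here is a minimal sketch; the checkpoint path, input files, and the per-pixel (quality, cos2θ, sin2θ, width) output ordering follow the public GGCNN implementation but are assumptions, not the exact benchmarking code:

```python
import torch
import numpy as np

# Minimal sketch of decoding GGCNN-style grasp maps from a depth image.
# Assumes a checkpoint saved as a full model object (illustrative path).
model = torch.load("ggcnn_weights.pt", map_location="cpu")
model.eval()

depth = np.load("depth_image.npy")           # HxW depth image (assumed input)
depth = (depth - depth.mean()) / 255.0       # simple normalization for the sketch
inp = torch.from_numpy(depth).float()[None, None]  # shape (1, 1, H, W)

with torch.no_grad():
    q_map, cos_map, sin_map, width_map = model(inp)

# Best grasp = pixel with the highest predicted grasp quality.
q = q_map.squeeze().numpy()
v, u = np.unravel_index(q.argmax(), q.shape)
angle = 0.5 * np.arctan2(sin_map.squeeze()[v, u].item(),
                         cos_map.squeeze()[v, u].item())
width = width_map.squeeze()[v, u].item()
print(f"grasp at pixel ({u}, {v}), angle {angle:.2f} rad, width {width:.1f}")
```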
Edge-Emerging Technology Intern
Bangalore, India (March 2022 - May 2022)
Keywords: KPI creation, Apache Superset, PySpark
Data Analytics and Clustering: During my internship, I used Apache Superset to create data visualization dashboards with client-specific features. I was also in charge of creating KPIs with built-in features. The other dimension I worked on was big-data analysis for enterprise solutions using Apache Spark and Scala. Additionally, I developed custom GeoJSON files for specific country regions.
Intern
Delhi, India (Oct 2020 - Jan 2021)
Keywords: ROS-Gazebo, PX4, MAVLink, NPNT
During the internship, I simulated drone navigation paths in ROS-Gazebo, customizing the underlying PX4 architecture and building a robust solution for the Registered Flight Module (RFM) to attain NPNT compliance. I successfully implemented custom trajectories (offboard control), parsed the permission artefact with geofence validation and monitoring, and delivered custom messages through MAVLink.
[Code]
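As an illustration of the offboard-control portion, here is a minimal MAVROS sketch; the topic and service names follow standard MAVROS conventions, but the setpoint, rates, and error handling are simplified assumptions:

```python
import rospy
from geometry_msgs.msg import PoseStamped
from mavros_msgs.srv import CommandBool, SetMode

# Minimal sketch: stream position setpoints, then switch PX4 to OFFBOARD.
rospy.init_node("offboard_sketch")
pub = rospy.Publisher("/mavros/setpoint_position/local", PoseStamped, queue_size=10)
arm = rospy.ServiceProxy("/mavros/cmd/arming", CommandBool)
set_mode = rospy.ServiceProxy("/mavros/set_mode", SetMode)

sp = PoseStamped()
sp.pose.position.z = 2.0  # hover target: 2 m above the local origin (assumed)

rate = rospy.Rate(20)  # PX4 needs setpoints faster than 2 Hz before OFFBOARD
for _ in range(100):   # pre-stream setpoints so the mode switch is accepted
    pub.publish(sp)
    rate.sleep()

set_mode(custom_mode="OFFBOARD")
arm(True)
while not rospy.is_shutdown():
    pub.publish(sp)
    rate.sleep()
```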
Summer Research Intern supervised by Dr. Vineet Vashista
Gandhinagar, India (Jun 2021 - Aug 2021)
Admittance Control for Quadcopters: During physical human-robot interaction, a robot measures external forces from the human and responds to them adequately. Admittance control seeks to impose a desired dynamic behaviour on the robot subject to the external contact forces applied by the user. The admittance control strategy has been researched deeply for robotic arms and ground robots. However, to avoid unforeseen behaviours, quadcopters too need to handle physical interaction with their environment safely and appropriately. Generally speaking, admittance control is used whenever compliance with the environment is necessary.
In this project, the external forces on the quadcopter are estimated from the position and orientation information coming from the vehicle's sensors. The admittance controller modifies the reference trajectory accordingly to accommodate this force. This trajectory is then tracked by an underlying position and attitude controller [Paper]. During my internship, I worked on simulating the admittance control strategy and external force estimation for a quadcopter using MATLAB and Simulink. I also worked on the position and attitude control equations, with state estimation via a Kalman filter, for a quadcopter in ROS-Gazebo.
[Report]
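To make the idea concrete, here is a minimal sketch of a second-order admittance filter that shifts a reference position in response to an estimated external force; the mass, damping, and stiffness gains and the time step are illustrative assumptions, not the values used in the project:

```python
import numpy as np

# Minimal 1-DoF admittance filter: M*x_dd + D*x_d + K*x = F_ext,
# where x is the deviation added to the nominal reference trajectory.
M, D, K = 1.0, 8.0, 20.0   # virtual mass, damping, stiffness (assumed gains)
dt = 0.01                  # integration step (assumed)

x, x_dot = 0.0, 0.0        # reference deviation and its velocity

def admittance_step(f_ext):
    """Euler-integrate the admittance dynamics for one time step."""
    global x, x_dot
    x_ddot = (f_ext - D * x_dot - K * x) / M
    x_dot += x_ddot * dt
    x += x_dot * dt
    return x

# Example: a constant 2 N push deflects the reference toward F/K = 0.1 m.
for _ in range(500):
    deviation = admittance_step(2.0)
print(f"steady-state deviation ≈ {deviation:.3f} m")
```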
Programming Lead, Team Automatons
Pune, India (Aug 2018 - July 2022)
Part of a team responsible for the control and navigation of robotic systems. During my four years on the team, I worked on interfacing, testing, and integrating different sensors for both automatic and manual control of robotic systems. For the 2020 theme, I developed a custom object detection and tracking algorithm for position estimation and trajectory prediction of a moving rugby ball. The approach combines the detection accuracy of a custom-trained YOLOv5 model with the speed of a KCF tracker to perform linear trajectory prediction of the ball. A Kalman filter ensures optimal estimation of the current position and increases the accuracy of the predicted future trajectory. Multi-threading is also used to concurrently detect and track the ball across consecutive frames, producing a computationally efficient approach.
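A minimal sketch of this detect-then-track pattern with a constant-velocity Kalman filter follows; the detection stub, video source, and filter tuning are illustrative assumptions, and the KCF tracker lives in opencv-contrib (the factory name varies across OpenCV versions):

```python
import cv2
import numpy as np

# Constant-velocity Kalman filter over (x, y, vx, vy), measuring (x, y).
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2
kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1

tracker = cv2.TrackerKCF_create()      # fast tracker refreshed by the detector
cap = cv2.VideoCapture("match.mp4")    # assumed video source

ok, frame = cap.read()
x, y, w, h = 100, 100, 40, 40          # stand-in for a YOLOv5 detection box
tracker.init(frame, (x, y, w, h))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    pred = kf.predict()                # predicted (x, y, vx, vy)
    if found:
        cx = np.float32(box[0] + box[2] / 2)
        cy = np.float32(box[1] + box[3] / 2)
        kf.correct(np.array([[cx], [cy]], np.float32))
    # Linear extrapolation 10 frames ahead from the filtered state.
    future = pred[:2].ravel() + 10 * pred[2:].ravel()
```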
In 2021, I worked on a detection and tracking approach for position estimation of pots and the angle of arrows on the ground. The pots were detected by HSV color thresholding and then classified on the basis of local positional parameters such as the distance of the pots from the robot and their relative position to it. The Kalman filter specifically provides better depth estimates, and thereby the position of the pots, even when the pot tables overlap. The detection-tracking algorithm for small objects like arrows combines the detection accuracy of the custom-trained YOLOv5 model with the speed of the KCF tracker to outperform either alone. Multithreading is also used to concurrently detect and track the arrows across consecutive frames, producing a computationally efficient approach compared to standalone detection with YOLOv5. I also developed an approach to effectively obtain the depth of an object using an Intel RealSense D435i depth camera.
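The HSV-thresholding step might look like the following minimal sketch; the hue bounds and minimum contour area are illustrative assumptions, tuned per arena lighting in practice:

```python
import cv2
import numpy as np

frame = cv2.imread("arena.jpg")               # assumed arena image
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Threshold a red-ish hue band; bounds are illustrative, not calibrated.
lower, upper = np.array([0, 120, 80]), np.array([10, 255, 255])
mask = cv2.inRange(hsv, lower, upper)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    if cv2.contourArea(c) > 500:              # drop small noise blobs (assumed)
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```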
For 2022, I worked on detecting a ball placed on a robot with YOLOv5, creating an algorithm with multi-threaded detection and a deep-learning-based SiamMask tracker to allow for manual intervention. Using a depth camera, I estimated the angle and distance between the target and the robot. I deployed the entire stack on an NVIDIA Jetson NX that communicated with an Arduino to perform the robotic movements.
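The angle and distance estimate from a depth camera reduces to pinhole-model geometry, roughly as in this sketch; the intrinsics are illustrative placeholders, not the D435i calibration actually used:

```python
import numpy as np

# Pinhole back-projection of a detected pixel to a bearing and range.
fx, cx = 615.0, 320.0      # focal length and principal point (assumed values)

def target_angle_distance(u, depth_m):
    """Bearing (rad) and range (m) to the pixel column u at the given depth."""
    x = (u - cx) * depth_m / fx       # lateral offset in the camera frame
    angle = np.arctan2(x, depth_m)    # 0 rad = straight ahead
    distance = np.hypot(x, depth_m)
    return angle, distance

ang, dist = target_angle_distance(u=480, depth_m=2.5)
print(f"target at {np.degrees(ang):.1f} deg, {dist:.2f} m")
```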
ABU-ROBOCON 2019: [YouTube: Journey of Team Automatons 2019 ABU-ROBOCON.]
ABU-ROBOCON 2020: [YouTube: Journey of Team Automatons 2020 ABU-ROBOCON.]
ABU-ROBOCON 2021: [YouTube: Journey of Team Automatons 2021 ABU-ROBOCON.]
ABU-ROBOCON 2022: [YouTube: Journey of Team Automatons 2022 ABU-ROBOCON.]
Projects
Image Colorization of Thermal Images for Autonomous Vehicles with Pedestrian Detection
Colorizing thermal images into RGB is a challenging task. Thermal cameras can identify objects in special conditions such as darkness, fog, and snow that the human eye cannot; to human eyes, however, a thermal image is difficult to interpret. Improving thermal image colorization is therefore a critical effort in these fields. The project proposes a method for translating thermal infrared images to visual color images, with augmentation, using a specially trained Convolutional Neural Network (CNN) architecture. A pre-trained YOLOv5 architecture is also used for pedestrian detection. The proposed models are trained on the CAMEL dataset of paired thermal and color images. The results are examined quantitatively using RMSE and qualitatively by comparing visually realistic output images with the ground truth. Bounding boxes are drawn for detected pedestrians to show the results.
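As a rough sketch of the thermal-to-RGB translation setup, here is a toy encoder-decoder; the layer sizes and training loop are assumptions and not the architecture from the project:

```python
import torch
import torch.nn as nn

# Toy encoder-decoder mapping a 1-channel thermal image to 3-channel RGB.
# Layer sizes are illustrative; the project's actual CNN differs.
class ThermalToRGB(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = ThermalToRGB()
loss_fn = nn.MSELoss()  # the RMSE used for evaluation is the root of this
thermal = torch.rand(4, 1, 256, 256)   # stand-in batch of thermal frames
rgb_gt = torch.rand(4, 3, 256, 256)    # stand-in ground-truth color frames
loss = loss_fn(model(thermal), rgb_gt)
loss.backward()
```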
Music Transcription Analysis
This project mainly examines transform algorithms to automate music transcription. We transform the audio files into spectrograms in order to analyze features from them. By analyzing the resulting graphs for different music samples, we verify which algorithm is better suited for transcription. I am working on extending this by training a model and developing a working prototype that transcribes piano notes.
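A minimal sketch of the spectrogram step, using librosa as one common choice; the file name and FFT parameters are assumptions:

```python
import librosa
import librosa.display
import matplotlib.pyplot as plt
import numpy as np

# Load audio and compute a short-time Fourier transform spectrogram.
y, sr = librosa.load("piano_sample.wav")        # assumed input file
stft = librosa.stft(y, n_fft=2048, hop_length=512)
db = librosa.amplitude_to_db(np.abs(stft), ref=np.max)

librosa.display.specshow(db, sr=sr, hop_length=512,
                         x_axis="time", y_axis="log")
plt.colorbar(format="%+2.0f dB")
plt.title("Log-frequency spectrogram")
plt.show()
```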
Automatic Panorama Stitching from Multiple Images
The basic task is to stitch multiple images into a panorama using warping and homography techniques. The following steps are applied to each pair of images and then combined across the whole set; a minimal sketch of the classical pipeline follows the list. The approach has six main steps: 1. Corner detection.
2. Adaptive Non-Maximal Suppression.
3. Extract the features i.e. Create a feature descriptor.
4. Match the features from both the images.
5. RANSAC for removing the outliers and getting the homography matrix.
6. Warping, stitching and blending the generated images.
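Here is a minimal sketch of the classical pipeline, with ORB features standing in for the custom corner-detection and ANMS stages; the feature choice, match count, and simple left-onto-right canvas are assumptions:

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")                  # assumed input pair
img2 = cv2.imread("right.jpg")

# Detect features and descriptors (ORB stands in for Harris + ANMS here).
orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors and keep the strongest correspondences.
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

# RANSAC rejects outlier matches while estimating the homography.
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Warp img1 into img2's frame and paste img2 over the overlap.
h, w = img2.shape[:2]
pano = cv2.warpPerspective(img1, H, (w * 2, h))
pano[0:h, 0:w] = img2
cv2.imwrite("panorama.jpg", pano)
```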
Phase 2: Deep learning approach. In this phase, the same task is implemented using two deep learning models, one supervised and one unsupervised. The dataset used for both models is a small subset of the MS-COCO dataset, containing 5000 images for training and 1000 images for validation.
PbLite edge detection
In this project, a simplified version of Pb (probability of boundary) edge detection was implemented. This algorithm finds boundaries by examining brightness, color, and texture information at multiple scales. The implementation was intended to outperform the standard Sobel and Canny edge detection algorithms. The algorithm has six main steps; a sketch of the filter-bank stage follows the list:
1. Filters: three different filter banks are implemented: the Oriented Derivative of Gaussian filter, the Leung-Malik filter, and the Gabor filter.
2. Create Half-Disc masks.
3. Generate Texton Map and Texton Gradient.
4. Generate Brightness Map and Brightness gradient.
5. Generate Color Map and Color Gradient.
6. Combine the feature information with the Sobel and Canny baselines (averaged).
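A minimal sketch of the oriented Derivative-of-Gaussian filter bank from step 1; the kernel size, sigma, and orientation count are illustrative assumptions:

```python
import cv2
import numpy as np

def oriented_dog_bank(n_orient=16, sigma=2.0, size=31):
    """Build an oriented Derivative-of-Gaussian filter bank by rotating
    a Sobel-differentiated Gaussian kernel. Parameters are illustrative."""
    g = cv2.getGaussianKernel(size, sigma)
    g2d = g @ g.T                              # isotropic 2-D Gaussian
    dog = cv2.Sobel(g2d, cv2.CV_64F, 1, 0)     # derivative along x
    bank = []
    for i in range(n_orient):
        angle = 180.0 * i / n_orient
        rot = cv2.getRotationMatrix2D((size // 2, size // 2), angle, 1.0)
        bank.append(cv2.warpAffine(dog, rot, (size, size)))
    return bank

# Convolve an image with each oriented filter to get texture responses,
# which later feed the Texton map clustering.
img = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)
responses = [cv2.filter2D(img, -1, k) for k in oriented_dog_bank()]
```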
Face Swapping.
This approach is divided into the following four steps; a sketch of the landmark-detection step follows the list: 1. Facial landmark detection in both images using the dlib library.
2. Face warping using Delaunay Triangulation or Thin Plate Spline (TPS); both approaches are explored in detail.
3. Creating a mask of the destination face.
4. Replacing the destination face with the source face.
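Step 1 might look like this minimal sketch; the 68-point predictor file is dlib's standard pre-trained model and must be downloaded separately, and the image name is an assumption:

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Standard 68-landmark model, downloaded separately from dlib's site.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("face.jpg")                    # assumed input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

for face in detector(gray):
    shape = predictor(gray, face)
    # Collect the 68 (x, y) landmark points for warping in the next step.
    points = np.array([(p.x, p.y) for p in shape.parts()])
    for (x, y) in points:
        cv2.circle(img, (x, y), 2, (0, 255, 0), -1)
cv2.imwrite("landmarks.jpg", img)
```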
Structure from Motion
In this project, given a set of 5 images from a monocular camera and their feature point correspondences, we reconstructed a 3D scene while also recovering the camera poses with respect to the scene; a sketch of the epipolar-geometry steps follows the list. This approach has five main steps: 1. Estimate the fundamental matrix from the given SIFT feature correspondences.
2. Estimate the essential matrix from the fundamental matrix, and estimate the candidate camera poses.
3. Refine the camera pose using the cheirality condition and linear triangulation.
4. Calculate the visibility matrix.
5. Perform bundle adjustment.
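Steps 1-3 map closely onto OpenCV primitives, roughly as in this sketch; the intrinsics and correspondence files are illustrative assumptions:

```python
import cv2
import numpy as np

# pts1, pts2: Nx2 matched SIFT correspondences between two views (assumed given).
pts1 = np.load("pts1.npy").astype(np.float64)
pts2 = np.load("pts2.npy").astype(np.float64)
K = np.array([[568.0, 0, 320.0],
              [0, 568.0, 240.0],
              [0, 0, 1.0]])                    # illustrative intrinsics

# Step 1: fundamental matrix with RANSAC to reject outlier matches.
F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)

# Step 2: essential matrix from F and the intrinsics, E = K^T F K.
E = K.T @ F @ K

# Step 3: recoverPose applies the cheirality check internally to pick
# the valid (R, t) among the four candidate decompositions of E.
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
print("rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```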
Visual Inertial Odometry
The task of this project is to perform visual-inertial odometry: fusing camera and IMU data to obtain robust odometry for a robot, exploiting the complementary characteristics of the two sensors. We use the Stereo Multi-State Constraint Kalman Filter (S-MSCKF), a state-of-the-art approach to VIO that uses a stereo camera and an IMU. It is a favored method for MAV platforms, since it functions well in GPS-denied conditions and requires a much smaller and lighter sensor package than lidar-based techniques.
Publications
Enhanced Colorization of Thermal Images for Pedestrian Detection using Deep Convolutional Neural Networks.
Rugby ball detection, tracking and future trajectory prediction algorithm
Qualitative Colorization of Thermal Infrared Images using custom Convolutional Neural Networks
Optimized detection, classification and tracking with YOLOv5, HSV color thresholding and KCF tracking.
(* denotes equal contribution)
Other Activities
Attended the Summer School on Artificial Intelligence 2021 organized by IIITH, interacting with fellow students and researchers and participating in lectures from renowned faculty in AI and CV.
[Certificate]
AI-ML Team Member of Google DSC PCCOE.
[Activities]
Successfully conducted and managed an introductory robotics workshop for 250+ first-year undergraduate students.
[Photos]
Volunteered as a teaching mentor for high school students under the 'Hour of Code' initiative by ACM India.
[Website]