The following list is a compilation of all the projects I've completed so far. They range from computer-vision algorithms to microcontroller-based prototypes that use sensor fusion for various applications. These projects have shaped my career and helped me find my passion for robotics.

It is worth noting that the quality and nature of my work have evolved noticeably over time, with the most recent projects taking precedence in the blog.


A sophisticated aim assist tool was developed specifically for the Call of Duty mobile game. Extensive gameplay footage was recorded and categorized into 'player' and 'friendly' labels. This data was utilized to train a YoloV8 model, enabling it to accurately detect both enemies and teammates. Once enemies were identified, a bounding box was drawn around their heads to optimize damage output.
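The head-targeting step can be sketched as simple post-processing of each detection box. The helper below is a hypothetical illustration: the function name and the fraction of the box treated as the head are assumed tuning values, not values taken from the project.

```python
def head_region(x1, y1, x2, y2, head_frac=0.25):
    """Given a detected player's bounding box, return a smaller box
    around the upper portion where the head is likely to be.
    head_frac (an assumed tuning value) is the fraction of the box
    height kept from the top."""
    w = x2 - x1
    h = y2 - y1
    # Keep the middle half of the width and the top head_frac of the height.
    hx1 = x1 + w // 4
    hx2 = x2 - w // 4
    hy1 = y1
    hy2 = y1 + int(h * head_frac)
    return hx1, hy1, hx2, hy2

# Example: a 100x200 detection box
print(head_region(0, 0, 100, 200))  # (25, 0, 75, 50)
```

In practice this would run on every 'player' box that the YoloV8 model returns, and the resulting smaller box becomes the aim target.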

To showcase the effectiveness of this tool, the game was emulated on a computer, and a program was created to automatically target enemy heads every 5 seconds. Users were still required to maneuver their character during gameplay. This demonstration provided valuable insights for future developments and potential enhancements. The project was named Dead Eye AI; more details can be found in this report.

SKILLS : Python | YoloV8 | Data Processing | PyTorch | OpenCV


The objective of this project was to develop a simple script capable of projecting a 2D image onto a known 3D surface in real time. To accomplish this, I worked through all of the necessary stages, beginning with camera calibration. Initially, a 9x6 chessboard was used to calibrate the camera, and the camera coordinates were then transformed into image coordinates using a series of affine transformations written by hand in C++.
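The transformation that calibration recovers can be sketched with the standard pinhole camera model: a world point is moved into the camera frame, multiplied by the intrinsic matrix, and divided by depth. The sketch below is in Python rather than the project's C++ for brevity, and the intrinsic values are illustrative placeholders, not the calibrated ones.

```python
import numpy as np

# Pinhole camera model: pixel = K @ (R @ X + t), then divide by depth.
# The intrinsics below are illustrative placeholders, not calibrated values.
K = np.array([[800.0,   0.0, 320.0],   # fx, skew, cx
              [  0.0, 800.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])

def project(X, R=np.eye(3), t=np.zeros(3)):
    """Project a 3D world point into pixel coordinates."""
    Xc = R @ X + t           # world frame -> camera frame
    uvw = K @ Xc             # camera frame -> homogeneous image coords
    return uvw[:2] / uvw[2]  # perspective divide

# A point 2 m straight ahead lands at the principal point.
print(project(np.array([0.0, 0.0, 2.0])))  # [320. 240.]
```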

To detect the corners of the chessboard, edge detection was employed, and the extrinsic parameters were continuously fed into the program to identify each corner. To project image coordinates onto the real world, a conversion from image coordinates to real-world coordinates was performed using Perspective-n-Point (PnP) pose computation. A simple bounding area was then drawn around all four corners, and subsequently, a basic letter "M" was projected onto the chessboard. Following that, a random binary image was captured and converted into a point cloud. The point cloud was shrunk to fit within the chessboard and reprojected onto the real world. A detailed report can be found here.
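The shrink-to-fit step before reprojection can be sketched as a scale-and-translate of the 2D point cloud into the board's extent. This is a generic illustration, not the project's C++ code; the function name and the margin value are assumptions.

```python
import numpy as np

def fit_points_to_board(points, board_w, board_h, margin=0.1):
    """Scale and translate a 2D point cloud (e.g. pixels sampled from a
    binary image) so it fits inside a board of size board_w x board_h.
    margin is an assumed fractional border kept around the points."""
    pts = np.asarray(points, dtype=float)
    mins, maxs = pts.min(axis=0), pts.max(axis=0)
    span = np.where(maxs - mins == 0, 1.0, maxs - mins)
    # Normalise to [0, 1], then scale into the board minus the margin.
    unit = (pts - mins) / span
    scale = np.array([board_w, board_h]) * (1 - 2 * margin)
    offset = np.array([board_w, board_h]) * margin
    return unit * scale + offset

# A 10x10 square squeezed into a 9x6 board with a 10% border.
square = [(0, 0), (10, 0), (10, 10), (0, 10)]
print(fit_points_to_board(square, 9, 6))
```

The fitted points would then be placed on the board plane and projected back into the image with the recovered pose.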

SKILLS : C++ | Camera Calibration | Feature Extraction | AR | Edge Detection | Coordinate Geometry

Letter "M" re-projected 

Input binary image 

 Re-projection of the input 


This project explored various techniques for edge and feature detection. I developed a robust algorithm to analyze a set of overlapping images and, if they were arranged correctly, stitch them into a panorama. The algorithm began with basic image pre-processing to enhance image quality. Following this, corner features were extracted using the Harris feature detector.
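The Harris detector scores each pixel by how strongly the image gradient varies in two directions within a local window. A minimal sketch of the response computation is shown below, in Python rather than the project's MATLAB, and using a simple 3x3 box window instead of a Gaussian for brevity.

```python
import numpy as np

def harris_response(img, k=0.05):
    """Minimal Harris corner response for a 2D float image.
    k is the standard Harris constant; a 3x3 box window stands in
    for the usual Gaussian weighting."""
    Iy, Ix = np.gradient(img.astype(float))
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box3(a):  # 3x3 box filter via edge padding and shifted sums
        p = np.pad(a, 1, mode="edge")
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3))

    Sxx, Syy, Sxy = box3(Ixx), box3(Iyy), box3(Ixy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace ** 2

# A white square on black: the strongest responses sit at its corners,
# while responses along the straight edges are negative.
img = np.zeros((20, 20))
img[5:15, 5:15] = 1.0
R = harris_response(img)
corner = np.unravel_index(np.argmax(R), R.shape)
print(corner)
```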

Once the features were identified, they were compared with features from the other images to measure their degree of similarity. Based on the matching features, pairs of images were established, and a series of transformations was applied to align corresponding pixels. Once the appropriate transformation was found, the images were warped into a common frame to generate a panorama. To assess the algorithm's robustness, panoramas were created from images with varying degrees of overlap. Click here to read more.
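The alignment step can be sketched with a direct linear transform (DLT): given at least four matched point pairs, it estimates the 3x3 homography that maps one image's pixels into the other's frame. This is a generic sketch of that step, not the project's MATLAB code.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography mapping src -> dst points (>= 4
    correspondences) with the direct linear transform: stack two
    linear constraints per match and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply a homography to a 2D point (homogeneous multiply + divide)."""
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Matches related by a pure translation of (5, 3): H should recover it.
src = [(0, 0), (1, 0), (1, 1), (0, 1)]
dst = [(5, 3), (6, 3), (6, 4), (5, 4)]
H = homography_dlt(src, dst)
print(np.round(apply_h(H, (0.5, 0.5)), 3))  # [5.5 3.5]
```

With the homography in hand, one image is warped onto the other's canvas to build the panorama; in practice the matches come from the Harris features described above.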

SKILLS : MATLAB | Image Processing | Feature Extraction

Final Panorama 

Input images vs features detected