About AutoDrive

I'm a member of the perception team at Buckeye AutoDrive, and we are excited to be part of the AutoDrive Challenge™ II Competition. At Buckeye AutoDrive, our mission is to develop and demonstrate an autonomous vehicle (AV) that can navigate urban driving courses using SAE J3016™ Standard Level 4 automation.

Our team consists of more than 50 student members, including undergraduates, master’s, and Ph.D. students, all working on crucial areas such as sensors, perception, planning and controls, simulation, vehicle safety and mobility, and hardware design. As a member of the perception team, I focus on developing the computer vision systems for our autonomous vehicle. Our responsibilities range from signal processing and general object classification/detection to dynamic tracking. We develop the algorithms the car needs to perceive and react to its environment, ensuring safe and effective autonomous navigation.

You're welcome to check out our Concept Design Event presentation here!

My Involvement in AutoDrive

As a member of the perception team, my involvement focuses on several key areas:
1. Lane Detection: I work on developing algorithms to accurately identify lane markings on the road. This involves using a combination of computer vision techniques and neural networks to ensure our vehicle can stay within its lane and make informed decisions about lane changes and turns. (Presentation Available)
2. HD Map Information Extraction: I contribute to extracting and integrating high-definition map data, which provides our AV with detailed information about the road network, including lane configurations, traffic signs, and other critical features.
4. Image Color Correction: Ensuring that the images captured by our vehicle's cameras are accurately represented is crucial for effective 2D detection. I develop techniques for correcting exposure and white-balance errors caused by varying lighting conditions, which improves the reliability of our object detection algorithms.
4. Traffic Light Classification: I work on classifying traffic lights, enabling our AV to recognize and respond appropriately to different traffic signals. This involves training models to distinguish between red, yellow, and green lights, as well as detecting their states in various environmental conditions.
5. Distance Extraction from LiDAR: I help process data from LiDAR sensors to accurately measure distances to objects around the vehicle. This information is vital for tasks such as obstacle detection, collision avoidance, and safe navigation through complex environments.
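To give a flavor of the lane-detection work above, here is a minimal sketch of the lane-fitting step that typically follows pixel extraction. This is an illustration, not our team's actual pipeline: it assumes candidate lane pixels have already been isolated (e.g. by thresholding or a segmentation network) and simply fits a polynomial to them.

```python
import numpy as np

def fit_lane(ys, xs, degree=2):
    """Fit x = f(y) to candidate lane pixels (image rows ys, columns xs).

    Fitting x as a function of y keeps near-vertical lane markings
    well-conditioned, which they would not be the other way around.
    """
    return np.polyfit(ys, xs, degree)

def lane_x_at(coeffs, y):
    """Evaluate the fitted lane polynomial at image row y."""
    return np.polyval(coeffs, y)

# Synthetic lane pixels along x = 0.001*y^2 + 0.2*y + 100
ys = np.arange(0, 400, 10, dtype=float)
xs = 0.001 * ys**2 + 0.2 * ys + 100
coeffs = fit_lane(ys, xs)
```

Evaluating the fit at any image row then gives the lane's lateral position, which downstream planning can use to keep the vehicle centered.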
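For the color-correction item, one classic baseline is the gray-world white balance: assume the average scene color should be neutral gray, and scale each channel accordingly. This is a textbook sketch of the idea, not necessarily the technique we deploy.

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance for a float H x W x 3 image in [0, 1].

    Scales each color channel so that its mean matches the overall mean
    intensity, removing a global color cast.
    """
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means
    return np.clip(img * gain, 0.0, 1.0)

# A flat image with a green cast: channel means (0.3, 0.6, 0.3)
img = np.ones((4, 4, 3)) * np.array([0.3, 0.6, 0.3])
balanced = gray_world_balance(img)
```

After balancing, all three channel means equal the original overall mean, so a downstream detector sees more consistent colors across lighting conditions.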
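For traffic light classification, a simple rule-based baseline maps the lamp's average color to a hue band. The thresholds below are illustrative only; as noted above, our actual classifier is a trained model, which handles the varied lighting conditions that break fixed hue bands.

```python
import colorsys

def classify_light(r, g, b):
    """Classify an average lamp color (0-255 RGB) as red/yellow/green.

    Converts to HSV and checks which illustrative hue band the color
    falls into; returns "unknown" outside those bands.
    """
    h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
    deg = h * 360.0
    if deg < 20 or deg > 340:
        return "red"
    if 40 <= deg <= 70:
        return "yellow"
    if 90 <= deg <= 160:
        return "green"
    return "unknown"
```

A learned model replaces these hand-set bands with features that stay stable at dusk, in glare, and under partial occlusion.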
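Finally, the LiDAR distance-extraction item can be sketched as a small geometric computation: each return is an (x, y, z) point in the sensor frame, and the distance to the nearest obstacle in front of the vehicle is the minimum Euclidean norm inside a forward cone. The field-of-view parameter and frame convention here (x pointing forward) are illustrative assumptions.

```python
import math

def nearest_obstacle_distance(points, fov_deg=60.0):
    """Distance to the nearest LiDAR return inside a forward cone.

    points: iterable of (x, y, z) in the sensor frame, x pointing forward.
    fov_deg: full horizontal opening angle of the cone (assumed value).
    Returns None if no point falls inside the cone.
    """
    half = math.radians(fov_deg / 2.0)
    best = None
    for x, y, z in points:
        if x <= 0:
            continue  # behind the sensor
        if abs(math.atan2(y, x)) > half:
            continue  # outside the forward cone
        d = math.sqrt(x * x + y * y + z * z)
        if best is None or d < best:
            best = d
    return best
```

In a real pipeline this runs on clustered point-cloud segments rather than raw returns, but the underlying distance computation is the same.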