Autonomous Driving Vehicle Perception Engineer
About the Role
Quest Global delivers world-class end-to-end engineering solutions by leveraging our deep industry knowledge and digital expertise. By bringing together technologies and industries, alongside the contributions of diverse individuals and their areas of expertise, we are able to solve problems better, faster. This multi-dimensional approach enables us to solve the most critical and large-scale challenges across the aerospace & defense, automotive, energy, hi-tech, healthcare, medical devices, rail and semiconductor industries.
We are looking for humble geniuses who believe that engineering has the potential to make the impossible possible; innovators who are not only inspired by technology and innovation, but also perpetually driven to design, develop, and test as a trusted partner for Fortune 500 customers. As a team of remarkably diverse engineers, we recognize that what we are really engineering is a brighter future for us all. If you want to contribute to meaningful work and be part of an organization that truly believes that when you win, we all win, and when you fail, we all learn, then we're eager to hear from you. The achievers and courageous challenge-crushers we seek have the following characteristics and skills:
What You Will Do:
- Design and implement advanced perception algorithms for autonomous vehicles using LiDAR, cameras, radar, and GNSS.
- Develop and optimize sensor fusion techniques to combine data from multiple sensors, improving the accuracy and reliability of perception systems.
- Create algorithms for object detection, tracking, semantic segmentation, and classification from 3D point clouds (LiDAR) and camera data.
- Work on Simultaneous Localization and Mapping (SLAM) algorithms, including Graph SLAM, LIO-SAM, and visual-inertial SLAM.
- Develop sensor calibration techniques (intrinsic and extrinsic) and coordinate transformations between sensors.
- Participate in real-time systems design and optimization to meet the high-performance requirements of autonomous driving.
- Work with ROS2 for integration and deployment of perception algorithms.
- Develop, test, and deploy machine learning models for perception tasks such as object detection and segmentation.
- Collaborate with cross-functional teams, including software engineers, data scientists, and hardware teams, to deliver end-to-end solutions.
- Stay up-to-date with industry trends and emerging technologies to innovate and improve perception systems.
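To give a concrete flavor of the calibration and coordinate-transformation work listed above, here is a minimal, illustrative sketch (not code from Quest Global): applying an extrinsic transform, given as a unit quaternion and a translation, to map LiDAR-frame points into the vehicle frame. The function names and mounting offset are assumptions for illustration only.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(y*z + w*x),     2*(x*z - w*y),     1 - 2*(x*x + y*y)],
    ])

def transform_points(points, q, t):
    """Apply an extrinsic transform (rotation q, translation t) to Nx3 points,
    e.g. mapping LiDAR-frame points into the vehicle frame."""
    R = quat_to_rot(q)
    return points @ R.T + t

# Hypothetical mounting: identity rotation, LiDAR 1.5 m forward of the vehicle origin.
pts_lidar = np.array([[10.0, 0.0, 0.0]])
pts_vehicle = transform_points(pts_lidar, (1.0, 0.0, 0.0, 0.0), np.array([1.5, 0.0, 0.0]))
```

In a real stack, the quaternion and translation would come from an extrinsic calibration procedure rather than being hard-coded, and the same machinery chains transforms across the camera, radar, and GNSS frames.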
What You Will Bring:
- 3+ years of experience in sensor calibration, multi-sensor fusion, or related domains.
- Strong foundation in linear algebra, 3D geometry, coordinate frames, quaternions, probability, Bayesian filtering, and data association.
- Hands-on experience with intrinsic and extrinsic calibration of LiDAR, cameras, and radar, including geometric calibration, coordinate transforms, and sensor synchronization.
- Proven experience with perception algorithms for autonomous systems, particularly in the areas of LiDAR, camera, radar, GNSS, or other sensor modalities.
- Deep understanding of LiDAR technology, point cloud data structures, and processing techniques; experience with PCL or Open3D.
- Proficiency in sensor fusion for combining data from LiDAR, camera, radar, and GNSS, including handling time synchronization and motion distortion.
- Solid background in computer vision techniques; experience with OpenCV and object detection models such as YOLO, Faster R-CNN, or SSD.
- Experience with deep learning frameworks (TensorFlow or PyTorch) for object detection and segmentation tasks.
- Hands-on experience with multi-object tracking algorithms such as SORT, DeepSORT, Kalman Filters, UKF, IMM, or JPDA.
- Strong programming skills in C++ and Python; familiarity with geometric optimization libraries.
- Familiarity with ROS2 for perception-based autonomous systems development.
- Experience with parallel computing for real-time performance optimization (e.g., CUDA, OpenCL).
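As a rough illustration of the Bayesian-filtering and tracking background listed above, here is a minimal constant-velocity Kalman filter step for a 1D position track. It is a sketch only; the noise parameters and state layout are assumptions, and production trackers (SORT, IMM, JPDA) build considerably more machinery around this core.

```python
import numpy as np

def kalman_step(x, P, z, dt=0.1, q=1e-2, r=0.5):
    """One predict/update cycle of a constant-velocity Kalman filter.
    State x = [position, velocity]; z is a noisy position measurement.
    q and r are assumed process/measurement noise levels."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity motion model
    H = np.array([[1.0, 0.0]])             # we observe position only
    Q = q * np.eye(2)                      # process noise covariance
    R = np.array([[r]])                    # measurement noise covariance

    # Predict
    x = F @ x
    P = F @ P @ F.T + Q

    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving at 1 m/s from noiseless position measurements.
x, P = np.zeros(2), np.eye(2)
for k in range(1, 101):
    x, P = kalman_step(x, P, np.array([k * 0.1]))
```

After enough steps the velocity estimate converges toward the true 1 m/s, even though only position is measured; the UKF, IMM, and JPDA variants named above extend this same predict/update structure to nonlinear models, model switching, and ambiguous data association.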
Pay Range: $80,000-$100,000 a year
Compensation decisions are made based on factors including experience, skills, education, and other job-related factors, in accordance with our internal pay structure. We also offer a comprehensive benefits package, including health insurance, paid time off, and a retirement plan.