Computer Vision & Tracking Engineer
Role Overview
We are looking for a Computer Vision & Tracking Engineer to help build the perception system at the heart of our stationary, ground-based counter-drone defense platform. You will develop and deploy models that detect, classify, and track aerial threats in real time using visual and multi-modal sensor data. Operating at the intersection of deep learning, computer vision, and real-time robotics, you’ll help us create a robust, scalable perception stack that performs reliably in the field, even under adversarial conditions.
As part of our fast-moving machine learning team, you’ll work end-to-end: from dataset curation and model design to deployment on edge platforms with tight performance constraints. If you’re excited about building technology that keeps people and infrastructure safe, this role offers an opportunity to work on real-world systems with mission-critical impact.
Key Responsibilities
- Design and develop perception models for drone detection, classification, and multi-object tracking using visual and/or multi-modal sensor inputs.
- Own the full ML lifecycle, including data preprocessing, labeling pipelines, model training, performance evaluation, and continuous improvement.
- Deploy models to edge platforms such as NVIDIA Jetson, optimizing for real-time inference with tools such as TensorRT or ONNX Runtime.
- Develop tracking algorithms, including object association, motion prediction, and trajectory smoothing across multiple time steps (a minimal sketch of one update step follows this list).
- Integrate models into real-time robotics systems, working closely with embedded, hardware, and robotics teams to ensure deterministic performance and minimal latency.
- Test and evaluate models in simulation and field environments, iterating to improve robustness under variable lighting, weather, and cluttered backgrounds.
- Prototype and research new approaches in vision-based tracking, temporal modeling, and sensor fusion to stay ahead of adversarial drone behaviors.
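To give a concrete flavor of the tracking work above, the sketch below shows one tracking-by-detection update step: a constant-velocity Kalman filter predicts each existing track forward, and new detections are associated to those predictions with an IoU cost solved by the Hungarian algorithm. The `Track` and `associate` names, the noise constants, and the list-of-boxes input format are illustrative assumptions for this posting, not a description of our production stack.

```python
# Minimal tracking-by-detection sketch: predict with a constant-velocity Kalman
# filter, then match detections to predicted boxes via Hungarian assignment on IoU.
# All class names and tuning constants here are illustrative placeholders.
import numpy as np
from scipy.optimize import linear_sum_assignment


class Track:
    """Constant-velocity Kalman track; state is [cx, cy, vx, vy] over box centers."""

    def __init__(self, box, dt=1.0):
        cx, cy = (box[0] + box[2]) / 2, (box[1] + box[3]) / 2
        self.w, self.h = box[2] - box[0], box[3] - box[1]
        self.x = np.array([cx, cy, 0.0, 0.0])                 # state estimate
        self.P = np.eye(4) * 10.0                             # state covariance
        self.F = np.eye(4); self.F[0, 2] = self.F[1, 3] = dt  # constant-velocity model
        self.H = np.eye(2, 4)                                 # we observe [cx, cy] only
        self.Q, self.R = np.eye(4) * 0.01, np.eye(2) * 1.0    # process / measurement noise

    def predict(self):
        """Advance the state one frame and return the predicted box."""
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.box()

    def update(self, box):
        """Standard Kalman correction using the matched detection's center."""
        z = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2])
        self.w, self.h = box[2] - box[0], box[3] - box[1]
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P

    def box(self):
        cx, cy = self.x[0], self.x[1]
        return np.array([cx - self.w / 2, cy - self.h / 2, cx + self.w / 2, cy + self.h / 2])


def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def associate(tracks, detections, iou_min=0.3):
    """One association step: Hungarian matching on (1 - IoU); detections is a list of boxes."""
    if not tracks or len(detections) == 0:
        return [], list(range(len(detections)))
    predictions = [t.predict() for t in tracks]
    cost = np.array([[1.0 - iou(p, d) for d in detections] for p in predictions])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= 1.0 - iou_min]
    matched = {c for _, c in matches}
    for r, c in matches:
        tracks[r].update(detections[c])
    return matches, [i for i in range(len(detections)) if i not in matched]
```

Production trackers such as Deep SORT or ByteTrack layer appearance embeddings, confidence-aware matching, and track lifecycle management on top of exactly this predict-associate-update loop.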
Required Qualifications
- Strong background in computer vision and deep learning, with specific experience in object detection, multi-object tracking, or temporal modeling.
- Proficiency in Python, with experience using PyTorch or TensorFlow for model development.
- Experience with tracking frameworks such as Deep SORT or ByteTrack.
- Familiarity with OpenCV and image processing pipelines.
- Experience deploying models on embedded or edge hardware (e.g., Jetson Nano/Xavier) with optimization tools such as TensorRT or ONNX Runtime (a minimal export sketch follows this list).
- Understanding of tracking-by-detection pipelines, data association techniques, and Kalman or particle filters.
- Exposure to aerial or drone-related datasets or image domains.
- BS/MS/PhD in Computer Science, Electrical Engineering, Robotics, or a related field.
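As an illustration of the deployment experience we have in mind, the sketch below traces a PyTorch model to ONNX and notes the `trtexec` command typically used to compile that graph into a TensorRT engine on a Jetson. The model, file names, input resolution, and opset are placeholders rather than our actual pipeline, and flags will vary with the TensorRT version on the target.

```python
# Minimal PyTorch -> ONNX export sketch for an edge deployment path.
# The model below is a stand-in; any torch.nn.Module taking a single image
# tensor exports the same way.
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None)  # placeholder network
model.eval()

dummy = torch.randn(1, 3, 640, 640)            # one 640x640 RGB frame
torch.onnx.export(
    model,
    dummy,
    "detector.onnx",
    input_names=["images"],
    output_names=["logits"],
    opset_version=17,
    dynamic_axes={"images": {0: "batch"}},     # allow variable batch at runtime
)

# On the Jetson, the ONNX graph is then compiled into a TensorRT engine, e.g.:
#   trtexec --onnx=detector.onnx --fp16 --saveEngine=detector.engine
```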
Preferred Qualifications
- Experience building ML pipelines that include multi-sensor fusion (e.g., combining camera, radar, RF, or thermal inputs).
- Familiarity with real-time systems, including latency profiling and performance debugging on embedded compute platforms (a minimal measurement sketch follows this list).
- Previous work in defense, aerospace, or other safety-critical domains.
- Knowledge of synthetic data generation, labeling strategies, or active learning approaches for perception systems.
- Ability to write production-quality C++ or collaborate with embedded engineers on model deployment.
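For the latency-profiling item above, the sketch below shows the kind of measurement we mean: synchronize the GPU around each inference call and report percentiles rather than a mean, since tail latency is what matters for a real-time tracker. The model and input size are placeholders.

```python
# Minimal on-device latency measurement sketch: warm up, time each forward pass
# with GPU synchronization, and report p50/p99. Model and sizes are illustrative.
import time
import torch
import torchvision

model = torchvision.models.mobilenet_v3_small(weights=None).eval()
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
frame = torch.randn(1, 3, 640, 640, device=device)

latencies_ms = []
with torch.no_grad():
    for _ in range(50):                     # warm-up passes, not timed
        model(frame)
    for _ in range(200):
        if device == "cuda":
            torch.cuda.synchronize()        # ensure prior GPU work is finished
        start = time.perf_counter()
        model(frame)
        if device == "cuda":
            torch.cuda.synchronize()        # wait for this inference to complete
        latencies_ms.append((time.perf_counter() - start) * 1e3)

latencies_ms.sort()
print(f"p50={latencies_ms[len(latencies_ms) // 2]:.2f} ms  "
      f"p99={latencies_ms[int(len(latencies_ms) * 0.99)]:.2f} ms")
```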
Who You Are (Startup Mindset)
- Autonomous and accountable: You take ownership of your work and see projects through from ideation to deployment.
- Adaptable: You thrive in fast-paced environments and embrace uncertainty as part of innovation.
- Mission-driven: You’re excited to apply your skills to a real-world problem with national security and public safety implications.
- Collaborative: You work well across disciplines, including robotics, systems, and embedded teams.