Sustainable Talent
AI Security Engineer
At a Glance
- Location
- Santa Clara, California, United States
- Experience
- 2+ years
- Posted
- March 18, 2026
Key Requirements
Domain Knowledge
- Legal
- Regulatory
Requirements
Bachelor’s or Master’s degree in Computer Science, a related field, or equivalent experience.
2+ years of work experience as a Machine Learning Engineer, Deep Learning Scientist, or a similar role, with a consistent record of successfully delivering ML solutions.
Strong programming skills in languages such as Python, and experience with frameworks like TensorFlow, PyTorch, or scikit-learn.
Proficiency in data manipulation, analysis, and visualization using tools like NumPy and pandas.
Deep understanding of machine learning algorithms, statistical models, and data structures.
Familiarity with software development practices and version control systems (e.g., Git).
Compensation & Benefits
$90/hr - $130/hr, based on factors such as experience, education, and location. We provide full benefits, PTO, and an amazing company culture!
As a Machine Learning Engineer, you'll work alongside NVIDIA’s research and engineering teams focused on AI Safety for LLMs, including multilingual, multimodal, and reasoning models. We value expertise in data science paired with a robust data engineering foundation. This role is focused on assessing and improving the safety and inclusivity of our LLMs in a scalable fashion.
We seek someone proficient in programming and scripting for comprehensive data manipulation, analysis, and model fine-tuning.
We believe in proactive problem-solving, minimal supervision, and being exceptional teammates who collaborate, think, and learn as one unit. Let's make a difference together!
Responsibilities
Develop datasets and moderator models for evaluating LLMs and end-to-end systems for Content Safety and ML Fairness. These models can be text-to-text or multimodal-to-text.
Develop datasets for training LLMs with supervised fine-tuning (SFT) and reinforcement learning (RL) techniques, for Content Safety, ML Fairness, Security, and more.
Research and implement cutting-edge techniques for bias detection and mitigation in LLMs and systems.
Define and track key metrics for responsible LLM behavior and usage.
Follow best practices for automation, monitoring, scaling, and safety.
Contribute to our repositories and develop safety tools to help ML teams be more effective.