xAI

Network Engineer - ML Infrastructure (High-Speed Interconnects)


At a Glance

Location
United States
Experience
8+ years
Compensation
Annual Base Salary $180,000 - $440,000 USD
Posted
February 19, 2026

Key Requirements

Domain Knowledge

  • Engineering

Requirements

8+ years of hands-on experience designing, deploying, and operating high-speed copper and optical interconnects, preferably in a module design role or in a hyperscale datacenter environment.

Master's or PhD degree in Electrical Engineering, Photonics, or Physics.

Deep knowledge of PAM4 SerDes performance, equalization, jitter, and crosstalk.

Solid operational understanding of FEC, retimers, TIAs, and drivers.

Deep knowledge of optical link budget analysis and performance metrics including TDECQ, OMA, Tcode, stressed receiver sensitivity and associated diagnostics.

Expertise in transceiver components (CW lasers, SiPh PICs, EMLs, DSPs, and passive subassemblies), their failure modes, and their characterization.
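To give a flavor of the link budget analysis named above, here is a minimal sketch of tallying transmit power against receiver sensitivity and link penalties. All values (TX OMA, sensitivity, connector and fiber losses, TDECQ penalty) are illustrative assumptions, not figures from any product or standard:

```python
# Minimal optical link budget sketch. Every number below is an
# illustrative assumption, not taken from a transceiver datasheet.

def link_margin_db(tx_power_dbm, rx_sensitivity_dbm, penalties_db):
    """Remaining margin (dB) after subtracting all link penalties."""
    return tx_power_dbm - sum(penalties_db) - rx_sensitivity_dbm

tx_oma_dbm = 1.0        # assumed transmitter OMA
rx_sens_dbm = -6.0      # assumed stressed receiver sensitivity
penalties_db = [
    0.5 * 2,            # two connectors, 0.5 dB each (assumed)
    0.4,                # ~0.4 dB/km fiber attenuation x 1 km (assumed)
    1.5,                # transmitter dispersion / TDECQ penalty (assumed)
]

margin = link_margin_db(tx_oma_dbm, rx_sens_dbm, penalties_db)
print(f"Link margin: {margin:.2f} dB")
```

A link is typically considered viable only with positive margin left over for aging and temperature drift; the interesting engineering is in how each penalty term is measured and bounded.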

Compensation & Benefits

Work on the interconnect fabric of the world’s largest and most advanced AI systems.

Influence the physical-layer design of multi-billion-dollar-scale compute clusters.

Opportunity to shape copper and optics strategy for the 1.6T → 3.2T transition.

Direct impact on the next wave of frontier AI models.

Base salary is just one part of our total rewards package at X, which also includes equity, comprehensive medical, vision, and dental coverage, access to a 401(k) retirement plan, short & long-term disability insurance, life insurance, and various other discounts and perks.

Responsibilities

xAI is building at a furious pace with the latest compute and switching hardware to help people understand the universe. We are looking for exceptional ML Infrastructure Engineers with deep expertise in high-speed interconnect technologies to design, build, and optimize the network fabric that powers large-scale AI training and inference clusters. This strategic role will drive innovation in high-bandwidth, low-latency, power-efficient interconnects critical for AI/ML clusters based on advanced computing platforms.

You will have the opportunity to work on all modalities of interconnects connecting GPUs and switches both inside and between data centers, including our primary front and backend networks that train Grok and that customers use for inference. Engineers will own all aspects from design and development to build and operations. You will be expected to define and improve team processes and to contribute to scaling and maintenance efforts.

You will focus on the physical layer and system-level integration of copper (ACC, AEC, CPC) and optical (FRO, LRO/TRO, LPO, AOC, CPO) interconnects that directly determine the performance, power efficiency, scale, and cost of next-generation AI/ML clusters. This is a highly technical, hands-on role bridging ML cluster requirements with cutting-edge interconnect hardware, ideal for engineers who love both large-scale AI systems and the physics/engineering of 200G+ SerDes, PAM4, photonics, signal integrity, and diagnostics.

Design, validate, and productize high-speed copper and optical connectivity solutions for AI clusters (100k+ GPU scale).

Own vendor due diligence and onboarding for new 1.6T products, including AECs and pluggable optical transceivers (DR4/8, FR4), with rigorous bring-up and characterization.
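One routine bring-up check of the kind referenced above is estimating pre-FEC BER from FEC corrected-bit counters and comparing it against a pass threshold. The sketch below is illustrative: the lane rate, counter value, and window are assumed, and the RS(544,514) "KP4" threshold shown is a commonly cited ballpark, not a normative limit:

```python
# Hypothetical bring-up check: pre-FEC BER from FEC error counters.
# Lane rate, counter readings, and threshold are illustrative assumptions.

def pre_fec_ber(corrected_bits: int, total_bits: int) -> float:
    """Pre-FEC bit error ratio estimated from corrected-bit counters."""
    return corrected_bits / total_bits

window_s = 60
lane_rate_bps = 212.5e9               # ~200G-per-lane PAM4 (assumed)
total_bits = int(lane_rate_bps * window_s)
corrected = 1_000_000                 # corrected bits in the window (assumed)

KP4_THRESHOLD = 2.4e-4                # commonly cited RS(544,514) ballpark
ber = pre_fec_ber(corrected, total_bits)
verdict = "PASS" if ber < KP4_THRESHOLD else "FAIL"
print(f"pre-FEC BER = {ber:.2e} -> {verdict}")
```

In practice characterization sweeps this measurement across temperature, lanes, and equalization settings rather than trusting a single counter read.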