
Principal Engineer Inference Stack

Advanced Micro Devices, Inc.
$226,400.00/Yr.-$339,600.00/Yr.
United States, California, Santa Clara
2485 Augustine Drive (Show on map)
Feb 03, 2026


WHAT YOU DO AT AMD CHANGES EVERYTHING

At AMD, our mission is to build great products that accelerate next-generation computing experiences, from AI and data centers to PCs, gaming and embedded systems. Grounded in a culture of innovation and collaboration, we believe real progress comes from bold ideas, human ingenuity and a shared passion to create something extraordinary. When you join AMD, you'll discover the real differentiator is our culture. We push the limits of innovation to solve the world's most important challenges, striving for execution excellence while being direct, humble, collaborative, and inclusive of diverse perspectives. Join us as we shape the future of AI and beyond. Together, we advance your career.

THE ROLE:

AMD is looking for a strategic software engineering lead who is passionate about improving the performance of key applications and benchmarks. You will be a member of a core team of incredibly talented industry specialists and will work with the very latest hardware and software technology.

THE PERSON:

The ideal candidate is passionate about software engineering and has the leadership skills to drive sophisticated issues to resolution, communicating effectively and working well with different teams across AMD.

KEY RESPONSIBILITIES:

  • Develop techniques for optimizing scale-up and scale-out inference.
  • Develop methods and tooling to utilize dynamic resources in service of inference.
  • Support proliferation of the ROCm ecosystem.

PREFERRED EXPERIENCE:

  • Expertise in the Kubernetes (K8s) ecosystem, especially as it pertains to large-scale inference.
  • Operational experience with at least one of SGLang or vLLM, and with KServe or llm-d. Experience running inference as a service can substitute for experience with frameworks such as KServe or llm-d.
  • Expertise with techniques used to optimize inference, such as distributed KV cache, disaggregation, and request scheduling.
  • Ability to write high-quality code with keen attention to detail. Preferred languages are Go and Python.
  • Experience with modern concurrent programming.
  • Effective communicator.
  • Prior experience roadmapping deeply technical areas is highly valuable.

ACADEMIC CREDENTIALS:

  • Bachelor's or Master's degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent.

This role is not eligible for visa sponsorship.

#LI-G11

#LI-HYBRID

Benefits offered are described: AMD benefits at a glance.

AMD does not accept unsolicited resumes from headhunters, recruitment agencies, or fee-based recruitment services. AMD and its subsidiaries are equal opportunity, inclusive employers and will consider all applicants without regard to age, ancestry, color, marital status, medical condition, mental or physical disability, national origin, race, religion, political and/or third-party affiliation, sex, pregnancy, sexual orientation, gender identity, military or veteran status, or any other characteristic protected by law. We encourage applications from all qualified candidates and will accommodate applicants' needs under the respective laws throughout all stages of the recruitment and selection process.

AMD may use Artificial Intelligence to help screen, assess or select applicants for this position. AMD's "Responsible AI Policy" is available here.

This posting is for an existing vacancy.
