
Staff Software Engineer, LLM Serving and GPU Performance, Google Distributed Cloud

Google

Kirkland · Flexible · Full-time · $207k–$300k/yr

About the role

Google Cloud’s mission is to make every business successful through AI by combining cutting-edge technology, infrastructure, and talent. AI/ML software engineers in Cloud bridge the gap between pioneering models and products reaching billions of users. Our talent density and AI-powered tools drive rapid development, rooted in a culture of empowerment and a bias for action. In this role, you aren’t just building technology; you’re shaping the frontier of enterprise AI and driving the evolution of advanced models.

We engineer the future of Artificial Intelligence (AI) serving infrastructure at the intersection of Large Language Models (LLMs) and high-performance computing. Our team drives foundational gains in efficiency, latency, and throughput to scale Google’s most advanced models globally. In this role, you will build tools to maximize Large Language Model (LLM) performance on cutting-edge Graphics Processing Unit (GPU) platforms. You will develop next-generation disaggregated serving architectures and enable the seamless deployment of Gemini across Google’s products and Cloud infrastructure.

Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.

The US base salary range for this full-time position is $207,000–$300,000 + bonus + equity + benefits. Our salary ranges are determined by role, level, and location. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training. Your recruiter can share more about the specific salary range for your preferred location during the hiring process.

Please note that the compensation details listed in US role postings reflect the base salary only, and do not include bonus, equity, or benefits. Learn more about benefits at Google.

Responsibilities

  • Build infrastructure and tooling for deep profiling, benchmarking, and analysis of large language models (LLMs) on graphics processing unit (GPU) accelerators.
  • Identify and resolve bottlenecks across compute, memory, and networking to maximize hardware efficiency.
  • Prototype serving techniques, including disaggregated serving, speculative decoding, and optimized key-value (KV) cache management.
  • Design and implement enhancements to the serving stack to improve latency, throughput, and resource utilization.
  • Partner with research, engineering, and site reliability engineering (SRE) teams to deploy models into production.

Minimum qualifications:

  • Bachelor's degree or equivalent practical experience.
  • 8 years of experience in software development.
  • 5 years of experience testing and launching software products.
  • 5 years of experience with performance, large scale systems data analysis, visualization tools, or debugging.
  • 3 years of experience with software design and architecture.
  • 3 years of experience in low-level systems optimization, including performance tuning for GPU or TPU accelerators or high-performance distributed AI serving infrastructure.

Preferred qualifications:

  • Master’s degree or PhD in engineering, computer science, or a related technical field.
  • 8 years of experience with data structures and algorithms.
  • 3 years of experience in a technical leadership role leading project teams and setting technical direction.
  • 3 years of experience working in a matrixed organization involving cross-functional or cross-business projects.
  • 3 years of experience leading the architecture and multi-quarter technical roadmap for AI inference or training systems, focusing on hardware-software co-design.
  • 3 years of experience implementing optimization techniques, such as quantization, speculative decoding, or memory management to reduce total cost of ownership.

Benefits

  • Health, dental, vision, life, disability insurance
  • Retirement Benefits: 401(k) with company match
  • Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
  • Sick Time: 40 hours/year (69 hours/year for Seattle), plus 5 discretionary sick days per instance
  • Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
  • Baby Bonding Leave: 18 weeks
  • Holidays: 13 paid days per year

Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Sunnyvale, CA, USA; Kirkland, WA, USA.

Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.

Skills

AI, AI inference, AI serving, AI/ML, architecture, data analysis, debugging, disaggregated serving, distributed AI, Gemini, GPU, Google Cloud, high-performance computing, key-value cache management, Large Language Models, LLM, low-level systems optimization, memory management, networking, performance tuning, profiling, quantization, serving, software design, software development, speculative decoding, TPU, testing
