Distributed Training & Inference Optimization Engineer (LLM)


RAKUTEN ASIA PTE. LTD.
Posted: 9 days ago
Minimum level: N/A
Employment type: Full-time
Location: Singapore
About Rakuten

Rakuten Group serves nearly 100 million customers in Japan and over 1 billion globally, offering more than 70 services spanning e-commerce, payments, financial services, telecommunications, media, sports, and more.

Division Introduction

The AI & Data Division (AIDD) spearheads data science and AI initiatives by leveraging data from across Rakuten Group. We build a platform for large-scale field experimentation using cutting-edge technologies, delivering critical insights that enable faster and better contributions to our business. Our division has an international culture shaped by talented employees from around the world. Following the strategic vision of "Rakuten as a data-driven membership company," AIDD is expanding its data and AI activities across multiple Rakuten Group companies.

About the Role

As a Distributed Training & Inference Optimization Engineer, you will focus on maximizing the performance, efficiency, and scalability of LLM training and inference workloads on Rakuten's GPU clusters. You will optimize training frameworks (e.g., PyTorch, DeepSpeed, FSDP) and inference engines (e.g., vLLM, TensorRT-LLM, Triton, SGLang) in depth, ensuring Rakuten's AI models run at peak efficiency.

This role requires strong expertise in GPU-accelerated ML frameworks, distributed training, and inference optimization, with a focus on reducing training time, improving GPU utilization, and minimizing inference latency.
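
For a concrete flavor of the distributed-training side of the role, here is a minimal sketch of FSDP sharding in PyTorch (illustrative only, not Rakuten's actual stack; the model, batch, and hyperparameters are placeholders). It runs one training step when launched with torchrun --nproc_per_node=<num_gpus>:

    import os
    import torch
    import torch.distributed as dist
    from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

    # Each rank binds to one GPU; NCCL carries the inter-GPU collectives.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))

    # Placeholder model; a real job would wrap a transformer with an auto-wrap policy.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 1024),
    ).cuda()
    model = FSDP(model)  # shards parameters, gradients, and optimizer state across ranks

    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x = torch.randn(16, 1024, device="cuda")

    optimizer.zero_grad(set_to_none=True)
    loss = model(x).pow(2).mean()  # stand-in loss
    loss.backward()                # gradients are reduce-scattered across ranks
    optimizer.step()
    dist.destroy_process_group()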

Key Responsibilities
  • Optimize LLM training frameworks (e.g., PyTorch, DeepSpeed, Megatron-LM, FSDP) to maximize GPU utilization and reduce training time.
  • Profile and optimize distributed training bottlenecks (e.g., NCCL issues, CUDA kernel efficiency, communication overhead).
  • Implement and tune inference optimizations (e.g., quantization, dynamic batching, KV caching) for low-latency, high-throughput LLM serving (vLLM, TensorRT-LLM, Triton, SGLang).
  • Collaborate with infrastructure teams to improve GPU cluster scheduling, resource allocation, and fault tolerance for large-scale training jobs.
  • Develop benchmarking tools to measure and improve training throughput, memory efficiency, and inference latency (an illustrative profiling sketch follows this list).
  • Research and apply cutting-edge techniques (e.g., mixture-of-experts, speculative decoding) to optimize LLM performance.
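
As a rough illustration of the profiling and benchmarking work above (a minimal sketch with a placeholder model, not Rakuten tooling), torch.profiler can rank operations by device time per training step:

    import torch
    from torch.profiler import profile, ProfilerActivity

    device = "cuda" if torch.cuda.is_available() else "cpu"
    activities = [ProfilerActivity.CPU]
    if device == "cuda":
        activities.append(ProfilerActivity.CUDA)

    model = torch.nn.Sequential(
        torch.nn.Linear(4096, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 4096),
    ).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    batch = torch.randn(32, 4096, device=device)

    with profile(activities=activities) as prof:
        for _ in range(10):  # a few steps so kernel times are representative
            optimizer.zero_grad(set_to_none=True)
            loss = model(batch).pow(2).mean()  # stand-in loss
            loss.backward()
            optimizer.step()

    # Rank ops by total time to spot optimization targets.
    sort_key = "cuda_time_total" if device == "cuda" else "cpu_time_total"
    print(prof.key_averages().table(sort_by=sort_key, row_limit=10))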

Mandatory Qualifications
  • 3+ years of hands-on experience in GPU-accelerated ML training & inference optimization, preferably for LLMs or large-scale deep learning models.
  • Deep expertise in PyTorch, DeepSpeed, FSDP, or Megatron-LM, with experience in distributed training optimizations.
  • Strong knowledge of LLM inference optimizations (e.g., quantization, pruning, KV caching, continuous batching).
  • Bachelor's or higher degree in Computer Science, Engineering, or related field.

Nice-to-Have Skills
  • Proficiency in CUDA, Triton kernel programming, NVIDIA tooling (Nsight, NCCL), and performance profiling (e.g., PyTorch Profiler, TensorBoard).
  • Experience with LLM-specific optimizations (e.g., FlashAttention, PagedAttention, LoRA, speculative decoding).
  • Familiarity with Kubernetes (K8s) for GPU workloads (e.g., KubeFlow, Volcano).
  • Contributions to open-source ML frameworks (e.g., PyTorch, DeepSpeed, vLLM).
  • Experience with inference serving frameworks (e.g., vLLM, TensorRT-LLM, Triton, Hugging Face TGI).
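
For illustration, a minimal vLLM sketch of the serving-side techniques named above (the model name and settings are placeholders; PagedAttention KV caching and continuous batching are handled by the engine itself):

    from vllm import LLM, SamplingParams

    llm = LLM(
        model="facebook/opt-125m",     # small public example, not a production model
        gpu_memory_utilization=0.90,   # VRAM budget for weights plus the paged KV cache
        # quantization="awq",          # e.g. serve an AWQ-quantized checkpoint
    )
    params = SamplingParams(temperature=0.7, max_tokens=64)

    # Submitting many prompts at once lets the engine batch them continuously.
    prompts = [f"Explain KV caching, take {i}:" for i in range(8)]
    for out in llm.generate(prompts, params):
        print(out.outputs[0].text.strip()[:80])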

Why Join Us?
  • Work on cutting-edge LLM training & inference optimization at scale.
  • Directly impact Rakuten's AI infrastructure by improving efficiency and reducing costs.
  • Collaborate with global AI/ML teams on high-impact challenges.
  • Opportunity to research and implement state-of-the-art GPU optimizations.