Software Engineer - Machine Learning Training - Singapore

BYTEDANCE PTE. LTD.
Posted date: 8 days ago
Minimum level: N/A
Job category: Human Resources
About Us
Founded in 2012, ByteDance has a mission to inspire creativity and enrich life. With a suite of more than a dozen products, including TikTok, Lemon8, CapCut and Pico as well as platforms specific to the China market, including Toutiao, Douyin, and Xigua, ByteDance has made it easier and more fun for people to connect with, consume, and create content.
Why Join ByteDance
Inspiring creativity is at the core of ByteDance's mission. Our innovative products are built to help people authentically express themselves, discover and connect - and our global, diverse teams make that possible. Together, we create value for our communities, inspire creativity and enrich life - a mission we work towards every day.
As ByteDancers, we strive to do great things with great people. We lead with curiosity, humility, and a desire to make an impact in a rapidly growing tech company. By constantly iterating and fostering an "Always Day 1" mindset, we achieve meaningful breakthroughs for ourselves, our Company, and our users. When we create and grow together, the possibilities are limitless. Join us.
Diversity & Inclusion
ByteDance is committed to creating an inclusive space where employees are valued for their skills, experiences, and unique perspectives. Our platform connects people from across the globe and so does our workplace. At ByteDance, our mission is to inspire creativity and enrich life. To achieve that goal, we are committed to celebrating our diverse voices and to creating an environment that reflects the many communities we reach. We are passionate about this and hope you are too.
About the team
The ByteDance Large Model Team is committed to developing the industry's most advanced AI large model technology, becoming a world-class research team, and contributing to technological and social development. The team has a long-term vision and determination in the field of AI, with research directions covering NLP, CV, speech, and other areas. Drawing on the platform's abundant data and computing resources, the team has continued to invest in these areas and has launched its own general-purpose large model with multi-modal capabilities.
The Machine Learning (ML) System sub-team combines systems engineering and machine learning to develop and maintain massively distributed ML training and inference systems and services around the world, providing high-performance, highly reliable, and scalable infrastructure for LLM/AIGC/AGI workloads.
In our team, you'll have the opportunity to build large-scale heterogeneous systems integrating GPUs, NPUs, RDMA, and storage and keep them running stably and reliably, deepen your expertise in coding, performance analysis, and distributed systems, and be involved in the decision-making process. You'll also be part of a global team with members from the United States, China, and Singapore working collaboratively towards a unified project direction.
Responsibilities:
- Build the next generation of SFT/RL training frameworks
- Optimize end-to-end LLM/AIGC training efficiency, including reducing memory usage and tuning model-parallelism strategies (a minimal illustration follows this list)
- Make the training framework easy to adopt, with good out-of-the-box performance
- Collaborate with critical product teams to help them launch their LLM/AIGC products
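As a rough illustration of the memory-usage reduction mentioned above, the sketch below (not part of the job description; the model, shapes, and loss are hypothetical placeholders) shows two common PyTorch levers for cutting training memory: activation checkpointing and bf16 mixed precision.

```python
# Minimal sketch, assuming a generic PyTorch model: activation checkpointing
# recomputes intermediate activations in the backward pass instead of storing
# them, and bf16 autocast reduces activation memory for most ops.
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class Block(nn.Module):
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.ff(x)

class TinyModel(nn.Module):
    def __init__(self, dim: int = 1024, depth: int = 8):
        super().__init__()
        self.blocks = nn.ModuleList([Block(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, dim)

    def forward(self, x):
        for blk in self.blocks:
            # Trade compute for memory: recompute this block's activations on backward.
            x = checkpoint(blk, x, use_reentrant=False)
        return self.head(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyModel().to(device)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 128, 1024, device=device)
with torch.autocast(device_type=device, dtype=torch.bfloat16):
    loss = model(x).float().pow(2).mean()  # dummy loss, for illustration only
loss.backward()
opt.step()
opt.zero_grad(set_to_none=True)
```

Model-parallelism tuning (tensor, pipeline, or sequence parallelism) builds on framework-level sharding hooks and is beyond the scope of this short sketch.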
Qualifications
Minimum Qualifications:
- Bachelor's degree or above in computer science, electronics, automation, software engineering, or a related field
- At least 3 years of working experience with C/C++; proficient in algorithms and data structures; familiar with Python
- Understanding of the basic principles of deep learning algorithms, familiarity with common neural network architectures, and knowledge of deep learning training frameworks such as PyTorch and TensorFlow
- A strong sense of responsibility, good learning and communication skills, self-drive, and good team spirit
Preferred Qualifications:
- Proficiency in GPU high-performance computing and optimization with CUDA; in-depth understanding of computer architecture; familiarity with parallel-computing optimization, memory-access optimization, low-bit computation, etc.
- Familiarity with distributed training frameworks such as FSDP, DeepSpeed, and Megatron (see the sketch after this list)
- Strong knowledge of LLMs and experience in accelerating and optimizing LLM models
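For context, FSDP, DeepSpeed, and Megatron are distributed-training frameworks that shard or partition model state across devices. The sketch below (a minimal, hedged example of PyTorch FSDP only; the model, hyperparameters, and launch command are assumptions, not part of the posting) shows the basic wrapping pattern that shards parameters, gradients, and optimizer state across ranks.

```python
# Minimal sketch of sharded data-parallel training with PyTorch FSDP.
# Assumes launch via `torchrun --nproc_per_node=<num_gpus> train.py`;
# the model and batch below are placeholders.
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group(backend="nccl")
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    model = nn.Sequential(nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024)).cuda()
    # FSDP shards parameters, gradients, and optimizer state across ranks,
    # gathering full parameters only for the duration of each forward/backward.
    model = FSDP(model)
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for _ in range(10):
        x = torch.randn(8, 1024, device="cuda")
        loss = model(x).pow(2).mean()  # dummy loss, for illustration only
        loss.backward()
        opt.step()
        opt.zero_grad(set_to_none=True)

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

DeepSpeed ZeRO and Megatron-LM tensor/pipeline parallelism address the same scaling problem with different sharding and partitioning strategies.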
JOB SUMMARY
Software Engineer - Machine Learning Training - Singapore
BYTEDANCE PTE. LTD.
Singapore
Full-time
Posted 8 days ago