AI Researcher, Core ML (Turbo)
Company: Together AI
Location: San Francisco
Posted on: April 1, 2026
Job Description:
About the Role

The Turbo team sits at the intersection of efficient inference (algorithms, architectures, engines) and post-training / RL systems. We build and operate the systems behind Together’s API, including high-performance inference and RL/post-training engines that run at production scale. Our mandate is to push the frontier of efficient inference and RL-driven training: making models dramatically faster and cheaper to run, while improving their capabilities through RL-based post-training (e.g., GRPO-style objectives).

This work lives at the interface of algorithms and systems: asynchronous RL, rollout collection, scheduling, and batching all interact with engine design, creating many knobs to tune across the RL algorithm, training loop, and inference stack. Much of the job is modifying production inference systems (for example, SGLang- or vLLM-style serving stacks and speculative decoding systems such as ATLAS), grounded in a strong understanding of post-training and inference theory rather than purely theoretical algorithm design.

You’ll work across the stack, from RL algorithms and training engines to kernels and serving systems, to build and improve frontier models via RL pipelines. People on this team are often spiky: some are more RL-first, some are more systems-first. Depth in one of these areas, plus an appetite to collaborate across the others and grow toward more full-stack ownership over time, is ideal.

Requirements
We don’t expect anyone to check every box below. People on this team typically have deep expertise in one or more areas and enough breadth (or interest) to work effectively across the stack. The closer you are to full-stack (inference and post-training/RL systems), the stronger the fit, but being spiky in one area and eager to grow is absolutely okay.

You might be a good fit if you:

Have strong expertise in at least one of the following, and are excited to collaborate across (and grow into) the others:
- Systems-first profile: large-scale inference systems (e.g., SGLang, vLLM, FasterTransformer, TensorRT, custom engines, or similar), GPU performance, distributed serving.
- RL-first profile: RL / post-training for LLMs or large models (e.g., GRPO, RLHF/RLAIF, DPO-like methods, reward modeling), and using these to train or fine-tune real models.
- Model architecture design for Transformers or other large neural nets.
- Distributed systems / high-performance computing for ML.

Are comfortable working from algorithms to engines:
- Strong coding ability in Python.
- Experience profiling and optimizing performance across GPU, networking, and memory layers.
- Able to take a new sampling method, scheduler, or RL update and turn it into a production-grade implementation in the engine and/or training stack.

Have a solid research foundation in your area(s) of depth:
- Track record of impactful work in ML systems, RL, or large-scale model training (papers, open-source projects, or production systems).
- Can read new RL / post-training papers, understand their implications for the stack, and design minimal, correct changes in the right layer (training engine vs. inference engine vs. data / API).

Operate well as a full-stack problem solver:
- You naturally ask: “Where in the stack is this really bottlenecked?”
- You enjoy collaborating with infra, research, and product teams, and you care about both scientific quality and user-visible wins.

Minimum qualifications
- 3 years of experience working on ML systems, large-scale model training, inference, or adjacent areas (or equivalent experience via research / open source).
- Advanced degree in Computer Science, EE, or a related field, or equivalent practical experience.
- Demonstrated experience owning complex technical projects end-to-end.

If you’re excited about the role and strong in some of these areas, we encourage you to apply even if you don’t meet every single requirement.
Responsibilities

Advance inference efficiency end-to-end
- Design and prototype algorithms, architectures, and scheduling strategies for low-latency, high-throughput inference.
- Implement and maintain changes in high-performance inference engines (e.g., SGLang- or vLLM-style systems and Together’s inference stack), including kernel backends, speculative decoding (e.g., ATLAS), quantization, etc.
- Profile and optimize performance across GPU, networking, and memory layers to improve latency, throughput, and cost.

Unify inference with RL / post-training
- Design and operate RL and post-training pipelines (e.g., RLHF, RLAIF, GRPO, DPO-style methods, reward modeling) where 90% of the cost is inference, jointly optimizing algorithms and systems.
- Make RL and post-training workloads more efficient with inference-aware training loops: for example, async RL rollouts, speculative decoding, and other techniques that make large-scale rollout collection and evaluation cheaper.
- Use these pipelines to train, evaluate, and iterate on frontier models on top of our inference stack.
- Co-design algorithms and infrastructure so that objectives, rollout collection, and evaluation are tightly coupled to efficient inference, and quickly identify bottlenecks across the training engine, inference engine, data pipeline, and user-facing layers.
- Run ablations and scale-up experiments to understand trade-offs between model quality, latency, throughput, and cost, and feed these insights back into model, RL, and system design.

Own critical systems at production scale
- Profile, debug, and optimize inference and post-training services under real production workloads.
- Drive roadmap items that require real engine modification: changing kernels, memory layouts, scheduling logic, and APIs as needed.
- Establish metrics, benchmarks, and experimentation frameworks to validate improvements rigorously.

Provide technical leadership (Staff level)
- Set technical direction for cross-team efforts at the intersection of inference, RL, and post-training.
- Mentor other engineers and researchers on full-stack ML systems work and performance engineering.

About Together AI

Together AI is a
research-driven artificial intelligence company. We believe open and transparent AI systems will drive innovation and create the best outcomes for society, and together we are on a mission to significantly lower the cost of modern AI systems by co-designing software, hardware, algorithms, and models. We have contributed leading open-source research, models, and datasets to advance the frontier of AI, and our team has been behind technological advancements such as FlashAttention, Hyena, FlexGen, and RedPajama. We invite you to join a passionate group of researchers on our journey to build the next generation of AI infrastructure.
Compensation

We offer competitive compensation, startup equity, health insurance, and other competitive benefits. The US base salary range for this full-time position is $200,000 - $280,000, plus equity and benefits. Our salary ranges are determined by location, level, and role. Individual compensation will be determined by experience, skills, and job-related knowledge.

Equal Opportunity

Together AI is
an Equal Opportunity Employer and is proud to offer equal
employment opportunity to everyone regardless of race, color,
ancestry, religion, sex, national origin, sexual orientation, age,
citizenship, marital status, disability, gender identity, veteran
status, and more. Please see our privacy policy at
https://www.together.ai/privacy