at Affine.io
Location: Remote
About Affine
Affine is building an incentivized RL environment that pays miners for incremental improvements on tasks such as program synthesis and coding. Operating on Bittensor's Subnet 120, we've created a sybil-proof, decoy-proof, copy-proof, and overfitting-proof mechanism that rewards genuine model improvements. Our vision is to commoditize reasoning, the highest form of intelligence, by directing and aggregating the work of a large, permissionless group on RL tasks to break the intelligence sound barrier.
Overview
We're looking for research-minded engineers who can push the frontier of reinforcement learning, program synthesis, and reasoning agents inside Affine's competitive RL environments. This role is about experimentation and discovery: designing new post-training methods, exploring agent architectures, and proving them in live competitive benchmarks. You'll take cutting-edge theory (GRPO, PPO, multi-objective RL, program abduction) and turn it into working systems that miners can improve, validate, and monetize through Affine on Bittensor's Subnet 120.
This is a rare opportunity to help reshape how AI is trained, evaluated, and aligned in a decentralized ecosystem. The position is ideal for someone who thrives at the intersection of research and engineering: able to prototype novel algorithms quickly, evaluate them rigorously, and scale them into production pipelines that feed back into Affine's incentive system.
Responsibilities
Annual Salary Range: $150,000 - $380,000 USD