Redmond, Washington, United States
Research Intern - AI Frontiers - Agentic AI Models & Synthetic Data Generation

Research Internships at Microsoft provide a dynamic environment for research careers with a network of world-class research labs led by globally-recognized scientists and engineers, who pursue innovation in a range of scientific and technical disciplines to help solve complex challenges in diverse fields, including computing, healthcare, economics, and the environment.

The AI Frontier Lab at Microsoft Research is seeking Research Interns to advance the state of the art in foundation-model post-training, specialization, and alignment of models with Agentic AI capabilities such as reasoning and computer use. Our work spans both synthetic data generation and algorithmic advances in model-training recipes.


Our lab conducts cutting-edge research in artificial intelligence (AI) and publishes findings in top-tier venues in AI and machine learning (ML), including NeurIPS, ICLR, ICML, and others. We release models and libraries focused on small models (e.g., Phi-3, Orca), synthetic data generation (e.g., AgentInstruct), and Agentic AI (e.g., OmniParser, AutoGen). The lab also works within Microsoft's ecosystem to bring its AI-driven technologies into multiple products.


We seek Research Interns with demonstrated ability in technical work and a proven record of influential publications in AI. For this role, you should have a keen interest in developing synthetic data generation methods or training algorithms that enable models to take actions in a virtual world and reason more effectively, while maintaining computational efficiency.


Research areas of particular interest for this team include, but are not limited to:

- Developing methods to create high-quality data synthetically, with minimal human intervention.
- Agentic systems for data generation, selection, and filtering.
- Developing novel training algorithms for enhancing reasoning and/or action-taking.
- Reinforcement learning approaches for improving logical and mathematical reasoning.
- Exploring scaling laws between test-time and training-time compute.
- Advanced optimization techniques for efficient training of large-scale models.

Our group takes a holistic approach to improving foundation models, spanning a variety of data modalities (language, vision, multi-modal, and structured data) and modern model architectures.


Priority will be given to candidates with a proven publication record in top-tier conferences, who have demonstrated the ability to develop original research and perform hands-on research, and who work well in a collaborative and dynamic environment.
