Software Development Manager - Compiler, AWS Neuron, Annapurna Labs
Amazon.com
The Product: AWS Machine Learning accelerators are at the forefront of AWS innovation. The Inferentia chip delivers best-in-class ML inference performance at the lowest cost in the cloud. Trainium will deliver best-in-class ML training performance with the most teraflops (TFLOPS) of compute power for ML in the cloud. This is all enabled by a cutting-edge software stack, the AWS Neuron Software Development Kit (SDK), which includes an ML compiler and runtime and integrates natively into popular ML frameworks such as PyTorch, TensorFlow, and MXNet. The Neuron SDK optimizes the performance of complex neural network models executed on AWS Inferentia and Trainium. AWS Neuron is used at scale by customers and partners such as PyTorch, Epic Games, Snap, Airbnb, Autodesk, Amazon Alexa, and Amazon Rekognition, as well as many more customers across other segments.
The Team: The Amazon Annapurna Labs team is responsible for building innovation in silicon and software for AWS customers. We are at the forefront of innovation by combining cloud scale with the world's most talented engineers. Our team covers multiple disciplines including silicon engineering, hardware design and verification, software, and operations. With such breadth of talent, there are opportunities to learn all the time. We operate in spaces that are very large, yet our teams remain small and agile. There is no blueprint. We're inventing. We're experimenting. When you couple that with the ability to work on so many different products and services, it makes for a unique learning culture.
Learn more about Our History:
https://www.amazon.science/how-silicon-innovation-became-the-secret-sauce-behind-awss-success
You: As a Manager III on the AWS Neuron team, you'll lead a team of compiler engineers developing, deploying, and scaling a compiler targeting AWS Inferentia and Trainium. You'll need to be technically capable, credible, and curious in your own right as a trusted AWS Neuron manager, innovating on behalf of our customers. You'll leverage your vision and technical communication skills as a hands-on partner to AWS ML services teams: getting involved in pre-silicon design, bringing new products, optimizations, and features to market, and driving many other exciting projects to ensure the Neuron SDK exceeds our customers' needs for high performance, low cost, and ease of use.
You will have deep knowledge of resource management, scheduling, code generation, optimization, and new instruction set architectures, including CPU, NPU, GPU, and novel forms of compute.
Explore the Product:
https://awsdocs-neuron.readthedocs-hosted.com/en/latest/neuron-guide/neuron-cc/index.html
https://github.com/aws/aws-neuron-sdk
https://aws.amazon.com/machine-learning/neuron/
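For a concrete sense of what the Neuron SDK's framework integration looks like, below is a minimal sketch of compiling a PyTorch model with torch-neuron for Inferentia, modeled on the public documentation linked above. Package names and API details vary by SDK version and target (for example, Trainium uses torch-neuronx), so treat this as illustrative rather than authoritative.

import torch
import torch_neuron  # registers the torch.neuron namespace (torch-neuron 1.x, Inferentia)
from torchvision import models

# Load a pretrained model and put it in inference mode
model = models.resnet50(pretrained=True)
model.eval()

# Ahead-of-time compile the model for NeuronCores by tracing it with example inputs
example_input = torch.rand(1, 3, 224, 224)
model_neuron = torch.neuron.trace(model, example_inputs=[example_input])

# The result is a TorchScript module that can be saved and later loaded on an Inf1 instance
model_neuron.save("resnet50_neuron.pt")
output = model_neuron(example_input)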
To be considered for this role, candidates must be currently located in, or willing to relocate to, Toronto.