SoC Modeling Engineering Team Lead, Annapurna Labs, Machine Learning Accelerators
Amazon.com
AWS Utility Computing (UC) provides product innovations that continue to set AWS’s services and features apart in the industry. As a member of the UC organization, you’ll support the development and management of Compute, Database, Storage, Platform, and Productivity Apps services in AWS, including support for customers who require specialized security solutions for their cloud services. Additionally, this role may involve exposure to and experience with Amazon's growing suite of generative AI services and other cutting-edge cloud computing offerings across the AWS portfolio.
Annapurna Labs (our organization within AWS UC) designs silicon and software that accelerates innovation. Customers choose us to create cloud solutions that solve challenges that were unimaginable a short time ago—even yesterday. Our custom chips, accelerators, and software stacks enable us to take on technical challenges that have never been seen before, and deliver results that help our customers change the world.
Custom SoCs (systems-on-chips) are the brains behind AWS’s machine learning servers. Our team builds C++ models of these accelerator SoCs for use by internal partner teams. The modeling team currently develops functional models, but we’re expanding to include performance modeling for key SoC components. We’re looking for a Sr. Modeling Engineer to lead the team in architecting and delivering new functional and performance models, infrastructure, and tooling for our customers.
As the ML accelerator modeling team lead, you will:
- Chart the technical direction for the modeling team
- Build and manage a small, strong team of 3-5 modeling engineers
- Develop and own functional and performance models end-to-end, including model architecture, integration with other model or infrastructure components, testing, correlation, and debug
- Drive model and modeling infrastructure performance improvements
- Work closely with architecture, RTL design, design-verification, emulation, and software teams
- Innovate on the tooling you provide to customers, making it easier for them to use our SoC models
- Develop software which can be maintained, improved upon, documented, tested, and reused
Annapurna Labs, our organization within AWS, designs and deploys some of the largest custom silicon in the world, with many subsystems that must all be modeled, tested, and correlated with high quality. The model is a critical piece of software used in both our SoC and SW stack development processes. You’ll collaborate with many internal customers who depend on your models to do their own work effectively, and you'll work closely with these teams to push the boundaries of how we're using modeling to build successful products.
You will thrive in this role if you:
- Are an expert in performance modeling for SoCs, ASICs, GPUs, or CPUs
- Enjoy building, managing, and leading small teams
- Are comfortable modeling in C++ and familiar with Python
- Enjoy learning new technologies, building software at scale, moving fast, and working closely with colleagues as part of a small team within a large organization
- Want to jump into an ML role, or get deeper into the details of ML at the system level
Although we are building machine learning chips, no machine learning background is needed for this role. The role spans modeling of both the ML and management regions of our chips, and you’ll dip your toes into each. You’ll be able to ramp up on ML as part of the role, and any ML knowledge that’s required can be learned on the job.
This role can be based in either Cupertino, CA or Austin, TX. The team is split between the two sites, with no preference for one over the other.
This is a fast-paced role where you'll work with thought-leaders in multiple technology areas. You'll have high standards for yourself and everyone you work with, and you'll be constantly looking for ways to improve your software, as well as our products' overall performance, quality, and cost.
We're changing an industry. We're searching for individuals who are ready for this challenge, who want to reach beyond what is possible today. Come join us and build the future of machine learning!
About the team
Our team is dedicated to supporting new members. We have a broad mix of experience levels and tenures, and we’re building an environment that celebrates knowledge-sharing and mentorship. Our senior members provide one-on-one mentoring and thorough, but kind, code reviews. We care about your career growth and strive to assign projects that help you develop your engineering expertise so you feel empowered to take on more complex tasks in the future.
Diverse Experiences
AWS values diverse experiences. Even if you do not meet all of the qualifications and skills listed in the job description, we encourage candidates to apply. If your career is just starting, hasn’t followed a traditional path, or includes alternative experiences, don’t let it stop you from applying.
About AWS
Amazon Web Services (AWS) is the world’s most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating — that’s why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture
Here at AWS, it’s in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empowers us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Work/Life Balance
We value work-life harmony. Success at work should never require sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
Mentorship & Career Growth
We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.