Sunnyvale, California, USA
Machine Learning Researcher, Multimodal Foundation Models
Imagine what you could do here. At Apple, new ideas have a way of becoming extraordinary products, services, and customer experiences very quickly. Bring passion and dedication to your job and there's no telling what you could accomplish. Multifaceted, amazing people and inspiring, innovative technologies are the norm here. The people who work here have reinvented entire industries with Apple hardware products. The same passion for innovation that goes into our products also applies to our practices, strengthening our commitment to leave the world better than we found it. Join us in this truly exciting era of Artificial Intelligence to help deliver the next groundbreaking Apple products and experiences!

As a member of our dynamic group, you will have the unique and rewarding opportunity to shape upcoming research directions in the field of multimodal foundation models that will inspire future Apple products. We are continuously advancing the state of the art in Computer Vision and Machine Learning. You will work alongside highly accomplished and deeply technical scientists and engineers to develop state-of-the-art solutions to challenging problems. We touch all aspects of language and multimodal foundation models, from data collection and curation to modeling, evaluation, and deployment. This is a unique opportunity to be part of what shapes the future of Apple products that will touch the lives of many people.

We (the Spatial Perception Team) are looking for a machine learning researcher to work in the field of Generative AI and multimodal foundation models. Our team has an established track record of shipping features that leverage multiple sensors, such as FaceID, RoomPlan, and hand tracking in VisionPro. We are focused on building experiences that harness the power of our sensing hardware as well as large foundation models.