Kolkata, India
Services Delivery Engineer - AWS AI

It's fun to work in a company where people truly BELIEVE in what they're doing!

Job Description: 

Ingram Micro touches 80% of the technology you use every day with our focus on Technology Solutions, Cloud, and Commerce and Lifecycle Solutions. With $50 billion in revenue, we have become the world’s largest technology distributor with operations in 64 countries and more than 35,000 associates.

Job Description: AI & Gen AI ML/Professional Engineering

Role: ML Engineer

Experience:

Total IT Experience: 2 to 4 years
AI Experience: 2 to 4 years
Gen AI Experience: 1 to 2 years

Key Responsibilities:

Develop AI Solutions: Develop scalable and reliable AI solutions.
AI Services Expertise: Proficiency in AI services such as SageMaker, ML Studio, AI Studio, Vertex AI, and IBM Watson.
AI Architecture: Utilization of AI tools, techniques, and frameworks.
Data Handling and Analysis: Expertise in RAG, knowledge DBs (vector DBs), embedding, indexing, knowledge graphs, cosine similarity, and search (a minimal retrieval sketch follows this list).
Gen AI Solutions Development: Design, develop, train/tune/transfer-learn, validate, deploy, manage/monitor, and optimize Gen AI solutions.
Transformer Architecture: Understanding of the Transformer architecture and related generative models, including GANs and VAEs.
LLM Fine-Tuning and Prompt Engineering: Expertise in fine-tuning large language models (LLMs) for specific tasks and in prompt design and engineering techniques such as few-shot learning, prompt chaining, and prompt optimization.
Multimodal AI Techniques: Knowledge of multimodal AI techniques, including vision-language models (e.g., CLIP, DALL-E), speech-language models, and multimodal fusion techniques.
Responsible AI Practices: Understanding of responsible AI practices, techniques for detecting and mitigating bias in LLMs, and strategies for promoting fairness and transparency.
Production Deployment of LLMs: Experience deploying LLMs in production environments, including containerization, scaling, and monitoring techniques for LLM models and pipelines; understanding of NVIDIA NIM, OpenAI, Hugging Face, etc.
Security and Privacy: Knowledge of security and privacy considerations when working with LLMs, such as data privacy, model extraction attacks, and techniques for secure model deployment.
Continual Learning for LLMs: Understanding of continual learning techniques for LLMs, such as rehearsal, replay, and parameter isolation, to enable efficient adaptation to new data and tasks.
Model Creation and Evaluation: Building SLMs, LLMs, and other models with training and test data sets, evaluating models, and performing hyperparameter tuning.
Machine Learning: Proficiency in scikit-learn for supervised and unsupervised ML, including NLP, recommender systems, anomaly detection, and time series.
Custom AI Solutions: Experience with custom object detection, speech recognition, and image classification and recognition.
Deep Learning: Implementing deep learning workloads on NVIDIA/GPU-based hardware; building, training, and testing SLMs, LLMs, and other models.
Data Engineering and Management: Design and manage data pipelines for AI applications using AI services; experience with Python and SQL and knowledge of Java; data cleaning, transformation, and preparation.
DevOps/LLMOps Skills: Automate and streamline deployment pipelines for AI applications using CodePipeline, CodeBuild, GitHub, etc.; implement configuration management and infrastructure as code (IaC) using CloudFormation and Elastic Beanstalk; CI/CD automation, configuration management, and cloud solutions deployment; experience with AWS Lambda, API Gateway, ECS, and monitoring tools such as CloudWatch.
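
To illustrate the retrieval step behind the RAG and cosine-similarity items above, here is a minimal Python sketch of ranking documents by cosine similarity between embedding vectors. The embed() function and the toy documents are hypothetical stand-ins; a real system would call an embedding model (e.g., a SageMaker or Hugging Face endpoint) and query a vector database.

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine similarity: dot product divided by the product of the vector norms.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(text: str, dim: int = 8) -> np.ndarray:
    # Hypothetical placeholder: returns a deterministic pseudo-random vector per text.
    # In practice this would be a call to a real embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

documents = ["reset a user password", "configure a VPC", "fine-tune an LLM"]
doc_vectors = [embed(d) for d in documents]

query_vector = embed("how do I tune a large language model?")

# Rank documents by similarity to the query; in a RAG pipeline the top hits
# would be passed to the LLM as grounding context.
ranked = sorted(
    zip(documents, doc_vectors),
    key=lambda dv: cosine_similarity(query_vector, dv[1]),
    reverse=True,
)
print(ranked[0][0])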

Security and Compliance: Ensure the security and compliance of AI solutions; implement VPC security best practices and maintain security compliance and governance on AWS.
Development and Deployment: Develop, train, and deploy machine learning models on an AI platform; implement AI cognitive services, embedding/vector DBs, and Gen AI enterprise integration (e.g., OpenAI, Hugging Face).
AI and Gen AI: Expertise in machine learning and deep learning models, including Transformers and GPT; experience with AI cognitive services and embedding/vector DBs; Gen AI enterprise integration (OpenAI, Hugging Face, SAP, DFSC, etc.).
Gen AI Use Cases: Conversational AI, summarization, extraction, document processing, task automation, enterprise automation, RPA, etc., across domains such as Sales, Services, Customer Operations, Manufacturing, and Logistics (a minimal summarization sketch follows this list).
Data Engineering: Design data pipelines, storage solutions, and database management; use analytics tools such as Glue, Athena, and QuickSight; data management using LLMs.
Security and Compliance: Proficiency in implementing security compliance.
Documentation: Prepare comprehensive documentation, including blueprints, explainable AI, and metrics.
AI Strategy, Responsible AI, and Risk Management Framework.
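
As one concrete example of the summarization use case above, a minimal sketch using the Hugging Face transformers pipeline API, assuming the transformers package is installed and a default summarization model can be downloaded; the length limits are illustrative values, not requirements of the role.

from transformers import pipeline

# Downloads a default summarization model on first use; production code would pin a specific model.
summarizer = pipeline("summarization")

document = (
    "The customer reported intermittent timeouts when calling the pricing API. "
    "Logs show elevated latency during peak hours, and retries eventually succeed."
)

# max_length/min_length bound the generated summary length (illustrative values).
summary = summarizer(document, max_length=30, min_length=5, do_sample=False)
print(summary[0]["summary_text"])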

Languages:

Proficiency in Python, PyTorch, TensorFlow, LangChain (Agents, Tools, Chains, etc.), Streamlit/Chainlit, and SQL, plus working knowledge of Java.
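
For illustration only, a minimal sketch of chaining a prompt template to a chat model in LangChain, assuming the langchain-openai integration package is installed and an OPENAI_API_KEY environment variable is set; the model name is an example, and real chains typically add tools, retrievers, and output parsers.

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# Prompt template with a single input variable.
prompt = ChatPromptTemplate.from_template(
    "Summarize the following support ticket in one sentence:\n{ticket}"
)

# Example model name; requires OPENAI_API_KEY in the environment.
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)

# LangChain Expression Language: composing runnables with | yields a chain.
chain = prompt | llm

response = chain.invoke(
    {"ticket": "Customer reports intermittent timeouts when calling the pricing API."}
)
print(response.content)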

Certifications (any of the following, or any AI certification):

AWS Machine Learning Specialty
AWS Data Engineer Associate
AWS Certified AI Practitioner

Desired Skills:

Strong design, coding/development, documentation, discussion, and training skills.
Up-to-date knowledge of the latest developments in the Gen AI world.

This role is crucial for driving Ingram Micro's AI and Gen AI initiatives, ensuring high-quality, scalable solutions that meet both internal and external requirements.

Ingram Micro is committed to creating a diverse environment and is proud to be an equal opportunity employer. We are dedicated to fostering an inclusive and accessible environment where all associates are valued, respected, and supported. We are highly driven by our tenets of success: Results, Integrity, Imagination, Responsibility, Courage, and Talent.
