Role Proficiency:
Acts under the guidance of a DevOps architect; leads more than one Agile team.
Outcomes:
- Interprets the DevOps tool/feature/component design to develop and support it in accordance with specifications
- Adapts existing DevOps solutions and creates relevant DevOps solutions for new contexts
- Codes, debugs, tests, and documents; communicates DevOps development stages and the status of development/support issues
- Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components
- Optimises the efficiency, cost, and quality of DevOps processes, tools, and technology development
- Validates results with user representatives; integrates and commissions the overall solution
- Helps engineers troubleshoot issues that are novel or complex and not covered by SOPs
- Designs, installs, and troubleshoots CI/CD pipelines and software
- Automates infrastructure provisioning on cloud and on-premises with the guidance of architects
- Provides guidance to DevOps engineers so that they can support existing components
- Good understanding of Agile methodologies; able to work with diverse teams
- Knowledge of more than one DevOps tool stack (AWS, Azure, GCP, open source)

Measures of Outcomes:
- Quality of deliverables
- Error rate/completion rate at various stages of the SDLC/PDLC
- Number of components reused
- Number of domain/technology/product certifications obtained
- SLA/KPI for onboarding projects or applications
- Stakeholder management
- Percentage achievement of specification/completeness/on-time delivery

Outputs Expected:
Automated components:
- Deliver components that automate the installation and configuration of software/tools, both on-premises and in the cloud
- Deliver components that automate parts of the build/deploy process for applications
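As an illustration of such an automated component, a minimal Ansible playbook (Ansible is one of the tools named later in this document) could install and configure a piece of software on a group of hosts. The inventory group, package name, and file paths below are illustrative assumptions, not part of the role description:

```yaml
# Hypothetical playbook: install nginx and push its configuration.
# "appservers" and the template/destination paths are illustrative only.
- name: Install and configure nginx
  hosts: appservers
  become: true
  tasks:
    - name: Install the nginx package
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Deploy nginx configuration from a template
      ansible.builtin.template:
        src: templates/nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Restart nginx

  handlers:
    - name: Restart nginx
      ansible.builtin.service:
        name: nginx
        state: restarted
```

The same playbook structure works against on-premises hosts or cloud instances, which is what makes a component like this reusable across environments.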
Configured components:
Scripts:
Training/SOPs:
Measure Process Efficiency/Effectiveness:
Stay updated on innovation and technology changes.
Operations:
Skill Examples:
- Design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins, Bamboo, Ansible, Puppet, Chef, PowerShell, Docker, or Kubernetes
- Integrating code quality/test analysis tools such as SonarQube, Cobertura, or Clover
- Integrating build/deploy pipelines with test automation tools such as Selenium, JUnit, or NUnit
- Scripting skills (Python, Linux shell, Perl, Groovy, PowerShell)
- Infrastructure automation skills (Ansible, Puppet, Chef, PowerShell)
- Repository management/migration automation: Git, Bitbucket, GitHub, ClearCase
- Build automation scripts: Maven, Ant
- Artefact repository management: Nexus, Artifactory
- Dashboard management and automation: ELK, Splunk
- Configuration of cloud infrastructure (AWS, Azure, Google Cloud)
- Migration of applications from on-premises to cloud infrastructure
- Azure DevOps, ARM (Azure Resource Manager), and DSC (Desired State Configuration), with strong debugging skills in C# and .NET
- Setting up and managing Jira projects and Git/Bitbucket repositories
- Containerization tools such as Docker and Kubernetes

Knowledge Examples:
- Installation/configuration/build/deploy processes and tools
- IaaS cloud providers (AWS, Azure, Google, etc.) and their tool sets
- The application development lifecycle
- Quality assurance processes
- Quality automation processes and tools
- Multiple tool stacks, not just one
- Build and release; branching/merging
- Containerization
- Agile methodologies
- Software security compliance (GDPR, OWASP) and tools (Black Duck, Veracode, Checkmarx)

Additional Comments:
DevOps SRE Engineer

Required Skills:
- Build, deliver, and support data solutions and pipelines on the AWS cloud platform
- Maintain and troubleshoot the AWS production environment as an SRE engineer
- Use tools like Terraform and Terragrunt for infrastructure as code (IaC)
- Implement and manage CI/CD deployments and DevOps capabilities in a data environment
- Ensure data quality and availability for customers, and understand the business impact of technical issues
- Write Python and PySpark code
- Debug issues with AWS services such as AppFlow, Glue, Glue Crawler, Glue Catalog, Delta tables, Lambda, Lake Formation, RDS, Redshift, IAM, SNS, SQS, S3, and Airflow
- Follow software development processes and Agile methodologies
- Design and operate a scalable, reliable, secure, and high-performance AWS platform
- Handle large and complex data ingestion to the cloud (AWS), involving both batch and real-time data
- Resolve production issues and ensure zero downtime of AWS services/data availability

Qualifications:
- Production environment support experience
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong knowledge of AWS services and cloud architecture
- Excellent problem-solving skills and attention to detail
- Strong communication and collaboration abilities

Good to have:
- An in-depth understanding of large-scale data sets, including both structured and unstructured data
- Experience with Harness for CI/CD and other capabilities
- Ability to apply data warehousing concepts to support an enterprise data warehouse
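One recurring pattern behind the production-support and debugging duties above is retrying throttled or transiently failing AWS API calls with exponential backoff. A minimal, generic sketch in Python follows; the helper name, parameters, and the simulated flaky call are all illustrative assumptions, not part of the role description or any AWS SDK:

```python
import random
import time


def retry_with_backoff(fn, max_attempts=5, base_delay=0.5, retriable=(Exception,)):
    """Call fn(), retrying on retriable errors with exponential backoff and jitter.

    A common SRE pattern around throttled AWS API calls; the names here are
    illustrative, not from a specific library.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return fn()
        except retriable:
            if attempt == max_attempts:
                raise  # out of attempts: surface the last error to the caller
            # Full jitter: sleep a random amount in [0, base * 2^(attempt-1)]
            time.sleep(random.uniform(0, base_delay * 2 ** (attempt - 1)))


# Example: a simulated call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}


def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("simulated throttling")
    return "ok"


result = retry_with_backoff(flaky_call, base_delay=0.01)
```

In practice the wrapped call would be an AWS SDK (boto3) operation, and `retriable` would be narrowed to the throttling exceptions that operation can raise, rather than the broad `Exception` default used in this sketch.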