
453 IaC Jobs - Page 10

JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 10.0 years

12 - 15 Lacs

Bengaluru

Hybrid

Job Description
We are seeking a skilled and proactive AWS DevOps Engineer to join our growing team. You will be responsible for managing scalable infrastructure, automating deployments, monitoring environments, and ensuring optimal performance and security across cloud-based systems. If you're passionate about automation, cloud technologies, and system reliability, we'd love to hear from you!

Key Responsibilities
- Design, manage, and optimize AWS infrastructure components (EC2, S3, RDS, IAM, VPC, Lambda, etc.).
- Develop and maintain automation scripts using Bash, Python, or PowerShell for operations, deployments, and monitoring.
- Implement monitoring and alerting systems using CloudWatch, Datadog, Prometheus, or similar tools.
- Automate infrastructure provisioning through Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
- Enforce security best practices (IAM policies, encryption, logging, patch management).
- Manage incident response, conduct root cause analysis, and resolve production issues efficiently.
- Support and enhance CI/CD pipelines using tools like Jenkins, AWS CodePipeline, GitHub Actions, etc.
- Monitor and optimize cost, performance, and resource utilization across environments.
- Ensure robust backup and disaster recovery strategies for cloud workloads.
- Participate in on-call rotations and respond to high-priority alerts when necessary.

Nice to Have
- AWS certifications: AWS Certified SysOps Administrator or Solutions Architect.
- Experience with Kubernetes, ECS, or EKS.
- Familiarity with Ansible, Chef, or other configuration management tools.
- Exposure to multi-cloud or hybrid-cloud environments.
- Experience working in regulated environments (e.g., healthcare, finance, government).

Why Join Us?
- Opportunity to work with a high-performing, collaborative DevOps team.
- Exposure to cutting-edge cloud technologies.
- Dynamic work culture with a strong emphasis on innovation and continuous learning.

Interested candidates can apply here or send your resume to srinivas.appana@relevancelab.com
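
For illustration (not part of the posting): the role above asks for Python automation around CloudWatch monitoring and alerting. Below is a minimal sketch of that kind of script, assuming boto3 with configured AWS credentials; the instance ID and SNS topic ARN are hypothetical placeholders.

```python
# Minimal illustration of CloudWatch alerting automation of the kind this role
# describes. Assumes boto3 is installed and AWS credentials are configured;
# the instance ID and SNS topic ARN are hypothetical placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="ec2-high-cpu-example",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # evaluate 5-minute averages
    EvaluationPeriods=3,       # three consecutive breaches before alarming
    Threshold=80.0,            # alert above 80% CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```

In practice such alarms would usually be managed through Terraform or CloudFormation rather than ad-hoc scripts; the snippet only shows the API shape.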

Posted 1 month ago

Apply

3.0 - 6.0 years

8 - 13 Lacs

Gurgaon, Haryana, India

On-site

The Lead MLOps Engineer will lead technology initiatives aimed at improving business value and outcomes in digital marketing and commercial analytics through the adoption of Artificial Intelligence (AI) enabled solutions, working with cross-functional teams across AI projects to operationalize data science models into deployed, scalable solutions that deliver business value. They should be inquisitive and bring an innovative mindset to work every day, researching, proposing, and implementing MLOps process improvements, solution ideas, and ways of working to be more agile, lean, and productive. The role provides leadership and technical expertise in operationalizing machine learning models, bridging the gap between data science and IT operations. Key responsibilities include designing, implementing, and optimizing MLOps infrastructure, building CI/CD pipelines for ML models, and ensuring the security and scalability of ML systems.

Key Responsibilities
- Architect & Deploy: Design and manage scalable ML infrastructure on Azure (AKS), leveraging Infrastructure as Code principles.
- Automate & Accelerate: Build and optimize CI/CD pipelines with GitHub Actions for seamless software, data, and model delivery.
- Engineer Performance: Develop efficient and reliable data pipelines using Python and distributed computing frameworks.
- Ensure Reliability: Implement solutions for deploying and maintaining ML models in production.
- Collaborate & Innovate: Partner with data scientists and engineers to continuously enhance existing MLOps capabilities.

Key Competencies
- Experience: A minimum of 5+ years of experience in software engineering, data science, or a related field, with experience in MLOps.
- Education: A bachelor's or master's degree in Computer Science / Engineering.
- Soft Skills: Strong analytical and problem-solving skills, excellent communication and collaboration skills, and the ability to work in a fast-paced environment.
- Azure & AKS: Deep hands-on experience.
- IaC & CI/CD: Mastery of Terraform/Bicep and GitHub Actions.
- Data Engineering: Advanced Python and Spark for complex pipelines.
- ML Operations: Proven ability in model serving and monitoring.
- Problem Solver: Adept at navigating complex technical challenges and delivering solutions.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Requisition ID # 25WD89379

Position Overview
Join Autodesk as a Senior Observability Engineer driving the architecture, scale, and evolution of our global observability platform engineering team. You will lead technical strategy, platform innovation, and cross-functional collaboration to elevate telemetry across the engineering organization.

Responsibilities
- Architect scalable, secure, and resilient logging platforms across hybrid/multi-cloud
- Lead OpenTelemetry adoption with standardized instrumentation and deployment models
- Define robust onboarding strategies for cloud, hybrid, and edge telemetry sources
- Integrate observability tooling (Splunk, Dynatrace) to deliver unified insights
- Evaluate emerging technologies and develop custom observability solutions as needed
- Contribute to open-source or internal observability tooling and standards
- Drive documentation, training, and cross-team knowledge sharing
- Collaborate with app teams to embed observability into new service architectures
- Work with security teams on logging compliance, threat detection, and governance
- Partner with platform teams on CI/CD observability integration and enterprise telemetry architecture

Minimum Qualifications
- Bachelor's in CS, Engineering, or a related field
- 5-8 years in Observability, SRE, or DevOps with deep logging platform expertise
- Advanced skills in Splunk (admin, dev, architect) and OpenTelemetry in production
- Strong Python or Go development background; expert in Linux and networking
- Hands-on with AWS (EC2, ECS, Lambda, S3) and containerized environments (Kubernetes, service mesh)
- Proven experience in designing large-scale, secure observability systems

Preferred Qualifications
- Splunk Admin, Architect, or Developer certifications
- Contributions to OpenTelemetry or other open-source observability projects
- Experience with multiple platforms (Datadog, Elastic, Prometheus, New Relic)
- Exposure to machine learning in observability (e.g., predictive alerting)
- Familiarity with logging-related compliance standards (SOC2, GDPR, PCI)
- Proficient with IaC and GitOps-based deployments
- Background in multi-cloud and hybrid cloud telemetry strategies
- Technical leadership or project ownership experience
- Strong communicator with writing/speaking experience in observability forums

#LI-MR2

Learn More About Autodesk
Welcome to Autodesk! Amazing things are created every day with our software - from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk - our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you're an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us!

Salary Transparency
Salary is one part of Autodesk's competitive compensation package. Offers are based on the candidate's experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package.

Diversity & Belonging
We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive.
Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).
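
For illustration (not part of the posting): the Autodesk role above centres on OpenTelemetry adoption and standardized instrumentation. Below is a minimal sketch of manual tracing with the OpenTelemetry Python SDK, using a console exporter in place of a real backend such as Splunk or Dynatrace; the service and span names are made up.

```python
# Minimal OpenTelemetry tracing setup in Python. A console exporter stands in
# for a production backend; names and attributes are illustrative only.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "checkout-example"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def handle_request(order_id: str) -> None:
    # Each unit of work gets its own span with searchable attributes.
    with tracer.start_as_current_span("handle_request") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would go here ...

handle_request("ord-42")
```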

Posted 1 month ago

Apply

4.0 - 7.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Job Purpose and Impact
The Cloud Security Engineer will help solidify the foundation for the company's modern business applications. In this role, you will apply your knowledge of cybersecurity and cloud engineering practices to secure and operate the Infrastructure as a Service and Platform as a Service used by our data and application teams to drive business value.

Key Accountabilities
- Implement and maintain security solutions for an enterprise-scale platform.
- Lead and complete critical projects within the security engineering space.
- Identify and resolve security issues across the cloud infrastructure.
- Assess our current cloud security posture and propose innovative solutions.
- Work closely with cloud architects, engineers, and other stakeholders to integrate security solutions seamlessly into existing systems and processes.
- Create and maintain comprehensive documentation for complex security services.

Qualifications
- Minimum requirement of 2 years of relevant work experience; typically reflects 3 years or more of relevant experience.
- Experience with Infrastructure as Code (IaC) solutions such as Terraform and CloudFormation.
- Experience using CI/CD pipelines for change management and automated security testing.
- Fluent in one or more programming or scripting languages for automation.
- Strong communication and collaboration skills.
- Strong analytical problem-solving skills.
- Experience deploying services in a multi-cloud environment.
- Knowledge of networking and web protocols.
- Knowledge of security concepts (with hands-on container security).

Posted 1 month ago

Apply

5.0 - 8.0 years

6 - 10 Lacs

Chennai

Work from Office

hackajob is collaborating with Comcast to connect them with exceptional tech professionals for this role.

Cloud Engineer 3
Location: Chennai, India

Job Summary
Responsible for planning and designing new software and web applications. Analyzes, tests, and assists with the integration of new applications. Documents all development activity. Assists with training non-technical personnel. Has in-depth experience, knowledge, and skills in own discipline. Usually determines own work priorities. Acts as a resource for colleagues with less experience.

Job Description
Position: Cloud DevOps Engineer 3
Experience: 5 to 7 years
Job Location: Chennai, Tamil Nadu
HR Contact: Ramesh_M2@comcast.com

Technical Skills
Must have: Python, Terraform, Docker and Kubernetes, CI/CD, AWS, Bash, Linux/Unix, Git, DBMS (e.g. MySQL), NoSQL (e.g. MongoDB).
Good to have: Ansible, Helm, Prometheus, ELK stack, R, GCP/Azure.

Key Responsibilities
- Design, build, and maintain efficient, reusable, and reliable code.
- Work with analysis, operations, and test teams to achieve the best possible outcome within time and budget.
- Troubleshoot infrastructure issues.
- Attend cloud engineering meetings.
- Participate in code reviews and quality assurance activities.
- Participate in estimation discussions with the product team.
- Continuously improve knowledge and coding skills.

Qualifications & Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in a scripting language (e.g. Bash, Python).
- 3+ years of hands-on experience with Docker and Kubernetes.
- 3+ years of hands-on experience with CI tools (e.g. Jenkins, GitLab CI, GitHub Actions, Concourse CI, ...).
- 2+ years of hands-on experience with CD tools (e.g. ArgoCD, Helm, kustomize).
- 2+ years of hands-on experience with Linux/Unix systems.
- 2+ years of hands-on experience with cloud providers (e.g. AWS, GCP, Azure).
- 2+ years of hands-on experience with one IaC framework (e.g. Terraform, Pulumi, Ansible).
- Basic knowledge of virtualization technologies (e.g. VMware) is a plus.
- Basic knowledge of one database (MySQL, SQL Server, Couchbase, MongoDB, Redis, ...) is a plus.
- Basic knowledge of Git and one Git provider (e.g. GitLab, GitHub).
- Basic knowledge of networking.
- Experience writing technical documentation.
- Good communication and time management skills.
- Able to work independently and as part of a team.
- Analytical thinking and a problem-solving attitude.

Disclaimer
This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications.

Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law.

Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus. Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most.
That's why we provide an array of options, expert guidance, and always-on tools that are personalized to meet the needs of your reality, helping support you physically, financially, and emotionally through the big milestones and in your everyday life.

Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries, and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast.

Education
Bachelor's Degree. While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience.

Relevant Work Experience
5-7 Years

Posted 1 month ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Pune

Work from Office

Join us as a Windows Server Engineer at Barclays, responsible for supporting the successful delivery of Location Strategy projects to plan, budget, and agreed quality and governance standards. You'll spearhead the evolution of our digital landscape, driving innovation and excellence. You will harness cutting-edge technology to revolutionise our digital offerings, ensuring unparalleled customer experiences.

To be successful as a Windows Server Engineer you should have experience with:
- Technical Leadership: Able to demonstrate technical expertise for the Windows Server product and, specifically, building Ansible solutions on Windows Server. Able to make decisions both tactically and strategically, solve problems, and continuously improve teams and product offerings.
- Team Management: A clear communicator who is opinionated and able to inspire and motivate colleagues. Can demonstrate examples of good collaboration and stakeholder management within a global context.
- Technical Skills: Expertise in Windows Server operating systems, Ansible, Chef, configuration management, Jenkins, GitLab, and scripting languages (e.g. Ansible, PowerShell, Perl).
- Automation and IaC: Experience of working with Git-based repository management systems and automation tooling. Can demonstrate an engineering-first mindset and provide examples of automation delivery that improve efficiency and customer value.
- Project Management: Oversee the planning, execution, and delivery of Windows server engineering projects, ensuring they are completed on time and within budget.
- Experience: Extensive experience in a Windows engineering role; proficiency in operating systems, automation, Ansible, and IaC.
- Certifications: Relevant certifications in Microsoft technologies, automation, and project management are a plus.

Some other highly valued skills may include: line management experience, BeyondTrust EPM, Microsoft Clustering, Tanium, Chef, Bitbucket, GitLab, and Microsoft Defender for Endpoint (MDE).

You may be assessed on the key critical skills relevant for success in the role, such as risk and controls, change and transformation, business acumen, strategic thinking, and digital and technology, as well as job-specific technical skills.

This role is based in Pune.

Purpose of the role
To apply software engineering techniques, automation, and best practices in incident response, to ensure the reliability, availability, and scalability of the systems, platforms, and technology through them.

Accountabilities
- Availability, performance, and scalability of systems and services through proactive monitoring, maintenance, and capacity planning.
- Resolution, analysis, and response to system outages and disruptions, and implementation of measures to prevent similar incidents from recurring.
- Development of tools and scripts to automate operational processes, reducing manual workload, increasing efficiency, and improving system resilience.
- Monitoring and optimisation of system performance and resource usage, identifying and addressing bottlenecks, and implementing best practices for performance tuning.
- Collaboration with development teams to integrate best practices for reliability, scalability, and performance into the software development lifecycle, working closely with other teams to ensure smooth and efficient operations.
- Stay informed of industry technology trends and innovations, and actively contribute to the organisation's technology communities to foster a culture of technical excellence and growth.

Assistant Vice President Expectations
To advise and influence decision making, contribute to policy development, and take responsibility for operational effectiveness. Collaborate closely with other functions and business divisions.

Lead a team performing complex tasks, using well-developed professional knowledge and skills to deliver work that impacts the whole business function. Set objectives and coach employees in pursuit of those objectives, appraise performance relative to objectives, and determine reward outcomes. If the position has leadership responsibilities, People Leaders are expected to demonstrate a clear set of leadership behaviours to create an environment for colleagues to thrive and deliver to a consistently excellent standard. The four LEAD behaviours are: L – Listen and be authentic, E – Energise and inspire, A – Align across the enterprise, D – Develop others.

Alternatively, an individual contributor will lead collaborative assignments, guide team members through structured assignments, and identify the need for the inclusion of other areas of specialisation to complete assignments. They will identify new directions for assignments and/or projects, identifying a combination of cross-functional methodologies or practices to meet required outcomes.

- Consult on complex issues, providing advice to People Leaders to support the resolution of escalated issues.
- Identify ways to mitigate risk and develop new policies/procedures in support of the control and governance agenda.
- Take ownership for managing risk and strengthening controls in relation to the work done.
- Perform work that is closely related to that of other areas, which requires understanding of how areas coordinate and contribute to the achievement of the objectives of the organisation sub-function.
- Collaborate with other areas of work, and with business-aligned support areas, to keep up to speed with business activity and the business strategy.
- Engage in complex analysis of data from multiple internal and external sources of information (such as procedures and practices in other areas, teams, companies, etc.) to solve problems creatively and effectively.
- Communicate complex information. 'Complex' information could include sensitive information or information that is difficult to communicate because of its content or its audience.
- Influence or convince stakeholders to achieve outcomes.

All colleagues will be expected to demonstrate the Barclays Values of Respect, Integrity, Service, Excellence and Stewardship – our moral compass, helping us do what we believe is right. They will also be expected to demonstrate the Barclays Mindset – to Empower, Challenge and Drive – the operating manual for how we behave.

Posted 1 month ago

Apply

5.0 - 7.0 years

10 - 19 Lacs

Hyderabad

Work from Office

- 7+ years of experience in cloud infrastructure engineering or SRE roles.
- Deep expertise in automating infrastructure using modern DevOps and IaC practices.
- Proficient in building and maintaining CI/CD pipelines.
- Strong background in microservices architecture and Docker.
- Mid-level experience supporting Java or .NET applications.
- Hands-on experience with cloud platforms such as AWS, Azure, or GCP.
- Strong knowledge of networking, load balancing, and cloud security best practices.
- Excellent analytical, problem-solving, and communication skills.

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 9 Lacs

Pune

Work from Office

We are seeking two experienced DevOps Engineers with 3-6 years of hands-on experience. You'll play a crucial role in managing and optimizing our infrastructure, leveraging AWS, Terraform, and Ansible. The ideal candidates will have a solid background in either Development or DevOps and are committed to maintaining hands-on expertise.

Responsibilities
- Develop, deploy, and manage scalable cloud infrastructure on AWS.
- Implement Infrastructure as Code (IaC) using Terraform and manage configurations with Ansible.
- Automate CI/CD pipelines to improve deployment efficiency and minimize downtime.
- Monitor system health, troubleshoot issues, and optimize performance.

Requirements
- Hands-on experience with AWS, Terraform, and Ansible is mandatory.
- Solid understanding of DevOps principles, best practices, and automation techniques.
- Strong background in either Development or DevOps, with a proven ability to troubleshoot and solve technical issues.
- Strong scripting skills in Python, Bash, or similar languages.
- Experience with containerization technologies like Docker and Kubernetes.

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5 to 10+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Collaborate cross-functionally and automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuration and management of databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2 - 4 pm.
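
For illustration (not part of the posting): one of the responsibilities above is automating database backups with RDS snapshots. Below is a minimal boto3 sketch of that idea, assuming configured AWS credentials; the DB instance identifier and retention tag are hypothetical placeholders.

```python
# Illustrative backup automation: create a manual RDS snapshot with a
# date-stamped identifier. Assumes boto3 and AWS credentials are configured;
# the DB instance identifier below is a hypothetical placeholder.
import datetime
import boto3

rds = boto3.client("rds", region_name="ap-south-1")

def snapshot_database(db_instance_id: str) -> str:
    stamp = datetime.datetime.utcnow().strftime("%Y-%m-%d-%H%M")
    snapshot_id = f"{db_instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=db_instance_id,
        Tags=[{"Key": "retention", "Value": "30d"}],
    )
    return snapshot_id

print(snapshot_database("orders-db-example"))
```

A production setup would typically schedule this via AWS Backup or an EventBridge-triggered Lambda rather than running it by hand.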

Posted 1 month ago

Apply

8.0 - 13.0 years

7 - 17 Lacs

Pune

Hybrid

Kubernetes and IT infrastructure. Solid understanding of Linux-based systems, container runtimes, and automation.
- CI/CD pipelines for OS images intended for use on both bare-metal and virtual machines, built from an identical codebase
- Independently creating hypervisor templates and applying them
- Unattended provisioning of Kubernetes clusters onto various hypervisors/clouds
- Experience with SUSE Rancher, Microsoft Azure Kubernetes Service (AKS), Linux (Ubuntu/SUSE), SUSE Longhorn, VMware vSphere
- Infrastructure-as-Code (IaC) tools (e.g. Terraform/Terragrunt, Ansible, GitOps methodologies)
- Virtualization technologies and hosting platforms such as Azure
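
For illustration (not part of the posting): cluster provisioning and management work like the above usually includes simple health automation. Below is a minimal sketch using the official kubernetes Python client to flag nodes that are not Ready, assuming a kubeconfig is already available; nothing here is specific to Rancher or AKS.

```python
# Small cluster-automation sketch: list nodes and report any that are not
# Ready. Uses the official `kubernetes` Python client and assumes a
# kubeconfig is already in place.
from kubernetes import client, config

def not_ready_nodes() -> list[str]:
    config.load_kube_config()  # use config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    unhealthy = []
    for node in v1.list_node().items:
        ready = next(
            (c.status for c in node.status.conditions if c.type == "Ready"),
            "Unknown",
        )
        if ready != "True":
            unhealthy.append(node.metadata.name)
    return unhealthy

if __name__ == "__main__":
    print(not_ready_nodes())
```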

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Warm greetings from SP Staffing!

Role: Azure DevOps
Experience Required: 5 to 8 yrs
Work Location: Hyderabad/Pune/Bangalore/Chennai

Required Skills:
- Azure DevOps
- Terraform
- Bash/PowerShell/Python

Interested candidates can send resumes to nandhini.spstaffing@gmail.com

Posted 1 month ago

Apply

8.0 - 13.0 years

36 - 42 Lacs

Pune

Work from Office

Hiring a Kubernetes Infrastructure Specialist to lead container platform development, CI/CD automation, and cluster management. Must have 8+ years in infrastructure, with strong Kubernetes, Rancher, Terraform, Azure, Linux, VMware, and OS image build experience.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Bengaluru

Work from Office

About this role: Wells Fargo is seeking a Lead Software Engineer within the Enterprise Application & Cloud Transformation team.

In this role, you will:
- Lead complex technology Cloud initiatives, including those that are companywide with broad impact.
- Act as a key contributor in automating the provisioning of Cloud infrastructure using Infrastructure as Code.
- Make decisions in developing standards and companywide best practices for engineering and large-scale technology solutions.
- Design, optimize, and document the engineering aspects of the Cloud platform.
- Apply understanding of industry best practices and new technologies, influencing and leading the technology team to meet deliverables and drive new initiatives.
- Review and analyze complex, large-scale technology solutions in Cloud for strategic business objectives, solving technical challenges that require in-depth evaluation of multiple parameters, including intangibles or unprecedented technical factors.
- Collaborate and consult with key technical experts, the senior technology team, and external industry groups to resolve complex technical issues and achieve goals.
- Build and enable cloud infrastructure and automate the orchestration of the GCP Cloud Platform for Wells Fargo Enterprise.
- Work in a globally distributed team to provide innovative and robust Cloud-centric solutions.
- Work closely with the Product Team and vendors to develop and deploy Cloud services that meet customer expectations.

Required Qualifications:
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.
- 3+ years working with GCP and a proven track record of building complex infrastructure programmatically with IaC tools.
- Must have 2+ years of hands-on experience with the Infrastructure as Code tool Terraform and GitHub.
- Must have a professional cloud certification on GCP.
- Proficient in container-based solution services; has handled at least 2-3 large-scale Kubernetes-based infrastructure build-outs and provisioning of services on GCP. Exposure to services like GKE, Cloud Functions, Cloud Run, Cloud Build, Artifactory, etc.
- Infrastructure and automation technologies: orchestration, Harness, Terraform, Service Mesh, Kubernetes, API development, Test Driven Development.
- Sound knowledge of the following areas, with expertise in one of them:
  1. Proficient and thorough understanding of Cloud service offerings for Storage and Database.
  2. Good understanding of networking, firewalls, and load balancing concepts (IP, DNS, Guardrails, VNets), with exposure to cloud security, AD, authentication methods, and RBAC.
  3. Proficient and thorough understanding of Cloud service offerings for Data, Analytics, and AI/ML. Exposure to analytics and AI/ML services like BigQuery, Vertex AI, Dataproc, etc.
  4. Proficient and thorough understanding of Cloud service offerings for Security, Data Protection, and security policy implementation. Thorough understanding of landing zones and networking, security best practices, monitoring and logging, and risk and controls.
- Good understanding of Control Plane, Azure Arc, and Google Anthos.
- Experience working in an Agile environment and product backlog grooming against ongoing engineering work.
- Enterprise Change Management and change control; experience working within a procedural and process-driven environment.

Desired Qualifications:
- Exposure to Cloud governance and logging/monitoring tools.
- Experience with Agile, CI/CD, DevOps concepts, and SRE principles.
- Experience in scripting (Shell, Python, Go).
- Excellent verbal, written, and interpersonal communication skills; ability to articulate technical solutions to both technical and business audiences.
- Ability to deliver and engage with partners effectively in a multi-cultural environment by demonstrating co-ownership and accountability in a matrix structure.
- Delivery focus and willingness to work in a fast-paced, enterprise environment.

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Back-end development with GoLang. Expertise in Kubernetes/OpenShift and Cloud service providers. Knowledge of Generative AI and the ability to integrate AI into applications. Ability to pick up new areas based on business requirements. Excellent communication skills.

Required education
Bachelor's Degree

Required technical and professional expertise
- 6+ years of overall experience in backend development.
- Excellent understanding of system design and best practices.
- 6+ years of application development with GoLang.
- Good level of expertise in Kubernetes or OpenShift, use of Docker/Podman, and Cloud service providers.
- Good level of knowledge of CNI and container-native storage.
- Expertise in version control - Git.
- Experience using cloud technologies (AWS/GCP/Azure/IBM Cloud).
- Experience with Ansible and Shell scripting.
- Proficient in Linux administration.
- Experience with IaC (Terraform).
- Design functional DevOps application lifecycles.
- Good understanding of CI/CD pipelines such as Jenkins; should have hands-on experience writing and debugging Jenkinsfiles.
- Experience using build tools such as Maven, Gradle, Make, Ant.
- Knowledge of AI (PyTorch, TensorFlow, Scikit, Generative AI) and the ability to integrate AI functionalities into applications.

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 9 Lacs

Bengaluru

Work from Office

We're building the technological foundation for our company's Semantic Layer, a common data language powered by Anzo / Altair Graph Studio. As a Senior Software Engineer, you'll play a critical role in setting up and managing this platform on AWS EKS, enabling scalable, secure, and governed access to knowledge graphs, parallel processing engines, and ontologies across multiple domains, including highly sensitive ones like clinical trials. You'll help design and implement a multi-tenant, cost-aware, access-controlled infrastructure that supports internal data product teams in securely building and using connected knowledge graphs.

Key Responsibilities
- Implement a Semantic Layer on Anzo / Altair Graph Studio and Anzo Graph Lakehouse in a Kubernetes or ECS environment (EKS / ECS)
- Develop and manage Infrastructure as Code using Terraform and configuration management via Ansible
- Integrate platform authentication and authorization with Microsoft Entra ID (Azure AD)
- Design and implement multi-tenant infrastructure patterns that ensure domain-level isolation and secure data access
- Build mechanisms for cost attribution and usage visibility per domain and use-case team
- Implement fine-grained access control, data governance, and monitoring for domains with varying sensitivity (e.g., clinical trials)
- Automate deployment pipelines and environment provisioning for dev, test, and production environments
- Collaborate with platform architects, domain engineers, and data governance teams to curate and standardize ontologies

Minimum Requirements
- 4-9 years of experience in Software / Platform Engineering, DevOps, or Cloud Infrastructure roles
- Proficiency in Python for automation, tooling, or API integration
- Hands-on experience with AWS EKS / ECS and associated services (IAM, S3, CloudWatch, etc.)
- Strong skills in Terraform / Ansible / IaC for infrastructure provisioning and configuration
- Familiarity with RBAC, OIDC, and Microsoft Entra ID integration for enterprise IAM
- Understanding of Kubernetes multi-tenancy and security best practices
- Experience building secure and scalable platforms supporting multiple teams or domains

Preferred Qualifications
- Experience deploying or managing Anzo, Altair Graph Studio, or other knowledge graph / semantic layer tools
- Familiarity with RDF, SPARQL, or ontologies in an enterprise context
- Knowledge of data governance, metadata management, or compliance frameworks
- Exposure to cost management tools like AWS Cost Explorer / Kubecost or custom chargeback systems

Why Join Us
- Be part of a cutting-edge initiative shaping enterprise-wide data access and semantics
- Work in a cross-functional, highly collaborative team focused on responsible innovation
- Influence the architecture and strategy of a foundational platform from the ground up
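
For illustration (not part of the posting): the multi-tenant, domain-isolated access described above is often expressed as per-domain IAM policies. Below is a hedged Python sketch that builds an S3 policy document scoped to a single tenant prefix; the bucket and tenant names are hypothetical, and a real deployment would manage such policies through Terraform rather than ad-hoc scripts.

```python
# Build an IAM policy document that restricts a domain team to its own S3
# prefix. Bucket and tenant names are hypothetical placeholders.
import json

def tenant_s3_policy(bucket: str, tenant: str) -> str:
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "TenantPrefixReadWrite",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{tenant}/*",
            },
            {
                "Sid": "ListOwnPrefixOnly",
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{bucket}",
                "Condition": {"StringLike": {"s3:prefix": [f"{tenant}/*"]}},
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(tenant_s3_policy("semantic-layer-data", "clinical-trials"))
```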

Posted 1 month ago

Apply

5.0 - 6.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5 to 6+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools like AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Collaborate cross-functionally and automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker; deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies; use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules; implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuration and management of databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: face-to-face for candidates residing in Hyderabad; Zoom for other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2 - 4 pm.

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Hyderabad / Secunderabad, Telangana, Telangana, India

On-site

- Experience with ITSM processes such as incident, change, and problem management, including escalation processes.
- Documentation of technical runbooks and RCAs.
- Working knowledge of JIRA / ServiceNow / Remedy.
- 3-5 years of professional experience working on the Azure cloud platform (any public cloud is also acceptable, but microservice-based application experience is a must).
- Production experience with containers and container orchestration tools (Docker / Kubernetes / Helm / FluxCD).
- Experience working with enterprise-scale applications, API gateways, high-availability architectures, load balancing, and disaster recovery.
- Knowledge of application and cloud security (HTTP, TLS, certificate management).
- Experience setting up CI/CD pipelines, including production deployments, using GitHub (process knowledge of any other CI/CD tool such as Azure DevOps, CircleCI, or Jenkins is fine, with the ability to correlate across platforms or tools).
- Good troubleshooting skills and an approach that drills down to the root cause quickly.
- Experience building and working with logging and metrics solutions using Grafana, Prometheus, and Azure Application Insights (any other tool such as Azure Monitor, Dashboards, or Workbooks is also fine, with the ability to switch and contribute).
- Provisioning, enablement, and support of various Azure services covering compute, storage, networks, and integration services.
- Experience with IaC automation using PowerShell and ARM templates.
- Demonstrated maturity with modern infrastructure practices.
- Fair / basic understanding of (and preferably experience with) debugging applications in at least one coding language (e.g. Java/Python/.NET/NodeJS/Go).
- Knowledge of at least one package/dependency management system (e.g. Maven / Gradle / NPM / Composer / Yarn).
- In-depth experience with Git, including development and feature-based workflows, and at least one Git solution (e.g. GitHub / GitLab / Gerrit / Bitbucket).

Posted 1 month ago

Apply

3.0 - 5.0 years

3 - 5 Lacs

Delhi, India

On-site

- Experience with ITSM processes such as incident, change, and problem management, including escalation processes.
- Documentation of technical runbooks and RCAs.
- Working knowledge of JIRA / ServiceNow / Remedy.
- 3-5 years of professional experience working on the Azure cloud platform (any public cloud is also acceptable, but microservice-based application experience is a must).
- Production experience with containers and container orchestration tools (Docker / Kubernetes / Helm / FluxCD).
- Experience working with enterprise-scale applications, API gateways, high-availability architectures, load balancing, and disaster recovery.
- Knowledge of application and cloud security (HTTP, TLS, certificate management).
- Experience setting up CI/CD pipelines, including production deployments, using GitHub (process knowledge of any other CI/CD tool such as Azure DevOps, CircleCI, or Jenkins is fine, with the ability to correlate across platforms or tools).
- Good troubleshooting skills and an approach that drills down to the root cause quickly.
- Experience building and working with logging and metrics solutions using Grafana, Prometheus, and Azure Application Insights (any other tool such as Azure Monitor, Dashboards, or Workbooks is also fine, with the ability to switch and contribute).
- Provisioning, enablement, and support of various Azure services covering compute, storage, networks, and integration services.
- Experience with IaC automation using PowerShell and ARM templates.
- Demonstrated maturity with modern infrastructure practices.
- Fair / basic understanding of (and preferably experience with) debugging applications in at least one coding language (e.g. Java/Python/.NET/NodeJS/Go).
- Knowledge of at least one package/dependency management system (e.g. Maven / Gradle / NPM / Composer / Yarn).
- In-depth experience with Git, including development and feature-based workflows, and at least one Git solution (e.g. GitHub / GitLab / Gerrit / Bitbucket).

Posted 1 month ago

Apply

8.0 - 15.0 years

8 - 15 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

- Comprehend and operate within a deployed SaaS environment, grasping custom-built services, application data flow, and system requirements
- Automate the provisioning of cloud resources using Infrastructure as Code (IaC) tools such as Azure Resource Manager, Azure Runbooks, or Terraform
- Configure and manage cloud services, including virtual machines, serverless functions, databases, storage, networking, and monitoring/alerting systems
- Develop and maintain scripts and automation tools using languages like PowerShell to streamline cloud operations
- Collaborate with development and Cloud Operations teams to ensure the smooth deployment and integration of applications in the cloud environment
- Monitor and optimize cloud resource utilization and performance
- Implement and uphold cloud security best practices, including access controls, data encryption, and adherence to industry standards and regulations
- Troubleshoot and resolve cloud infrastructure issues, utilizing log analytics data, and working in close coordination with relevant engineering departments
- Be willing to provide on-call duty

Qualifications:
- Only candidates with extensive hands-on experience across the full spectrum of Microsoft Azure components will be considered for this position
- In-depth knowledge of maintaining and securing highly available cloud infrastructure solutions, using Microsoft Azure services, is essential
- Proficient in containerization and orchestration, including Azure Kubernetes Service (AKS)
- Proficiency in Microsoft Windows Server administration, Active Directory, Identity Management, Microsoft IIS, and PowerShell scripting is mandatory
- A solid understanding of cryptographic technologies (TLS, IPsec, TDE) and familiarity with enterprise EDR/VMDR systems are required
- Practical knowledge of network administration (TCP/IP, DNS, VLAN, NSG, WAF, L4/L7 load balancing, S2S/P2S VPN, etc.) is necessary
- Experience in managing multi-tenant SaaS applications is highly desirable
- Knowledge of penetration testing, vulnerability scanning, and SIEM/CM systems is strongly preferred
- An understanding of SDLC, CI/CD, Azure DevOps, and release management is advantageous
- Relevant Microsoft certifications are beneficial but not mandatory
- The ability to work in a rapidly evolving environment, effectively multitask, and balance multiple priorities while maintaining a high level of customer satisfaction is required

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Job Summary: We are looking for a Billing Operations Engineer who will be responsible for designing, developing, and maintaining automations for chargebacks, billing, IAM, and related workflows. The role involves integrating with enterprise systems, securing and streamlining financial operations, and developing custom solutions to handle large-scale billing and authorization tasks efficiently.

Key Responsibilities:
- Automate chargeback and billing processes to streamline financial operations and reduce manual intervention.
- Develop and implement IAM automation workflows to manage roles, permissions, and policies across platforms.
- Integrate with third-party systems, payments, and enterprise platforms through APIs and custom adapters.
- Develop scripts, modules, and services to handle routine tasks, reconciliation, and policy enforcement.
- Utilize Infrastructure as Code (IaC) and scripting to manage and deploy components efficiently.
- Provide scalable solutions for billing reports, usage data, pricing, and reconciliation.
- Monitor and troubleshoot automated workflows and resolve issues promptly.
- Support internal teams by developing self-service tools, CLI, or UI components for financial operations and IAM.
- Prepare and maintain technical and operational documentation.
- Collaborate with stakeholders (finance, operations, and security teams) to gather requirements and implement solutions.

Required Skills & Experience:
- Experience designing and developing automations for financial operations, chargebacks, or billing.
- Proficiency in Python, Shell scripting, or other scripting languages.
- Familiarity with IAM concepts, roles, policies, and permissions (RBAC, AWS IAM, Azure AD, or GCP IAM).
- Experience integrating with third-party services and APIs.
- Familiarity with Infrastructure as Code (Terraform, CloudFormation) and automation tools (Ansible, Chef, or Puppet).
- Ability to develop and debug API handlers, webhook processors, and custom adapters.
- Familiarity with financial workflows, pricing models, reconciliation, and invoicing.
- Experience with cloud-native services (AWS, Azure, GCP) is a plus.
- Strong problem-solving skills and a passion for automation.

Preferred Qualifications:
- Experience with billing platforms, payment processors, or financial reconciliation.
- Familiarity with scripting against IAM policy documents and roles.
- Experience designing self-service portals or automation UIs for internal stakeholders.
- Familiarity with Docker, Kubernetes, and microservice architecture.
- Collaborative mindset, strong communication skills, and ability to gather requirements from stakeholders.
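
For illustration (not part of the posting): the chargeback automation described above boils down to rolling tagged usage up into per-team costs. Below is a minimal Python sketch of that idea with hypothetical record formats and unit rates; a real pipeline would read from a billing export or API.

```python
# Roll tagged usage records up into a per-team chargeback. Record format and
# rates are hypothetical; a real pipeline would pull data from a cost export.
from collections import defaultdict

RATES = {"compute_hours": 0.045, "storage_gb": 0.023}  # illustrative unit prices

def chargeback(records):
    totals = defaultdict(float)
    for rec in records:
        rate = RATES.get(rec["metric"], 0.0)
        totals[rec["team"]] += rec["quantity"] * rate
    return dict(totals)

usage = [
    {"team": "payments", "metric": "compute_hours", "quantity": 1200},
    {"team": "payments", "metric": "storage_gb", "quantity": 500},
    {"team": "search", "metric": "compute_hours", "quantity": 300},
]
print(chargeback(usage))  # {'payments': 65.5, 'search': 13.5}
```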

Posted 1 month ago

Apply

7.0 - 12.0 years

40 - 50 Lacs

Gurugram, Bengaluru

Hybrid

Our team of 250 colleagues is at the heart of The Economist Group's digital-first agenda. Together, we are delivering user-friendly, high-quality digital products that bring data, intelligence, and analysis to our growing global audiences across all four of our businesses. Our team develops cutting-edge products that provide valuable insights and analysis to business and government leaders worldwide. Whether it involves creating new products for the Economist Intelligence Unit, conceptualising mobile applications that deliver personalised content, or enhancing our flagship website, economist.com, the Technology team plays a crucial role in shaping how we attract, convert, engage, and retain our 1.2 million subscribers. The Technology team collaborates closely with other teams to engineer innovative solutions that cater to the evolving needs of our subscribers. This includes leveraging advanced technologies and methodologies to enhance the acquisition of new subscribers, optimize conversion rates, deliver engaging user experiences, and ensure long-term subscriber loyalty.

The role: We are recruiting an experienced Staff Site Reliability Engineer to join our newly established TechOps division within the Technology department. We maintain the systems that keep our products running smoothly around the world, 24x7 - supporting everything from cloud infrastructure and CI/CD pipelines to observability and incident response.

How you will contribute in this role:
- Define and implement best practices for system reliability, observability, monitoring, and alerting.
- Build and manage automation for our AWS cloud-based services and SaaS stack. Continuously reduce operational toil.
- Drive end-to-end observability across our web and mobile applications, cloud infrastructure, firewalls, and CDNs.
- Diagnose infrastructure failures, performance bottlenecks, and production issues through strong debugging skills.
- Work closely with Service Delivery Managers to drive incident management processes, including postmortems and root cause analysis, and with application teams and platform engineers to improve reliability and performance.
- Participate in on-call rotations, ensuring rapid incident response across our stack.
- Take ownership of SLAs/SLOs/SLIs and commit to continuous improvement of service levels across all platforms.
- Improve system resilience and minimize MTTR (mean time to recovery) through incident response automation.

What we're looking for:
- 10 years of professional experience as a Site Reliability Engineer or in a Cloud Operations/DevOps role.
- 5+ years in a production environment supporting large-scale, mission-critical applications - including web, mobile, and e-commerce/payment applications.
- Proficiency in one or more programming/scripting languages (e.g., Python, Golang, TypeScript).
- In-depth knowledge of observability tools (e.g., New Relic, Prometheus, Grafana).
- Professional experience with cloud platforms (AWS strongly preferred), including serverless functions, API gateways, relational and NoSQL databases, and caching.
- Strong experience with container orchestration (ECS, Kubernetes), CI/CD pipelines, and infrastructure-as-code (AWS CDK, Terraform, Pulumi, etc.).
- An advanced degree in software / data engineering, computer / information science, or a related quantitative field, or equivalent work experience.
- Strong verbal and written communication skills and the ability to work well with a wide range of stakeholders.
- Strong ownership, scrappy and biased for action.
Preferred Qualifications: Experience with chaos engineering and game days. Background in security and compliance (SOC 2, ISO 27001, etc.). Contributions to open-source SRE tools or community involvement. Benefits We offer excellent benefits including an incentive scheme, generous annual and parental leave policies, volunteering days and well-being support throughout the year, as well as free access to all Economist content. Country specific benefits are also offered. Our Values Our values are a collective set of beliefs and behaviours that strengthen The Economist Group's purpose and demonstrate where we want to be as an organisation. They reflect on our mission to pursue progress for individuals, organisations and the world. Independence We are not bound to any party or interest and encourage exploration and free-thinking. We champion freedom, both within our organisation and around the world. Integrity We are bold in our efforts to uncover the truth and stand up for what we believe in. We inspire trust through our rigour, fact-checking and transparency. Excellence We aspire to the highest standards in all we do. We are ambitious and inquisitive in our pursuit of continuous progress and innovation. Inclusivity We value diversity in thought and background and encourage healthy debate with a breadth of perspectives. We treat our colleagues and customers fairly and respectfully. Openness We foster a collaborative and empathetic culture conducive to the interests, wit and initiative of our colleagues. New ideas are our lifeblood. The Economist Group values diversity. We are committed to equal opportunities and creating an inclusive environment for all our colleagues and potential colleagues regardless of ethnic origin, national origin, gender, gender identity, race, colour, religious beliefs, disability, sexual orientation, age, marital status or any other status. #LI-Hybrid
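
For illustration (not part of the posting): the SRE role above owns SLOs and error budgets. Below is a small Python sketch of the standard error-budget arithmetic for an availability SLO; the figures are made up.

```python
# Error-budget arithmetic for an availability SLO: with a 99.9% target, the
# budget is 0.1% of requests in the window. All numbers are illustrative.
def error_budget_report(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    allowed_failures = total_requests * (1.0 - slo_target)   # total error budget
    consumed = failed_requests / allowed_failures if allowed_failures else float("inf")
    return {
        "availability": 1.0 - failed_requests / total_requests,
        "budget_allowed_failures": allowed_failures,
        "budget_consumed_fraction": consumed,   # > 1.0 means the SLO is blown
    }

# Example: 10,000,000 requests in the window, 6,500 failures, 99.9% target:
# availability ~0.99935, budget ~10,000 failures, ~65% of the budget consumed.
print(error_budget_report(0.999, 10_000_000, 6_500))
```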

Posted 1 month ago

Apply

4.0 - 6.0 years

8 - 14 Lacs

Gurugram

Hybrid

JD for Azure Cloud Engineer

We are seeking a skilled and passionate Azure Cloud Engineer with 4-5 years of experience, responsible for designing, implementing, and maintaining secure and scalable cloud infrastructure, and adept at deploying and managing network solutions in cloud environments.

Key Responsibilities

System Administration
- Administer and manage users, groups, and roles for seamless identity and access management (IAM).
- Provision, configure, and monitor Azure virtual machines, storage, and networking components.
- Familiarity with backup and OS patching in Azure infrastructure.
- Stay updated with the latest Azure roles and responsibilities and recommend their implementation to improve efficiency.
- Integrate applications with Azure databases, storage, and other services.

Networking Concepts
- Design and implement VNETs, VNET peering, ExpressRoute, VNG, service endpoints and private endpoints, VPN, application gateways, and load balancers in Azure environments.
- Configure Azure Load Balancer and Traffic Manager to optimize traffic distribution.
- Implement network security groups (NSGs) and Azure Firewall.

Automation
- Develop infrastructure as code (IaC) leveraging cloud-native tooling to ensure automated and consistent platform deployments.
- Familiarity with deploying infrastructure using Terraform and Azure DevOps, and source code management using Git.

Monitoring
- Monitor and optimize application performance in Azure, preferably using native technologies, i.e. Azure Monitor and Azure Log Analytics.

About IT Convergence
IT Convergence Professional Services Pvt. Ltd. is a global Oracle Platinum Partner with a comprehensive service offering across all three pillars of the Cloud (IaaS, PaaS, SaaS), including Consulting/Advisory, Private Cloud (Hosting), Managed Services, Integration, Business Intelligence/Analytics, Development, Testing, Training, and Change Management services. We've created value for over 1,100 customers globally, including one third of Fortune 500 companies.

India | USA | Canada | Mexico | Argentina | Brazil | China

Posted 1 month ago

Apply

5.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Job Summary: We are seeking a highly skilled and experienced Senior Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions. Responsibilities: Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS). Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability. Implement robust access control via IAM roles and policy orchestration, ensuring least-privilege and auditability across multi-environment deployments. Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments. Support deployment of infrastructure lambda functions. Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment. Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management. Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments. Ensure auditability and observability of pipeline states. Implement security best practices, audit, and compliance requirements within the infrastructure. Engage with clients to understand their technical and business requirements, and provide tailored solutions. If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads. Troubleshoot and resolve complex infrastructure issues. Qualifications: 6+ years of experience in Infrastructure Engineering or similar role. Extensive experience with Amazon Web Services (AWS). Proven ability to architect for scale, availability, and high-performance workloads. Deep knowledge of Infrastructure as Code (IaC) with Terraform. Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD). Solid understanding of git, branching models, CI/CD pipelines and deployment strategies. Experience with security, audit, and compliance best practices. Excellent problem-solving and analytical skills. Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders. Experience in technical mentoring, team-forming and fostering self-organization and ownership. Experience with client relationship management and project planning. Certifications: Relevant certifications (e.g., Kubernetes Certified Administrator, AWS Certified Machine Learning Engineer - Associate, AWS Certified Data Engineer - Associate, AWS Certified Developer - Associate, etc.). Software development experience (e.g., Terraform, Python). Experience/Exposure with machine learning infrastructure. Education: B.Tech/BE in computer sciences, a related field or equivalent experience.

Posted 1 month ago

Apply

5.0 - 8.0 years

11 - 12 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps practices and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
Bachelor's degree in Computer Science, Engineering, or a related field.
5 to 8+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
Use AWS services such as EC2, VPC, S3, IAM, RDS, and Route 53.
Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
Automate build, test, and deployment processes for Java applications.
Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
Containerize Java apps using Docker. Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
Implement monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
Manage access with IAM roles/policies. Use AWS Secrets Manager / Parameter Store for managing credentials. Enforce security best practices, encryption, and audits.
Automate backups for databases and services using AWS Backup, RDS snapshots, and S3 lifecycle rules (an illustrative sketch follows this listing). Implement Disaster Recovery (DR) strategies.
Work closely with development teams to integrate DevOps practices (cross-functional collaboration).
Document pipelines, architecture, and troubleshooting runbooks.
Monitor and optimize AWS resource usage using AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
Experience working on Linux-based infrastructure.
Excellent understanding of Ruby, Python, Perl, and Java.
Configuring and managing databases such as MySQL and MongoDB.
Excellent troubleshooting skills.
Selecting and deploying appropriate CI/CD tools.
Working knowledge of various tools, open-source technologies, and cloud services.
Awareness of critical concepts in DevOps and Agile principles.
Managing stakeholders and external interfaces.
Setting up tools and required infrastructure.
Defining and setting development, testing, release, update, and support processes for DevOps operation.
Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for candidates from other states.
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034.
Time: 2-4 PM.
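To illustrate the backup-automation responsibility referenced above, here is a minimal, hedged Python sketch using boto3 that takes a timestamped manual RDS snapshot of each instance. Instance selection, snapshot naming, and scheduling are assumptions for illustration; a production setup would also handle retention and error reporting.

```python
# Minimal sketch: take a timestamped manual snapshot of every RDS instance.
# Assumes boto3 is installed and AWS credentials/region are configured.
# In practice this would typically run on a schedule (e.g. EventBridge + Lambda)
# and be paired with a retention/cleanup policy.
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds")
stamp = datetime.now(timezone.utc).strftime("%Y%m%d-%H%M")

for db in rds.describe_db_instances()["DBInstances"]:
    instance_id = db["DBInstanceIdentifier"]
    snapshot_id = f"{instance_id}-manual-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    print(f"Started snapshot {snapshot_id}")
```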

Posted 1 month ago

Apply

4.0 - 9.0 years

9 - 19 Lacs

Noida, Hyderabad

Work from Office

Job Title: Sr. RPA & Automation Engineer

Job Summary: We are seeking a highly experienced and versatile RPA & Automation Engineer with over 7 years of progressive experience to drive our automation initiatives. The ideal candidate will possess deep expertise in Robotic Process Automation (RPA) platforms like UiPath and Microsoft Power Automate, coupled with a strong background in infrastructure automation using scripting and orchestration tools. This role requires a blend of technical prowess, architectural understanding, and leadership capabilities to design, develop, implement, and manage robust and scalable automation solutions across our enterprise.

Responsibilities:

RPA Development & Leadership (60%):
Lead the design, development, testing, and deployment of complex RPA solutions using UiPath and Microsoft Power Automate.
Drive the adoption of best practices, coding standards, and reusability within the RPA development lifecycle.
Conduct feasibility studies, process analysis, and solution design for new automation opportunities.
Mentor and provide technical guidance to junior RPA developers, fostering a culture of continuous learning and improvement.
Collaborate with business stakeholders to understand requirements, define scope, and ensure delivered solutions meet business needs.
Manage and prioritize a pipeline of automation projects, ensuring timely delivery within budget and quality standards.
Oversee the maintenance, monitoring, and optimization of existing RPA automations, troubleshooting issues and implementing enhancements.

Infrastructure Automation & Scripting (20%):
Design and implement infrastructure automation solutions using scripting languages (e.g., PowerShell, Python, Bash) to automate routine IT operations, provisioning, and configuration management (an illustrative sketch follows this listing).
Integrate RPA solutions with IT infrastructure components and systems using APIs and various automation techniques.
Utilize orchestration tools (e.g., Azure DevOps, Jenkins, Ansible, Terraform) to build and manage automated deployment pipelines for both RPA and infrastructure components.
Ensure the security, scalability, and maintainability of automated infrastructure processes.

Architectural & Strategic Contribution (20%):
Contribute to the development and evolution of the enterprise automation strategy and roadmap.
Evaluate new technologies and tools in the automation space to recommend adoption where appropriate.
Define and enforce governance models for automation development and deployment.
Collaborate with IT Operations, Cybersecurity, and other relevant teams to ensure seamless integration and operational stability of automation solutions.
Participate in architectural reviews and provide expert guidance on solution design.

Required Skills & Qualifications:
Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
7+ years of hands-on experience in Robotic Process Automation (RPA) development and implementation.
Deep expertise in UiPath, including advanced knowledge of UiPath Orchestrator, Studio, Activities, and REFramework.
Extensive experience with Microsoft Power Automate (desktop flows, cloud flows, UI flows, and the Power Automate portal).
Proven experience in designing and implementing infrastructure automation using scripting languages (e.g., PowerShell, Python, Bash).
Strong understanding and practical experience with orchestration tools (e.g., Azure DevOps, Jenkins, Ansible, Terraform).
Experience with API integrations (REST, SOAP) for connecting RPA and automation solutions with various enterprise systems.
Solid understanding of IT infrastructure concepts (servers, networks, databases, cloud platforms - Azure/AWS/GCP).
Familiarity with version control systems (e.g., Git).
Excellent problem-solving, analytical, and critical thinking skills.
Strong communication (written and verbal) and interpersonal skills, with the ability to effectively collaborate with technical and non-technical stakeholders.
Ability to work independently and as part of a team in a fast-paced, dynamic environment.
Demonstrated leadership capabilities and experience mentoring junior team members.

Preferred Skills (Bonus Points):
Relevant certifications in UiPath (e.g., UiPath Advanced RPA Developer), Microsoft Power Automate, or cloud platforms.
Experience with other RPA tools (e.g., Automation Anywhere, Blue Prism).
Knowledge of artificial intelligence (AI) and machine learning (ML) concepts and their application in automation.
Experience with process mining tools.
Agile/Scrum development methodology experience.
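As a hedged illustration of the scripting and REST-integration work described above, the following Python sketch triggers an automation job over HTTP and polls its status. The base URL, token variable, and response fields are hypothetical placeholders, not part of any specific RPA product's API.

```python
# Minimal sketch: trigger an automation job over REST and poll its status.
# The base URL, auth token, and response fields below are hypothetical
# placeholders; a real integration (e.g. with an RPA orchestrator) would use
# that product's documented API instead.
import os
import time

import requests

BASE_URL = "https://automation.example.com/api"      # hypothetical endpoint
TOKEN = os.environ.get("AUTOMATION_API_TOKEN", "")   # hypothetical token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def run_job(job_name: str, timeout_s: int = 300, poll_s: int = 10) -> str:
    """Start a named job and poll until it finishes or times out."""
    resp = requests.post(f"{BASE_URL}/jobs", json={"name": job_name},
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    job_id = resp.json()["id"]  # hypothetical response field

    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = requests.get(f"{BASE_URL}/jobs/{job_id}",
                              headers=HEADERS, timeout=30).json()["status"]
        if status in ("Succeeded", "Failed"):
            return status
        time.sleep(poll_s)
    raise TimeoutError(f"Job {job_id} did not finish within {timeout_s}s")


if __name__ == "__main__":
    print(run_job("nightly-reconciliation"))
```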

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
