
1191 VPC Jobs

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Hyderabad

On-site

Summary
Location: Hyderabad. Work on the AWS platform managing SAP workloads and develop automation scripts using AWS services. Support a 24x7 environment and be ready to learn newer technologies.

About the Role – Major Accountabilities
- Solve incidents and perform changes in the AWS Cloud environment; own and drive incidents through to resolution.
- Bring extensive knowledge of performance troubleshooting and capacity management.
- Champion the standardization and simplification of AWS operations across various services, including S3, EC2, EBS, Lambda, networking, NACLs, Security Groups and others.
- Prepare and run internal AWS projects: identify critical integration points and dependencies, propose solutions for key gaps, and provide effort estimations while ensuring alignment with business and other teams.
- Assure consistency and traceability between user requirements, functional specifications, Agile ways of working (adapting to DevSecOps), architectural roadmaps, regulatory/control requirements, and a smooth transition of solutions to operations.
- Deliver assigned project work on time, within budget and on quality, adhering to the release calendars.
- Work in a dynamic environment supporting users across the globe; be a team player. Weekend on-call duties apply as needed.

Minimum Requirements
- Bachelor's degree in a business or technical domain; AWS Cloud certifications/trainings.
- Able to handle OS security vulnerabilities and administer patches and upgrades.
- More than 5 years of relevant professional IT experience in the related technical area.
- Proven experience handling AWS Cloud workloads, preparing Terraform scripts and running pipelines.
- Excellent troubleshooting skills; independently able to solve P1/P2 incidents.
- Working knowledge of DR, clustering, SUSE Linux and the associated tools within the AWS ecosystem.
- Knowledge of handling SAP workloads is an added advantage.
- Extensive monitoring experience and prior work in a 24x7 environment.
- Experience installing and setting up SAP environments in AWS Cloud: EC2 instance setup, EBS and EFS setup, S3 configuration, alert configuration in CloudWatch.
- Management of extending filesystems and adding new HANA instances; capacity/consumption management; managing AWS Cloud accounts along with VPCs, subnets and NAT.
- Good knowledge of NACLs and Security Groups, usage of CloudFormation and automation pipelines, and Identity and Access Management (IAM), including creating and managing Multi-Factor Authentication.
- Good understanding of ITIL v4 principles; able to work in a complex 24x7 environment.
- Proven track record of broad industry experience and an excellent understanding of complex enterprise IT landscapes and relationships.

Why consider Novartis? Our purpose is to reimagine medicine to improve and extend people’s lives, and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people. It is our associates who drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network
Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture
Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards
Division: Operations
Business Unit: CTS
Location: India
Site: Hyderabad (Office)
Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited
Functional Area: Technology Transformation
Job Type: Full time
Employment Type: Regular
Shift Work: No
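The alert-configuration requirement in this listing lends itself to a short illustration. Below is a minimal, hedged Python sketch using boto3 that creates a CloudWatch CPU alarm for an EC2 instance hosting an SAP workload. The alarm name, instance ID, region and SNS topic are hypothetical placeholders, not values from the posting.

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="sap-hana-host-high-cpu",        # hypothetical alarm name
    AlarmDescription="CPU > 80% for 10 minutes on the SAP host",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    Statistic="Average",
    Period=300,                                # 5-minute datapoints
    EvaluationPeriods=2,                       # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],  # placeholder SNS topic
)

The same call can be driven from Terraform instead; the boto3 form simply keeps the sketch in one scripting language.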

Posted 11 hours ago

Apply

8.0 years

28 - 30 Lacs

Hyderabad

On-site

Experience: 8+ years
Budget: 30 LPA (including variable pay)
Location: Bangalore, Hyderabad, Chennai (hybrid)
Shift Timing: 2 PM - 11 PM

ETL Development Lead (8+ years)
Responsibilities:
- Lead and mentor a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development.
- Design complex data integration solutions using Talend and AWS.
- Collaborate with stakeholders to define project scope, timelines and deliverables; contribute to project planning, risk assessment and mitigation strategies; ensure adherence to project timelines and quality standards.
- Design, develop and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
- Build and maintain robust, scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines and integration processes.
- Collaborate with data analysts, data engineers and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data flow diagrams and ETL processes.
- Stay up to date with the latest Talend features, best practices and industry trends; participate in code reviews and contribute to the establishment of development standards.

Required skills:
- Strong understanding of ETL/ELT concepts, data warehousing principles and database technologies.
- Proficiency with Talend Studio, Talend Administration Center/TMC and other Talend components.
- Experience with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP) and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data quality checks and error handling within ETL processes.
- Familiarity with job scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical and communication skills; ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services, e.g., EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
- Understanding of AWS data integration services, e.g., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology or a related field.
- Talend certifications (e.g., Talend Certified Developer); AWS Certified Cloud Practitioner / Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- Basic working knowledge of Terraform and GitLab is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Work Location: In person
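As a hedged illustration of the AWS data-integration services this role names, the following Python sketch starts an AWS Glue job with boto3 and polls it to completion. The job name and region are hypothetical; a Talend-orchestrated pipeline would be driven differently, and this shows only the AWS-side pattern.

import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")

# Kick off a hypothetical Glue job and poll until it reaches a terminal state.
run = glue.start_job_run(JobName="orders-etl")   # placeholder job name
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="orders-etl", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        print("Job finished with state:", state)
        break
    time.sleep(30)   # Glue runs are long-lived; poll sparingly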

Posted 11 hours ago

Apply

8.0 years

28 - 30 Lacs

Pune

On-site

Experience: 8+ years
Budget: 30 LPA (including variable pay)
Location: Bangalore, Hyderabad, Chennai (hybrid)
Shift Timing: 2 PM - 11 PM

ETL Development Lead (8+ years)
Responsibilities:
- Lead and mentor a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development.
- Design complex data integration solutions using Talend and AWS.
- Collaborate with stakeholders to define project scope, timelines and deliverables; contribute to project planning, risk assessment and mitigation strategies; ensure adherence to project timelines and quality standards.
- Design, develop and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
- Build and maintain robust, scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines and integration processes.
- Collaborate with data analysts, data engineers and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data flow diagrams and ETL processes.
- Stay up to date with the latest Talend features, best practices and industry trends; participate in code reviews and contribute to the establishment of development standards.

Required skills:
- Strong understanding of ETL/ELT concepts, data warehousing principles and database technologies.
- Proficiency with Talend Studio, Talend Administration Center/TMC and other Talend components.
- Experience with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP) and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data quality checks and error handling within ETL processes.
- Familiarity with job scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical and communication skills; ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services, e.g., EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
- Understanding of AWS data integration services, e.g., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology or a related field.
- Talend certifications (e.g., Talend Certified Developer); AWS Certified Cloud Practitioner / Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- Basic working knowledge of Terraform and GitLab is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Work Location: In person

Posted 11 hours ago

Apply

5.0 years

2 - 5 Lacs

Mumbai

On-site

Company Description
Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents, looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity.

Job Description
About the Role
As a DevOps Engineer you will work closely with our global teams to learn about the business and technical requirements and formulate the necessary infrastructure and resource plans to properly support the growth and maintainability of various systems.

Key Responsibilities
- Implement a diverse set of development, testing and automation tools, as well as manage IT infrastructure.
- Plan the team structure and activities, and actively participate in project management.
- Comprehend customer requirements and project Key Performance Indicators (KPIs).
- Manage stakeholders and handle external interfaces effectively.
- Set up essential tools and infrastructure to support project development.
- Define and establish DevOps processes for development, testing, release, updates and support.
- Possess the technical expertise to review, verify and validate software code developed in the project.
- Engage in software engineering tasks, including designing and developing systems to enhance reliability, scalability and operational efficiency through automation.
- Collaborate closely with agile teams to ensure they have the necessary tools for seamless code writing, testing and deployment, promoting satisfaction among development and QA teams.
- Monitor processes throughout their lifecycle, ensuring adherence, identifying areas for improvement and minimizing wastage.
- Advocate and implement automated processes whenever feasible.
- Identify and deploy cybersecurity measures by continuously performing vulnerability assessments and managing risk.
- Handle incident management and conduct root cause analysis for continuous improvement.
- Coordinate and communicate effectively within the team and with customers.
- Build and maintain continuous integration (CI) and continuous deployment (CD) environments, along with associated processes and tools.

Qualifications
About the Candidate
- Proven 5 years of experience with Linux-based infrastructure and proficiency in a scripting language.
- Solid cloud computing skills, such as network management, cloud computing and cloud databases, in at least one of the public clouds (AWS, Azure or GCP).
- Hands-on experience in setting up and managing cloud infrastructure such as Kubernetes, VPC, VPN, virtual machines, cloud databases, etc.
- Experience with IaC (Infrastructure as Code) tools like Ansible and Terraform.
- Hands-on experience in coding and scripting in at least one of the following: Shell, Python, Groovy.
- Experience as a DevOps Engineer or in a similar software engineering role.
- Experience establishing an optimized CI/CD environment relevant to the project.
- Automation using scripting languages like Perl/Python and shells like Bash and csh.
- Good knowledge of configuration and build tools like Bazel, Jenkins, etc.
- Good knowledge of repository management tools like Git, Bitbucket, etc.
- Good knowledge of monitoring solutions and generating insights for reporting.
- Excellent debugging skills and strategies.
- Excellent communication skills.
- Experience working in an Agile environment.

Additional Information
Benefits
At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include:
- Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home.
- Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements.
- Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion.
- Events: Regular team and organisation-wide get-togethers and events.
- Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together.

Future development
At Quantanite, you'll have a personal development plan to help you improve in the areas you're looking to develop in over the coming years. Your manager will dedicate time and resources to supporting you in getting to the next level. You'll also have the opportunity to progress internally. As a fast-growing organisation, our teams are growing, and you'll have the chance to take on more responsibility over time. So, if you're looking for a career full of purpose and potential, we'd love to hear from you!
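The IaC requirement in this listing can be made concrete with a small automation sketch. This hedged Python example wraps terraform plan with its documented -detailed-exitcode flag, whose exit codes (0 = no changes, 1 = error, 2 = pending changes) make configuration drift scriptable; the surrounding workflow is an assumption, not taken from the posting.

import subprocess
import sys

# Run a read-only plan; -detailed-exitcode makes drift machine-readable.
result = subprocess.run(
    ["terraform", "plan", "-input=false", "-detailed-exitcode"],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("Infrastructure matches the code. Nothing to do.")
elif result.returncode == 2:
    print("Pending changes detected:\n" + result.stdout)
else:
    sys.exit("terraform plan failed:\n" + result.stderr)

A script like this is a common building block in a CI job that flags drift on a schedule rather than applying changes automatically.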

Posted 11 hours ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Description: We are seeking a highly skilled and motivated Google Cloud Engineer to join our dynamic engineering team. In this role, you will be instrumental in designing, building, deploying and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). You will work closely with development, operations and security teams to ensure our cloud environment is scalable, secure, highly available and cost-optimized. If you are passionate about cloud-native technologies, automation and solving complex infrastructure challenges, we encourage you to apply.

What You Will Do
- Design, implement and manage robust, scalable and secure cloud infrastructure on Google Cloud Platform (GCP) using Infrastructure as Code (IaC) tools like Terraform.
- Deploy, configure and manage core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub and networking components (VPC, Cloud Load Balancing, Cloud CDN).
- Develop and maintain CI/CD pipelines for automated deployment and release management using tools like Cloud Build, GitLab CI/CD, GitHub Actions or Jenkins.
- Implement and enforce security best practices within the GCP environment, including IAM, network security, data encryption and compliance adherence.
- Monitor cloud infrastructure and application performance, identify bottlenecks and implement solutions for optimization and reliability.
- Troubleshoot and resolve complex infrastructure and application issues in production and non-production environments.
- Collaborate with development teams to ensure applications are designed for cloud-native deployment, scalability and resilience.
- Participate in on-call rotations for critical incident response and provide timely resolution to production issues.
- Create and maintain comprehensive documentation for cloud architecture, configurations and operational procedures.
- Stay current with new GCP services, features and industry best practices, proposing and implementing improvements as appropriate.
- Contribute to cost optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

What Experience You Need
- Bachelor's or Master's degree in Computer Science, Software Engineering or a related field.
- 6+ years of hands-on experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework and SQL Server.
- 3+ years of experience with cloud platforms (GCP preferred), including designing and deploying cloud-native applications.
- 3+ years of experience with source code management (Git), CI/CD pipelines and Infrastructure as Code.
- Strong experience with JavaScript and a modern JavaScript framework, VueJS preferred.
- Proven ability to lead and mentor development teams.
- Strong understanding of microservices architecture and serverless computing.
- Experience with relational databases (SQL Server, PostgreSQL).
- Excellent problem-solving, analytical and communication skills.
- Experience working in Agile/Scrum environments.

What Could Set You Apart
- GCP Cloud Certification.
- UI development experience (e.g., HTML, JavaScript, Angular, Bootstrap).
- Experience in Agile environments (e.g., Scrum, XP).
- Relational database experience (e.g., SQL Server, PostgreSQL).
- Experience with Atlassian tooling (e.g., JIRA, Confluence and GitHub).
- Working knowledge of Python.
- Excellent problem-solving and analytical skills and the ability to work well in a team.
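Among the core GCP services this listing names, Pub/Sub is easy to show in miniature. The following hedged Python sketch publishes a message with the official google-cloud-pubsub client; the project ID, topic name and attribute are hypothetical placeholders.

from google.cloud import pubsub_v1

# Publish a message to a hypothetical topic; project and topic are placeholders.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "deploy-events")

future = publisher.publish(
    topic_path,
    b"deployment finished",        # payload must be bytes
    environment="staging",         # optional attributes are plain strings
)
print("Published message id:", future.result())  # blocks until the server acks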

Posted 11 hours ago

Apply

8.0 years

28 - 30 Lacs

Chennai

On-site

Experience: 8+ years
Budget: 30 LPA (including variable pay)
Location: Bangalore, Hyderabad, Chennai (hybrid)
Shift Timing: 2 PM - 11 PM

ETL Development Lead (8+ years)
Responsibilities:
- Lead and mentor a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development.
- Design complex data integration solutions using Talend and AWS.
- Collaborate with stakeholders to define project scope, timelines and deliverables; contribute to project planning, risk assessment and mitigation strategies; ensure adherence to project timelines and quality standards.
- Design, develop and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
- Build and maintain robust, scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines and integration processes.
- Collaborate with data analysts, data engineers and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data flow diagrams and ETL processes.
- Stay up to date with the latest Talend features, best practices and industry trends; participate in code reviews and contribute to the establishment of development standards.

Required skills:
- Strong understanding of ETL/ELT concepts, data warehousing principles and database technologies.
- Proficiency with Talend Studio, Talend Administration Center/TMC and other Talend components.
- Experience with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP) and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data quality checks and error handling within ETL processes.
- Familiarity with job scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical and communication skills; ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services, e.g., EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
- Understanding of AWS data integration services, e.g., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology or a related field.
- Talend certifications (e.g., Talend Certified Developer); AWS Certified Cloud Practitioner / Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- Basic working knowledge of Terraform and GitLab is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Work Location: In person

Posted 11 hours ago

Apply

0 years

0 - 0 Lacs

Coimbatore

On-site

We are seeking AWS Cloud DevOps Engineers who will be part of the Engineering team, collaborating with software development, quality assurance and IT operations teams to deploy and maintain production systems in the cloud. This role requires an engineer who is passionate about provisioning and maintaining reliable, secure and scalable production systems. We are a small team of highly skilled engineers and look forward to adding a new member who wishes to advance their career through continuous learning. Selected candidates will be an integral part of a team of passionate and enthusiastic IT professionals, and will have tremendous opportunities to contribute to the success of the products.

What you will do
The ideal candidate will be responsible for:
- Deploying, automating, maintaining, managing and monitoring an AWS production system, including software applications and cloud-based infrastructure.
- Monitoring system performance and troubleshooting issues.
- Engineering solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS).
- Using DevOps principles and methodologies to enable the rapid deployment of software and services by coordinating software development, quality assurance and IT operations.
- Making sure AWS production systems are reliable, secure and scalable.
- Creating and enforcing policies related to AWS usage, including sample tagging, instance type usage and data storage.
- Resolving problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques.
- Automating different operational processes by designing, maintaining and managing tools.
- Providing primary operational support and engineering for all Cloud and Enterprise deployments.
- Leading the organisation's platform security efforts by collaborating with the core engineering team.
- Designing, building and maintaining containerization using Docker, and managing container orchestration with Kubernetes.
- Setting up monitoring, alerting and logging tools (e.g., Zabbix) to ensure system reliability.
- Collaborating with development, QA and operations teams to design and implement CI/CD pipelines with Jenkins.
- Developing policies, standards and guidelines for IaC and CI/CD that teams can follow.
- Automating and optimizing infrastructure tasks using tools like Terraform, Ansible or CloudFormation.
- Supporting InfoSec scans and compliance audits.
- Ensuring security best practices in the cloud environment, including IAM management, security groups and network firewalls.
- Contributing to the optimization of system performance and cost.
- Promoting knowledge-sharing activities within and across different product teams by creating and engaging in communities of practice and through documentation, training and mentoring.
- Keeping skills up to date through ongoing self-directed training.

What skills are required
- Ability to learn new technologies quickly.
- Ability to work both independently and in collaborative teams to communicate design and build ideas effectively.
- Problem-solving and critical-thinking skills, including the ability to organize, analyze, interpret and disseminate information.
- Excellent spoken and written communication skills.
- Must be able to work as part of a diverse team, as well as independently.
- Ability to follow departmental and organizational processes and meet established goals and deadlines.
- Knowledge of EC2 (Auto Scaling, Security Groups), VPC, SQS, SNS, Route 53, RDS, S3, ElastiCache, IAM and the CLI.
- Server setup/configuration (Tomcat, nginx).
- Experience with AWS, including EC2, S3, CloudTrail and APIs.
- Solid understanding of EC2 On-Demand, Spot Market and Reserved Instances.
- Knowledge of Infrastructure as Code tools, including Terraform, Ansible or CloudFormation.
- Knowledge of scripting and automation using Python, Bash or Perl to automate AWS tasks.
- Knowledge of code deployment tools such as Ansible and CloudFormation scripts.
- Basic knowledge of network architecture, DNS and load balancing.
- Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes.
- Understanding of monitoring and logging tools (e.g., Zabbix).
- Familiarity with version control systems (Git).
- Knowledge of microservices architecture and deployment.
- Bachelor's degree in Engineering or Master's degree in Computer Science.

Note: Only candidates who graduated in 2023 or 2024 can apply for this internship. This is an internship-to-hire position, and candidates who complete the internship will be offered a full-time position based on performance.
Job Types: Full-time, Permanent, Fresher, Internship
Contract length: 6 months
Pay: ₹5,500.00 - ₹7,000.00 per month
Schedule: Day shift, Monday to Friday, Morning shift
Expected Start Date: 01/07/2025
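The tag-policy duty in this listing ("creating and enforcing policies related to AWS usage, including sample tagging") can be sketched concretely. This hedged boto3 example audits EC2 instances for a hypothetical set of required tags; the tag names and region are assumptions, not values from the posting.

import boto3

REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}  # hypothetical tag policy

ec2 = boto3.client("ec2", region_name="ap-south-1")

# Walk every instance and report the ones missing a required tag.
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tags
            if missing:
                print(instance["InstanceId"], "is missing tags:", sorted(missing))

Run on a schedule (for example via Lambda or a cron job), a report like this is usually the first step before any automated enforcement such as stopping or quarantining untagged instances.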

Posted 11 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools
Must haves:
- Cloud Platforms: AWS, Azure
- CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
- Infrastructure as Code: Terraform
- Containerization: Kubernetes (EKS/AKS), Docker, Helm
- Logging & Monitoring: AWS CloudWatch, Azure Monitor
- Configuration & Automation: Ansible, Bash
- Incident & ITSM: ServiceNow or equivalent
- Certification: relevant AWS and Azure certifications
Good to have:
- Cloud Infrastructure: CloudFormation, ARM Templates
- Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
- Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
- Scripting: Python/Bash
- Observability: OpenTelemetry, Datadog, Splunk
- Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
- Enthusiastic learners with a passion for cloud technologies and DevOps practices.
- Problem solvers with a proactive approach to troubleshooting and optimization.
- Team players who can collaborate effectively in a remote or hybrid work environment.
- Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.
- Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
- Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
- Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
- Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
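To make the log-monitoring responsibility in this listing concrete, here is a minimal, hedged boto3 sketch that pulls the last hour of ERROR events from a CloudWatch Logs group; the log group name and region are hypothetical placeholders.

import time
import boto3

logs = boto3.client("logs", region_name="us-east-1")

# Pull the last hour of ERROR lines from a hypothetical application log group.
one_hour_ago_ms = int((time.time() - 3600) * 1000)
paginator = logs.get_paginator("filter_log_events")
for page in paginator.paginate(
    logGroupName="/app/payments",        # placeholder log group
    filterPattern="ERROR",
    startTime=one_hour_ago_ms,
):
    for event in page["events"]:
        print(event["timestamp"], event["message"].rstrip())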

Posted 11 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About Us: People Tech Group is a leading Enterprise Solutions, Digital Transformation, Data Intelligence, and Modern Operations services provider. The company was started in 2006 in Redmond, Washington, USA, and has since expanded to India, where we are based out of Hyderabad, Bangalore, Pune and Chennai with an overall strength of 1500+ employees. We have a presence across four countries: the US, Canada, India and Costa Rica. In a recent development, the company was acquired by Quest Global, one of the world's largest engineering solutions providers, with 20,000+ employees, 70+ global delivery service centers and headquarters in Singapore. Going forward, we are all part of Quest Global.

Position: DevOps Engineer
Company: People Tech Group
Experience: 5 yrs
Location: Bengaluru

Job Description
Key Responsibilities:
- Provision and secure cloud infrastructure using Terraform / AWS CloudFormation.
- Fully automate GitLab CI/CD pipelines for application builds, tests and deployment, integrated with Docker containers and AWS ECS/EKS.
- Build continuous integration workflows with automated security checks, testing and performance validation.
- Provide a self-service developer portal with access to system health, deployment status, logs and documentation for a seamless developer experience.
- Create AWS CloudWatch dashboards and alarms for real-time monitoring of system health, performance and availability.
- Centralize logging via CloudWatch Logs for application performance and troubleshooting.
- Maintain complete documentation for all automated systems, infrastructure code, CI/CD pipelines and monitoring setups.
- Monitoring with Splunk: ability to create dashboards and alerts, integrating with tools like MS Teams.

Required Skills:
- Master's or bachelor's degree in Computer Science/IT or equivalent.
- Expertise in shell scripting.
- Familiarity with operating systems: Windows and Linux.
- Experience with Git for version control.
- Ansible: good to have.
- Familiarity with CI/CD pipelines: GitLab.
- Docker, Kubernetes, OpenShift: strong in Kubernetes administration.
- Experience with Infrastructure as Code: Terraform and AWS CloudFormation.
- Familiarity with AWS services like EC2, Lambda, Fargate, VPC, S3, ECS, EKS.
- Nice to have: familiarity with observability and monitoring tools like OpenTelemetry, Grafana, the ELK stack and Prometheus.
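The GitLab-to-ECS deployment flow described in this listing typically ends in a service rollout. As a hedged sketch of that final step, the boto3 snippet below forces a new deployment of an ECS service and waits for it to stabilize; the cluster and service names are placeholders, not values from the posting.

import boto3

ecs = boto3.client("ecs", region_name="ap-south-1")

CLUSTER = "web-cluster"     # hypothetical cluster name
SERVICE = "web-service"     # hypothetical service name

# Roll the service onto fresh tasks (e.g., after pushing a new image tag).
ecs.update_service(cluster=CLUSTER, service=SERVICE, forceNewDeployment=True)

# Wait until the deployment settles and the service is stable again.
waiter = ecs.get_waiter("services_stable")
waiter.wait(cluster=CLUSTER, services=[SERVICE])
print(SERVICE + " redeployed and stable.")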

Posted 12 hours ago

Apply

7.0 - 15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Senior Cloud DevOps Engineer
Experience: 7-15 years
Notice Period: Immediate to 15 days
Location: Hyderabad

We are seeking a highly skilled GCP DevOps Engineer to join our dynamic team.

Job Description
- Deep GCP services mastery: profound understanding and hands-on experience with core GCP services (Compute Engine, Cloud Run, Cloud Storage, VPC, IAM, Cloud SQL, BigQuery, Cloud Operations Suite).
- Infrastructure as Code (IaC) and configuration management: expertise in Terraform for GCP, and proficiency with tools like Ansible for automating infrastructure provisioning and management.
- CI/CD pipeline design and automation: skill in building and managing sophisticated CI/CD pipelines (e.g., using Cloud Build, Jenkins, GitLab CI) for applications and infrastructure on GCP.
- Containerisation and orchestration: advanced knowledge of Docker and extensive experience deploying, managing and scaling applications on Cloud Run and/or Google Kubernetes Engine (GKE).
- API management and gateway proficiency: experience with API design, security and lifecycle management, utilizing tools like Google Cloud API Gateway or Apigee for robust API delivery.
- Advanced monitoring, logging and observability: expertise in implementing and utilizing comprehensive monitoring solutions (e.g., Google Cloud Operations Suite, Prometheus, Grafana) for proactive issue detection and system insight.
- DevSecOps and GCP security best practices: strong ability to integrate security into all stages of the DevOps lifecycle, implement GCP security best practices (IAM, network security, data protection) and ensure compliance.
- Scripting and programming for automation: proficiency in scripting languages (Python, Bash, Go) to automate operational tasks, build custom tools and manage infrastructure programmatically.
- GCP networking design and management: in-depth understanding of GCP networking (VPC, load balancing, DNS, firewalls) and the ability to design secure and scalable network architectures.
- Application deployment strategies and microservices on GCP: knowledge of various deployment techniques (blue/green, canary) and experience deploying and managing microservices architectures within the GCP ecosystem.
- Leadership, mentorship and cross-functional collaboration: proven ability to lead and mentor DevOps teams, drive technical vision and effectively collaborate with development, operations and security teams.
- System architecture, performance optimization and troubleshooting: strong skills in designing scalable and resilient systems on GCP, identifying and resolving performance bottlenecks, and complex troubleshooting across the stack.

Regards,
ValueLabs
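The canary deployment technique named in this listing boils down to sampling a small slice of traffic before promoting a release. The following hedged, tool-agnostic Python sketch polls a canary health endpoint and computes an error rate; the URL, sampling window and 5% threshold are illustrative assumptions, not part of the posting.

import time
import requests

CANARY_URL = "https://canary.example.com/healthz"   # placeholder endpoint

# Sample the canary for five minutes (30 probes, 10 seconds apart).
failures = 0
for _ in range(30):
    try:
        ok = requests.get(CANARY_URL, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    failures += (not ok)
    time.sleep(10)

error_rate = failures / 30
print("canary error rate: {:.1%}".format(error_rate))
if error_rate > 0.05:
    print("Error budget exceeded: roll the canary back instead of promoting it.")

In practice the promote/rollback decision would be wired into the pipeline (Cloud Build, Jenkins, etc.) rather than printed, but the shape of the check is the same.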

Posted 12 hours ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are looking for a Senior DevOps Engineer to join our Life Sciences & Healthcare DevOps team. This is an exciting opportunity to work on cutting-edge Life Sciences and Healthcare products in a DevOps environment. If you love coding in Python or any scripting language, have experience with Linux, and ideally have worked in a cloud environment, we’d love to hear from you! We specialize in container orchestration, Terraform, Datadog, Jenkins, Databricks, and various AWS services. If you have experience in these areas, we’d be eager to connect with you.

About You – Experience, Education, Skills, and Accomplishments
- At least 7+ years of professional software development experience and 5+ years as a DevOps Engineer or with a similar skill set, with experience on various CI/CD and configuration management tools, e.g., Jenkins, Maven, Gradle, Spinnaker, Docker, Packer, Ansible, CloudFormation, Terraform, or similar CI/CD orchestrator tools.
- At least 3+ years of AWS experience managing resources in some subset of the following services: S3, ECS, RDS, EC2, IAM, OpenSearch Service, Route 53, VPC, CloudFront, Glue and Lambda.
- 5+ years of experience with Bash/Python scripting.
- Wide knowledge of operating system administration, programming languages, cloud platform deployment and networking protocols.
- Willingness to be on-call as needed for critical production issues.
- Good understanding of SDLC, patching, releases and basic systems administration activities.

It would be great if you also had
- AWS Solution Architect certifications.
- Python programming experience.

What will you be doing in this role?
- Design, develop and maintain the product's cloud infrastructure architecture, including microservices, as well as developing infrastructure-as-code and automated scripts for building or deploying workloads in various environments through CI/CD pipelines.
- Collaborate with the rest of the Technology engineering team, the cloud operations team and application teams to provide end-to-end infrastructure setup.
- Design and deploy secure, resilient and scalable Infrastructure as Code per our developer requirements while upholding the InfoSec and infrastructure guardrails through code.
- Keep up with industry best practices, trends and standards; identify automation opportunities and design and develop automation solutions that improve operations, efficiency, security and visibility.
- Take ownership of and accountability for the performance, availability, security and reliability of the product(s) running across public cloud and multiple regions worldwide.
- Document solutions and maintain technical specifications.

Product you will be developing
The products rely on container orchestration (AWS ECS, EKS), Jenkins, various AWS services (such as OpenSearch, S3, IAM, EC2, RDS, VPC, Route 53, Lambda, CloudFront), Databricks, Datadog and Terraform, and you will be working to support the development team building them.

About the Team
The Life Sciences & Healthcare Content DevOps team mainly focuses on DevOps operations on production infrastructure related to Life Sciences & Healthcare Content products. Our team consists of five members and reports to the DevOps Manager. As a team we provide DevOps support for almost 40+ different application products internal to Clarivate, which are the source for customer-facing products. We are also responsible for the change process on the production environment, and for incident management and monitoring. The team also handles customer-raised and internal user service requests.

Hours of Work
Shift timing: 12 PM to 9 PM.
Must provide on-call support during non-business hours each week, based on team bandwidth.
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
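Production ops automation of the kind this team describes often includes a pre-change safety snapshot. As a hedged sketch, the boto3 snippet below snapshots an RDS instance and waits for it to complete before a release proceeds. The identifiers, region and the choice of RDS are illustrative assumptions, and the waiter name should be checked against the boto3 docs for the installed version.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Hypothetical identifiers; replace with real instance/snapshot names.
resp = rds.create_db_snapshot(
    DBInstanceIdentifier="content-prod-db",
    DBSnapshotIdentifier="content-prod-db-pre-release",
)
print("Snapshot status:", resp["DBSnapshot"]["Status"])  # typically "creating"

# Block until the snapshot is available before proceeding with the release.
waiter = rds.get_waiter("db_snapshot_completed")
waiter.wait(DBSnapshotIdentifier="content-prod-db-pre-release")
print("Snapshot available; safe to proceed with the change.")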

Posted 12 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Kanayannur, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and to mentoring junior engineers.

Your Key Responsibilities
- Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
- Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
- Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
- Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
- Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
- Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
- Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
- Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
- Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
- Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
- Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
- Familiarity with scripting languages such as Bash and Python.
- Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
- Container orchestration and management using Kubernetes, Helm, and Docker.
- Experience with configuration management and automation tools such as Ansible.
- Strong understanding of cloud security best practices, IAM policies, and compliance standards.
- Experience with ITSM tools like ServiceNow for incident and change management.
- Strong documentation and communication skills.

To qualify for the role, you must have
- 3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
- Hands-on expertise in AWS and Azure environments.
- Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
- Experience in a 24x7 rotational support model.
- Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools
Must haves
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications
Good to have
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
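Much of the CloudWatch log-monitoring work this role describes reduces to querying logs programmatically. A minimal sketch of that pattern, assuming boto3 is configured; the log group name and one-hour window are illustrative assumptions, not part of the posting:

    # Sketch: count recent ERROR entries in a CloudWatch Logs group.
    import time
    import boto3

    logs = boto3.client("logs", region_name="us-east-1")

    def recent_error_count(log_group: str, minutes: int = 60) -> int:
        now_ms = int(time.time() * 1000)
        start_ms = now_ms - minutes * 60 * 1000
        count, token = 0, None
        while True:
            kwargs = dict(
                logGroupName=log_group,
                filterPattern="ERROR",
                startTime=start_ms,
                endTime=now_ms,
            )
            if token:
                kwargs["nextToken"] = token
            resp = logs.filter_log_events(**kwargs)
            count += len(resp["events"])
            token = resp.get("nextToken")
            if not token:
                return count

    print(recent_error_count("/app/payments-prod"))  # hypothetical log group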

Posted 12 hours ago

Apply

3.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools
Must haves
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications
Good to have
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
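For the Kubernetes (EKS/AKS) support duties listed above, routine health checks are often scripted. A minimal sketch using the official kubernetes Python client, assuming a valid kubeconfig is already in place; the "unhealthy" definition here is an illustrative simplification:

    # Sketch: list pods that are not in the Running or Succeeded phase.
    # Assumes `pip install kubernetes` and a kubeconfig for EKS or AKS alike.
    from kubernetes import client, config

    def unhealthy_pods() -> list[tuple[str, str, str]]:
        config.load_kube_config()  # use load_incluster_config() inside a cluster
        v1 = client.CoreV1Api()
        bad = []
        for pod in v1.list_pod_for_all_namespaces(watch=False).items:
            phase = pod.status.phase
            if phase not in ("Running", "Succeeded"):
                bad.append((pod.metadata.namespace, pod.metadata.name, phase))
        return bad

    if __name__ == "__main__":
        for ns, name, phase in unhealthy_pods():
            print(f"{ns}/{name}: {phase}")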

Posted 12 hours ago

Apply

0 years

0 Lacs

Mumbai Metropolitan Region

Remote


Are you ready to make your mark with a true industry disruptor? ZineOne, a subsidiary of Session AI, the pioneer of in-session marketing, is looking to add talented team members to help us grow into the premier revenue tool for e-commerce. We work with some of the leading brands nationwide and we innovate how brands connect with and convert customers.

Job Description
This position offers a hands-on, technical opportunity as a vital member of the Site Reliability Engineering group. Our SRE team is dedicated to ensuring that our cloud platform operates seamlessly, efficiently, and reliably at scale. The ideal candidate will bring over five years of experience managing cloud-based big data solutions, with a strong commitment to resolving operational challenges through automation and sophisticated software tools. Candidates must uphold a high standard of excellence and possess robust communication skills, both written and verbal. A strong customer focus and deep technical expertise in areas such as Linux, automation, application performance, databases, load balancers, networks, and storage systems are essential.

Key Responsibilities
As a Session AI SRE, you will:
Design and implement solutions that enhance the availability, performance, and stability of our systems, services, and products.
Develop, automate, and maintain infrastructure as code for provisioning environments in AWS, Azure, and GCP.
Deploy modern automated solutions that enable automatic scaling of the core platform and features in the cloud.
Apply cybersecurity best practices to safeguard our production infrastructure.
Collaborate on DevOps automation, continuous integration, test automation, and continuous delivery for the Session AI platform and its new features.
Manage data engineering tasks to ensure accurate and efficient data integration into our platform and outbound systems.
Apply expertise in DevOps best practices, shell scripting, Python, Java, and other programming languages, while continually exploring new technologies for automation solutions.
Design and implement monitoring tools for service health, including fault detection, alerting, and recovery systems.
Oversee business continuity and disaster recovery operations.
Create and maintain operational documentation, focusing on reducing operational costs and enhancing procedures.
Demonstrate a continuous learning attitude and a commitment to exploring emerging technologies.

Preferred Skills
Experience with cloud platforms such as AWS, Azure, and GCP, including their management consoles and CLIs.
Proficiency in building and maintaining infrastructure on:
AWS, using services such as EC2, S3, ELB, VPC, CloudFront, Glue, Athena, etc.
Azure, using services such as Azure VMs, Blob Storage, Azure Functions, Virtual Networks, Azure Active Directory, Azure SQL Database, etc.
GCP, using services such as Compute Engine, Cloud Storage, Cloud Functions, VPC, Cloud IAM, BigQuery, etc.
Expertise in Linux system administration and performance tuning.
Strong programming skills in Python, Bash, and NodeJS.
In-depth knowledge of container technologies such as Docker and Kubernetes.
Experience with real-time big data platforms, including architectures such as HDFS/HBase, ZooKeeper, and Kafka.
Familiarity with central logging systems such as ELK (Elasticsearch, Logstash, Kibana).
Competence in implementing monitoring solutions using tools like Grafana, Telegraf, and InfluxDB.

Benefits
Competitive salary package and stock options.
Opportunity for continuous learning.
Fully sponsored EAP services.
Excellent work culture.
Opportunity to be an integral part of our growth story and grow with our company.
Health insurance for employees and dependents.
Flexible work hours.
Remote-friendly company.
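Service-health automation of the kind this role describes often starts as a small stdlib-only Python probe before it is wired into Grafana or InfluxDB. A minimal, hedged sketch; the endpoint URL and the 90% disk threshold are illustrative assumptions:

    # Sketch: probe an HTTP health endpoint and local disk usage; exit non-zero on failure.
    import shutil
    import sys
    import urllib.request

    HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical endpoint
    DISK_LIMIT = 0.90                             # hypothetical threshold

    def check_http(url: str, timeout: float = 3.0) -> bool:
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def check_disk(path: str = "/") -> bool:
        usage = shutil.disk_usage(path)
        return usage.used / usage.total < DISK_LIMIT

    if __name__ == "__main__":
        failures = []
        if not check_http(HEALTH_URL):
            failures.append("http")
        if not check_disk():
            failures.append("disk")
        print("OK" if not failures else f"FAIL: {','.join(failures)}")
        sys.exit(1 if failures else 0)

The non-zero exit code is the design point: it lets a cron job, Telegraf exec input, or alerting wrapper treat the probe as a pass/fail signal.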

Posted 12 hours ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Role: Network Wireless Engineer
Experience Range: 5 to 10 years
Job Location: Hyderabad

Job Description
Roles and responsibilities:
Experience working with Cisco Aironet 4800, 3800, and 2800 Access Points.
Strong knowledge of Cisco 5520 and 5508 Wireless LAN Controllers.
Strong understanding of Cisco Prime and ISE infrastructure.
Knowledge of the Mobility Services Engine (MSE AIR-MSE-3300 or above).
Switching models: Cisco 6500/6900, Nexus 7K, 5K, 2K, vPC, FEX, VDC, Cisco VSS, Cisco Catalyst 4500, 3800, 3500, 2900.
Strong knowledge of switching technologies: STP, trunking, EtherChannel, HSRP, VRRP, LACP, PAgP.

Roles/Responsibilities
CCNA Wireless certified.
Experience in Cisco Wireless LAN configuration.
Configuration of lightweight access point solutions, including CleanAir and AVC/QoS.
Experience in managing Cisco Wireless LAN Controllers.
Thorough understanding of 802.11-based wireless architecture.
Experience working with Cisco MSE models.
Good knowledge of MSE Context Aware Services (CAS) and the Adaptive Wireless Intrusion Prevention System (wIPS).
Experience in network (route/switch) engineering.
Experience in wireless and security engineering.
Strong working knowledge of installing, configuring, and supporting Cisco router/switch and wireless LAN hardware products.
Cisco knowledge of network design, operational support, and hands-on implementation and configuration of routers, hubs, switches, controllers, access points, and cabling in a large enterprise LAN/WAN/WLAN environment.

Posted 13 hours ago

Apply

5.0 years

0 Lacs

Delhi, India

On-site


Direct Face-to-Face TCS Interview: Hiring for AWS Terraform, Delhi – Yamuna Park
Experience: 5 to 8 Years Only
Job Location: Delhi
Venue: Delhi – Yamuna Park

Required Technical Skill Set
7+ years of experience with strong expertise in Terraform and Amazon Web Services. Responsible for designing, building, and maintaining scalable infrastructure using Infrastructure as Code (IaC) principles, with a focus on automation, security, and performance.

Key Responsibilities:
Design and implement Infrastructure as Code (IaC) using Terraform.
Develop and maintain scalable, resilient, and secure cloud environments in AWS.
Automate deployment pipelines and support CI/CD workflows.
Collaborate with development and operations teams to ensure high availability and performance.
Implement best practices for cloud security, backup, and disaster recovery.
Maintain documentation of infrastructure, processes, and policies.

Technical Skills:
4+ years of hands-on experience with Terraform.
EC2, S3, RDS, VPC, IAM, Lambda, CloudWatch, Control Tower.
Experience with CI/CD tools such as Jenkins and GitHub Actions.
Strong understanding of networking concepts and security in the cloud.
Exposure to other clouds such as Azure and GCP.
Proficiency with a scripting language (Python, PowerShell, Bash).

Kind Regards,
Priyankha M
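Deployment pipelines of the kind this role automates frequently drive the Terraform CLI from a script. A minimal sketch under stated assumptions: the terraform binary is on PATH, and the working directory (hypothetical name) holds a valid configuration:

    # Sketch: run a non-interactive Terraform init/plan, as a CI step might.
    import subprocess

    def run(cmd: list[str], cwd: str) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, cwd=cwd, check=True)  # raises on non-zero exit

    def plan(workdir: str = "infra/") -> None:  # hypothetical directory
        run(["terraform", "init", "-input=false"], workdir)
        run(["terraform", "plan", "-input=false", "-out=tfplan"], workdir)
        # A later, gated job would run: terraform apply -input=false tfplan

    if __name__ == "__main__":
        plan()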

Posted 13 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Eviden, part of the Atos Group, with an annual revenue of circa €5 billion, is a global leader in data-driven, trusted and sustainable digital transformation. As a next-generation digital business with worldwide leading positions in digital, cloud, data, advanced computing and security, it brings deep expertise for all industries in more than 47 countries. By uniting unique high-end technologies across the full digital continuum with 47,000 world-class talents, Eviden expands the possibilities of data and technology, now and for generations to come.

Role Overview
The Senior Tech Lead - AWS Data Engineering leads the design, development and optimization of data solutions on the AWS platform. The jobholder has a strong background in data engineering, cloud architecture, and team leadership, with a proven ability to deliver scalable and secure data systems.

Responsibilities
Lead the design and implementation of AWS-based data architectures and pipelines.
Architect and optimize data solutions using AWS services such as S3, Redshift, Glue, EMR, and Lambda.
Provide technical leadership and mentorship to a team of data engineers.
Collaborate with stakeholders to define project requirements and ensure alignment with business goals.
Ensure best practices in data security, governance, and compliance.
Troubleshoot and resolve complex technical issues in AWS data environments.
Stay updated on the latest AWS technologies and industry trends.

Key Technical Skills & Responsibilities
10+ years of overall experience in IT.
Minimum 5-7 years in design and development of cloud data platforms using AWS services.
Must have experience designing and developing data lake, data warehouse, and data analytics solutions using AWS services such as S3, Lake Formation, Glue, Athena, EMR, Lambda, and Redshift.
Must be familiar with AWS access control and data security features such as VPC, IAM, Security Groups, and KMS.
Must be good with Python and PySpark for data pipeline building.
Must have data modeling experience, including S3 data organization.
Must have an understanding of Hadoop components, NoSQL databases, graph databases, and time series databases, and the AWS services available for those technologies.
Must have experience working with structured, semi-structured, and unstructured data.
Must have experience with streaming data collection and processing; Kafka experience is preferred.
Experience migrating data warehouse / big data applications to AWS is preferred.
Must be able to use Gen AI services (like Amazon Q) for productivity gain.

Eligibility Criteria
Bachelor’s degree in Computer Science, Data Engineering, or a related field.
Extensive experience with AWS data services and tools.
AWS certification (e.g., AWS Certified Data Analytics - Specialty).
Experience with machine learning and AI integration in AWS environments.
Strong understanding of data modeling, ETL/ELT processes, and cloud integration.
Proven leadership experience in managing technical teams.
Excellent problem-solving and communication skills.

Our Offering
Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment.
Wellbeing programs and work-life balance, with integration and passion-sharing events.
Attractive salary and company initiative benefits.
Courses and conferences.
Hybrid work culture.

Let’s grow together.
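Pipeline work on a platform like this typically pairs PySpark with S3-backed storage. A minimal, hedged sketch of the raw-to-curated pattern; the bucket paths, column names, and dedup key are purely illustrative, and a Spark environment (e.g., EMR or Glue) with S3 access is assumed:

    # Sketch: read raw JSON events from S3 and write them back partitioned by date.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("events-to-curated").getOrCreate()

    raw = spark.read.json("s3://example-raw-bucket/events/")        # hypothetical path
    curated = (
        raw.withColumn("event_date", F.to_date("event_timestamp"))  # hypothetical column
           .dropDuplicates(["event_id"])                            # hypothetical key
    )

    (curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-curated-bucket/events/"))            # hypothetical path

    spark.stop()

Partitioning by date is what lets downstream Athena or Redshift Spectrum queries prune S3 objects instead of scanning the whole dataset.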

Posted 13 hours ago

Apply

0 years

0 Lacs

Lucknow, Uttar Pradesh, India

On-site


Role and Responsibilities
Provide third-level support for AWS WorkSpaces, including advanced troubleshooting and problem resolution.
Configure and manage AWS WorkSpaces, including provisioning, scaling, and optimizing performance.
Lead the deployment, configuration, and management of AWS WorkSpaces environments.
Design and implement solutions to improve the performance, scalability, and reliability of AWS WorkSpaces.
Serve as a technical expert and mentor for L1 and L2 support teams.
Develop and maintain documentation for AWS WorkSpaces configurations, processes, and procedures.
Participate in root cause analysis and post-incident reviews to identify and address recurring issues.
Evaluate and recommend new technologies and tools to enhance AWS WorkSpaces deployments.
Monitor AWS WorkSpaces performance metrics in CloudWatch.
Well versed in ITSM tools such as ServiceNow.
ITIL-aware (incident/change/problem management).

Relevant Technology
AWS WorkSpaces
Amazon EC2
AWS CloudWatch
AWS CloudFormation
Amazon VPC
IAM (Identity and Access Management)
Windows and Linux operating systems
Networking fundamentals (DNS, TCP/IP, VPN)
Automation and scripting (e.g., Python, PowerShell, Bash)
Monitoring and logging tools (e.g., CloudWatch, Splunk)

Required Certifications
AWS Certified Solutions Architect – Professional
AWS Certified DevOps Engineer – Professional
Certified Information Systems Security Professional (CISSP)
Microsoft Certified: Azure Solutions Architect Expert (optional)
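Third-level WorkSpaces support often leans on the WorkSpaces API for fleet-wide checks. A minimal sketch, assuming boto3 credentials; the region is hypothetical, and treating anything other than AVAILABLE as worth a look is a simplification (stopped WorkSpaces may be perfectly healthy):

    # Sketch: report WorkSpaces that are not in the AVAILABLE state.
    import boto3

    ws = boto3.client("workspaces", region_name="us-east-1")

    def not_available() -> list[tuple[str, str, str]]:
        flagged = []
        for page in ws.get_paginator("describe_workspaces").paginate():
            for w in page["Workspaces"]:
                if w["State"] != "AVAILABLE":
                    flagged.append((w["WorkspaceId"], w.get("UserName", "?"), w["State"]))
        return flagged

    if __name__ == "__main__":
        for wid, user, state in not_available():
            print(f"{wid} ({user}): {state}")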

Posted 13 hours ago

Apply

2.0 - 4.0 years

10 - 20 Lacs

Mumbai

Work from Office


DevOps Engineer
Congratulations, you have taken the first step towards bagging a career-defining role. Join the team of superheroes that safeguard data wherever it goes.

What should you know about us?
Seclore protects and controls digital assets to help enterprises prevent data theft and achieve compliance. Permissions and access to digital assets can be granularly assigned and revoked, or dynamically set at the enterprise level, including when shared with external parties. Asset discovery and automated policy enforcement allow enterprises to adapt to changing security threats and regulatory requirements in real time and at scale. Know more about us at www.seclore.com

You would love our tribe:
If you are a risk-taker, innovator, and fearless problem solver who loves solving challenges of data security, then this is the place for you!

Role: DevOps Engineer
Experience: 2-4 Years
Location: Mumbai (Regional Office)

A sneak peek into the role:
This position is for individuals who can identify multiple solutions to the same problem and can help in decision making while working in a super-agile environment.

Here's what you will get to explore:
In this role you will use AWS CDK and Python to design, develop, test, secure, and deploy services from planning to production.
Combine technology, tools, and global best practices of DevOps for innovation, efficiency, and compliance.
Build infrastructure as code using DevOps tools and technologies to commission, configure, monitor, and maintain the Seclore cloud offering on AWS.
Automate application deployment, configuration, and testing for scalable and fault-tolerant delivery.
Continuously improve automation and monitoring tools for better effectiveness and efficiency.
Understand Seclore product features, the technology platform, and deployment and configuration options.
Work closely with other teams to understand requirements, upcoming features, and their impact on the cloud infrastructure, to ensure DevOps requirements are communicated and understood.

We can see the next Entrepreneur at Seclore if you have:
A technical degree (Engineering, MCA) from a reputed institute.
1+ year of DevOps experience.
2+ years of proven hands-on experience in the Python programming language, with design, coding, and debugging skills.
Working knowledge of infrastructure as code.
Good verbal and written communication skills to interact with technical and non-technical staff.
An analytical frame of mind to identify and evaluate multiple solutions to the same problem and come up with a solution roadmap.

Good to have:
Experience as an automation developer for a cloud product.
Experience in software development, automation, or DevOps.
Experience with cloud environments such as GCP, AWS, etc. (AWS is an advantage).
Experience writing applications or automation tools using one or more of Jenkins, Ansible, batch scripts, shell scripts, etc.
Experience working with containerization and orchestration tools like Docker, Kubernetes, ECS, etc.
Being tech-agnostic, thinking innovatively, and taking calculated risks.

Why do we call Seclorites Entrepreneurs, not Employees?
We have the attitude of a problem solver and an aptitude that is tech-agnostic. You get to work with the smartest minds in the business. We value and support those who take the initiative and calculate risks. We are thriving, not just living: at Seclore, it is not just about work but about creating outstanding employee experiences. Our supportive and open culture enables our team to thrive.

Excited to be the next Entrepreneur? Apply today!
Don't have some of the above points in your resume at the moment? Don't worry, we will help you build it. Let's build the future of data security at Seclore together.
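Since the role centres on AWS CDK with Python, here is a minimal CDK v2 sketch of the pattern involved; the stack name and bucket configuration are purely illustrative, and `pip install aws-cdk-lib constructs` is assumed:

    # Sketch: a tiny CDK v2 app defining one versioned, encrypted S3 bucket.
    import aws_cdk as cdk
    from aws_cdk import aws_s3 as s3
    from constructs import Construct

    class StorageStack(cdk.Stack):
        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)
            s3.Bucket(
                self,
                "ArtifactsBucket",  # logical ID, hypothetical
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            )

    app = cdk.App()
    StorageStack(app, "storage-dev")  # deploy with: cdk deploy storage-dev
    app.synth()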

Posted 13 hours ago

Apply

0 years

0 Lacs

India

Remote


Design, provision, and document a production-grade AWS microservice platform for an Apache-powered ERP implementation, hitting our 90-day "go-live" target while embedding DevSecOps guardrails the team can run without you.

Key Responsibilities
Cloud Architecture & IaC
Author Terraform modules for VPC, EKS (Graviton), RDS (MariaDB Multi-AZ), MSK, ElastiCache, S3 lifecycle, API Gateway, WAF, and Route 53.
Implement node pools (App, Spot Analytics, Cache, GPU) with Karpenter autoscaling.
CI/CD & GitOps
Set up GitHub Actions pipelines (lint, unit tests, container scan, Terraform plan).
Deploy Argo CD for Helm-based application roll-outs (ERP, Bot, Superset, etc.).
DevSecOps Controls
Enforce OPA Gatekeeper policies, IAM IRSA, Secrets Manager, AWS WAF rules, and ECR image scanning.
Build CloudWatch/X-Ray dashboards; wire alerting to Slack/email.
Automation & DR
Define backup plans (RDS PITR, EBS, S3 Standard-IA → Glacier).
Document the cross-Region fail-over runbook (Route 53 health checks).
Standard Operating Procedures
Draft SOPs for patching, scaling, on-call, incident triage, and budget monitoring.
Knowledge Transfer (KT)
Run three 2-hour remote workshops (infra deep-dive, CI/CD hand-over, DR drill).
Produce a "Day-2" wiki: diagrams (Mermaid), runbooks, FAQ.

Required Skill Set
8+ years designing AWS microservice / Kubernetes architectures (ideally EKS on Graviton).
Expert in Terraform, Helm, GitHub Actions, and Argo CD.
Hands-on with RDS MariaDB, Kafka (MSK), Redis, and SageMaker endpoints.
Proven DevSecOps background: OPA, IAM least privilege, vulnerability scanning.
Comfortable translating infrastructure diagrams into plain-language SOPs for non-cloud staff.
Nice-to-have: prior ERP deployment experience; WhatsApp Business API integration; EPC or construction IT domain knowledge.

How Success Is Measured
Go-live readiness: the production cluster passes load, fail-over, and security tests by Day 75.
Zero critical CVEs exposed in the final Trivy scan.
99% IaC coverage; manual console changes are not permitted.
Team self-sufficiency: internal staff can recreate the stack from scratch using the docs and KT alone.
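The DR items above (RDS PITR, Multi-AZ) lend themselves to a scripted audit, since point-in-time recovery depends on automated backups being enabled. A minimal sketch, assuming boto3 credentials; the region and the seven-day retention floor are illustrative assumptions, not requirements from the posting:

    # Sketch: flag RDS instances without Multi-AZ or with short backup retention.
    import boto3

    MIN_RETENTION_DAYS = 7  # hypothetical policy floor

    rds = boto3.client("rds", region_name="us-east-1")

    def audit() -> list[str]:
        findings = []
        for page in rds.get_paginator("describe_db_instances").paginate():
            for db in page["DBInstances"]:
                name = db["DBInstanceIdentifier"]
                if not db["MultiAZ"]:
                    findings.append(f"{name}: not Multi-AZ")
                if db["BackupRetentionPeriod"] < MIN_RETENTION_DAYS:
                    findings.append(f"{name}: retention {db['BackupRetentionPeriod']}d")
        return findings

    if __name__ == "__main__":
        for line in audit():
            print(line)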

Posted 14 hours ago

Apply

7.0 years

0 Lacs

India

Remote


Position: DevOps Engineer
Location: India (Remote or On-site Noida)
Job Type: Full-Time
Department: Technology

We are seeking a highly motivated and experienced DevOps Engineer to join our DevOps team. In this role, you will be responsible for designing, implementing, and maintaining a scalable, secure, and high-performance AWS-based cloud infrastructure that supports our cutting-edge diagnostic platforms and software products. You will work closely with software developers, project managers, and bioinformatics teams to ensure smooth deployment and operation of applications and pipelines in a dynamic environment.

Your tasks will be:
Design, deploy, and maintain secure, scalable AWS infrastructure, including VPC, EC2, IAM, Auto Scaling, and CloudFront.
Build and manage CI/CD pipelines using GitLab CI/CD and GitHub Actions, and automate deployments via CodePipeline, CodeBuild, and CodeDeploy.
Implement Infrastructure as Code (IaC) with Terraform for reproducible and modular infrastructure provisioning.
Manage containerized applications using Docker, Amazon ECS, and EKS, ensuring high availability and performance.
Securely manage secrets and configurations using AWS Secrets Manager and SSM, and integrate best practices across workflows.
Develop event-driven and serverless workflows using AWS Lambda, Step Functions, SQS, SNS, and EventBridge.
Work collaboratively with development and IT teams to implement best practices across the pipeline.

You have the following skills and qualifications:
7+ years of professional experience in DevOps or cloud engineering, with a strong focus on AWS ecosystems.
Solid hands-on experience with core AWS services, including but not limited to:
EC2, S3, RDS, VPC, IAM, CloudFront, Route 53
Lambda, ECS/EKS, Step Functions, CloudWatch
CodePipeline, CodeBuild, CodeDeploy, SQS, SNS, EventBridge
Strong scripting skills in Python, Bash, or similar languages.
Proficiency with Terraform, and experience designing reusable and modular infrastructure components.
Solid understanding of networking, security, and high availability in cloud infrastructure.
Familiarity with containerization and orchestration (e.g., Docker, Kubernetes, AWS Fargate).
Experience implementing monitoring and observability tools.
Knowledge of DevOps best practices, automation frameworks, and agile development processes.
Excellent analytical and troubleshooting skills.
Strong communication skills and the ability to collaborate in a cross-functional, international team.
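For the event-driven workflows mentioned above, the canonical building block is a Lambda handler consuming SQS messages. A minimal sketch; the JSON message schema is an illustrative assumption, not something specified by the posting:

    # Sketch: AWS Lambda handler for an SQS-triggered function.
    # Assumes messages carry JSON like {"sample_id": "...", "action": "..."}.
    import json
    import logging

    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    def handler(event, context):
        processed = 0
        for record in event["Records"]:      # SQS batch delivered by Lambda
            body = json.loads(record["body"])
            logger.info("processing %s", body.get("sample_id"))
            # ... domain logic would go here ...
            processed += 1
        return {"processed": processed}

An unhandled exception makes the whole batch return to the queue, so real handlers usually pair this with a dead-letter queue or partial-batch-failure reporting.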

Posted 14 hours ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Equifax is where you can power your possible. If you want to achieve your true potential, chart new paths, develop new skills, collaborate with bright minds, and make a meaningful impact, we want to hear from you.

Job Description
We are seeking a highly skilled and motivated Google Cloud Engineer to join our dynamic engineering team. In this role, you will be instrumental in designing, building, deploying, and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). You will work closely with development, operations, and security teams to ensure our cloud environment is scalable, secure, highly available, and cost-optimized. If you are passionate about cloud-native technologies, automation, and solving complex infrastructure challenges, we encourage you to apply.

What You Will Do
Design, implement, and manage robust, scalable, and secure cloud infrastructure on Google Cloud Platform (GCP) using Infrastructure as Code (IaC) tools like Terraform.
Deploy, configure, and manage core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub, and networking components (VPC, Cloud Load Balancing, Cloud CDN).
Develop and maintain CI/CD pipelines for automated deployment and release management using tools like Cloud Build, GitLab CI/CD, GitHub Actions, or Jenkins.
Implement and enforce security best practices within the GCP environment, including IAM, network security, data encryption, and compliance adherence.
Monitor cloud infrastructure and application performance, identify bottlenecks, and implement solutions for optimization and reliability.
Troubleshoot and resolve complex infrastructure and application issues in production and non-production environments.
Collaborate with development teams to ensure applications are designed for cloud-native deployment, scalability, and resilience.
Participate in on-call rotations for critical incident response and provide timely resolution to production issues.
Create and maintain comprehensive documentation for cloud architecture, configurations, and operational procedures.
Stay current with new GCP services, features, and industry best practices, proposing and implementing improvements as appropriate.
Contribute to cost optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

What Experience You Need
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field.
6+ years of hands-on experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework, and SQL Server.
3+ years of experience with cloud platforms (GCP preferred), including designing and deploying cloud-native applications.
3+ years of experience with source code management (Git), CI/CD pipelines, and Infrastructure as Code.
Strong experience with JavaScript and a modern JavaScript framework, Vue.js preferred.
Proven ability to lead and mentor development teams.
Strong understanding of microservices architecture and serverless computing.
Experience with relational databases (SQL Server, PostgreSQL).
Excellent problem-solving, analytical, and communication skills.
Experience working in Agile/Scrum environments.

What Could Set You Apart
GCP Cloud Certification.
UI development experience (e.g., HTML, JavaScript, Angular, Bootstrap).
Experience in Agile environments (e.g., Scrum, XP).
Relational database experience (e.g., SQL Server, PostgreSQL).
Experience with Atlassian tooling (e.g., JIRA, Confluence) and GitHub.
Working knowledge of Python.
Excellent problem-solving and analytical skills and the ability to work well in a team.

We offer a hybrid work setting, comprehensive compensation and healthcare packages, attractive paid time off, and organizational growth potential through our online learning platform with guided career tracks. Are you ready to power your possible? Apply today, and get started on a path toward an exciting new career at Equifax, where you can make a difference!

Who is Equifax?
At Equifax, we believe knowledge drives progress. As a global data, analytics and technology company, we play an essential role in the global economy by helping employers, employees, financial institutions and government agencies make critical decisions with greater confidence. We work to help create seamless and positive experiences during life’s pivotal moments: applying for jobs or a mortgage, financing an education or buying a car. Our impact is real and to accomplish our goals we focus on nurturing our people for career advancement and their learning and development, supporting our next generation of leaders, maintaining an inclusive and diverse work environment, and regularly engaging and recognizing our employees. Regardless of location or role, the individual and collective work of our employees makes a difference and we are looking for talented team players to join us as we help people live their financial best.

Equifax is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.
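Given the Pub/Sub work listed above, here is a minimal publisher sketch using the google-cloud-pubsub client. The project and topic IDs are hypothetical, and application-default credentials plus `pip install google-cloud-pubsub` are assumed:

    # Sketch: publish a JSON message to a Pub/Sub topic.
    import json
    from google.cloud import pubsub_v1

    publisher = pubsub_v1.PublisherClient()
    topic_path = publisher.topic_path("example-project", "deploy-events")  # hypothetical

    def publish(payload: dict) -> str:
        data = json.dumps(payload).encode("utf-8")  # Pub/Sub payloads are bytes
        future = publisher.publish(topic_path, data, source="ci")  # "source" is a custom attribute
        return future.result()  # blocks until the server assigns a message ID

    if __name__ == "__main__":
        print("published message", publish({"service": "api", "status": "deployed"}))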

Posted 14 hours ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Job Title: Cloud Security Consultant
Location: Mumbai
Experience: 5+ years
Availability: Immediate Joiners Preferred

Job Description:
We are seeking an experienced Cloud Security Consultant to implement and maintain robust cloud security standards across leading platforms (AWS, Azure, GCP). The candidate must have a deep understanding of cloud provisioning, identity and access management, encryption standards, and network security.

Key Responsibilities:
Implement Secure Cloud Account & Environment Provisioning Standards (SCAEPS), including:
Account/subscription setup protocols
Root/owner account security controls
Baseline configurations and naming standards
Deploy and manage the Cloud IAM Technical Baseline (IAMTB), such as:
Password policies, RBAC, and MFA enforcement
SSO/federation with enterprise identity systems
Secure management of service principals and cross-account access
Design and implement Network Security Configurations (NSCD):
Secure VPC/VNet design and subnet configurations
Routing, firewall, and IDS/IPS configurations
Enforce Data Encryption Standards (DETS):
AES-256 encryption and KMS key lifecycle management
TLS/SSL configuration and certificate management
Apply Cloud Storage Security Configurations (CSSCD):
Prevent public access to storage
Encryption and access policy implementation for cloud storage

Requirements:
Minimum 5 years of experience in cloud security.
Hands-on experience with AWS/Azure/GCP security best practices.
Expertise in IAM, encryption, and network architecture.
Strong knowledge of regulatory standards (e.g., ISO, NIST, CIS).
Relevant certifications preferred: AZ-500, AWS Security Specialty, CCSP, etc.
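The storage control above ("prevent public access") is commonly enforced on AWS with the S3 Block Public Access API. A minimal, hedged sketch using boto3; the bucket name is hypothetical, and credentials with the relevant s3 permissions are assumed:

    # Sketch: verify (and optionally enforce) S3 Block Public Access on a bucket.
    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    ALL_ON = {
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    }

    def is_locked_down(bucket: str) -> bool:
        try:
            cfg = s3.get_public_access_block(Bucket=bucket)
            return cfg["PublicAccessBlockConfiguration"] == ALL_ON
        except ClientError:
            return False  # no public-access-block configuration set at all

    def lock_down(bucket: str) -> None:
        s3.put_public_access_block(
            Bucket=bucket, PublicAccessBlockConfiguration=ALL_ON
        )

    if __name__ == "__main__":
        bucket = "example-audit-bucket"  # hypothetical
        if not is_locked_down(bucket):
            lock_down(bucket)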

Posted 14 hours ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies