6.0 - 10.0 years
12 - 18 Lacs
Hyderabad
Hybrid
Role & Responsibilities

Role Overview: We are seeking a talented and forward-thinking DevOps Engineer for a large financial services GCC based in Hyderabad. Responsibilities include designing, implementing, and maintaining CI/CD pipelines (a sample status-check sketch follows this posting), monitoring system performance, automating deployments, ensuring infrastructure scalability and security, collaborating with development and IT teams, and optimizing workflow efficiency.

Technical Requirements:
- Experienced in setting and delivering DevOps strategy
- Proficient in collaborating with engineering teams to understand their needs
- Skilled in setting up, maintaining, optimizing, and evolving DevOps tooling and infrastructure
- Strong knowledge of automating development, quality engineering, deployment, and release processes
- Familiarity with Agile and Waterfall methodologies and supporting toolchains
- Ability to identify technical problems and develop effective solutions
- Hands-on experience with a variety of technologies, including Git, Kubernetes, Docker, Jenkins, and scripting/programming languages
- Competence in implementing DevOps and Agile patterns such as CI/CD pipelines, source code management, automation, and infrastructure as code
- Understanding of IT management practices, software currency, and security measures
- Experience with GCP infrastructure, Terraform, and Harness for CI/CD automation and deployments
- Strong team leadership, communication, and problem-solving skills

Functional Requirements:
- Demonstrated team leadership and DevOps experience
- Exposure to GCP infrastructure, including Compute Engine, VPC, IAM, Cloud Functions, and GKE
- Hands-on experience with DevOps technologies such as Git, Kubernetes, Docker, Jenkins, SonarQube, and scripting/programming languages
- Strong organizational, time management, and multitasking skills
- Ability to work collaboratively, build relationships, and adapt to various domains and disciplines
- Passion for developing new technologies and optimizing software delivery processes
- Understanding of security compliance, networking, and firewalls
- Willingness to learn, grow, and develop within a supportive and inclusive environment
- Ability to propose new technologies and methodologies for software delivery optimization

This role offers a compelling opportunity for a seasoned DevOps Engineer to drive transformative cloud initiatives within the financial sector, applying deep experience and expertise to deliver innovative cloud solutions that align with business imperatives and regulatory requirements.

Qualification: Engineering Graduate / Postgraduate

Criteria:
- Helm experience
- Networking and security (firewalls, IAM roles) experience
- Security compliance understanding

Relevant Experience: 6-9 years
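Illustrative only: since the toolchain above includes Jenkins-based CI/CD, here is a minimal Python sketch that polls a pipeline's last build status through the standard Jenkins JSON API. The server URL, job name, and credentials are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch: check the status of a pipeline's last build via the
# standard Jenkins JSON API (/job/<name>/lastBuild/api/json).
import requests

JENKINS_URL = "https://jenkins.example.com"   # hypothetical server
JOB_NAME = "deploy-service"                   # hypothetical pipeline job
AUTH = ("ci-user", "api-token")               # placeholder user + API token

def last_build_status(job: str) -> str:
    """Return the result (SUCCESS/FAILURE/...) of the job's last build."""
    resp = requests.get(
        f"{JENKINS_URL}/job/{job}/lastBuild/api/json",
        auth=AUTH,
        timeout=10,
    )
    resp.raise_for_status()
    # 'result' is None while the build is still running.
    return resp.json().get("result") or "RUNNING"

if __name__ == "__main__":
    print(f"{JOB_NAME}: {last_build_status(JOB_NAME)}")
```

The same endpoint pattern works for any Jenkins job, which is why scripts like this are a common building block in pipeline automation.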
Posted 1 week ago
3.0 - 5.0 years
5 - 7 Lacs
Chennai, Tamil Nadu
Work from Office
Duration: 12 Months
Work Type: Onsite

Position Description: We are seeking an experienced GCP Data Engineer who can build a cloud analytics platform to meet ever-expanding business requirements with speed and quality using lean Agile practices. You will analyze and manipulate large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics in the Google Cloud Platform (GCP). You will be responsible for designing the transformation and modernization on GCP, as well as landing data from source applications to GCP. Experience with large-scale solutioning and operationalization of data warehouses, data lakes, and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates with a broad set of technology skills across these areas who can demonstrate an ability to design the right solutions, with the appropriate combination of GCP and third-party technologies, for deployment on Google Cloud Platform.

Skills Required:
- Experience working on an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
- Implement methods to automate all parts of the pipeline to minimize labor in development and production.
- Experience analyzing complex data, organizing raw data, and integrating massive datasets from multiple data sources to build subject areas and reusable data products.
- Experience working with architects to evaluate and productionalize appropriate GCP tools for data ingestion, integration, presentation, and reporting.
- Experience working with all stakeholders to formulate business problems as technical data requirements, identifying and implementing technical solutions while ensuring key business drivers are captured, in collaboration with product management.
- Proficient in machine learning model architecture, data pipeline interaction, and metrics interpretation, including designing and deploying a pipeline with automated data lineage.
- Identify, develop, evaluate, and summarize proofs of concept to prove out solutions; test and compare competing solutions and report a point of view on the best solution.
- Integration between GCP Data Catalog and Informatica EDC.
- Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine (a sample Dataflow-style pipeline is sketched below).

Skills Preferred:
- Strong drive for results and ability to multitask and work independently
- Self-starter with proven innovation skills
- Ability to communicate and work with cross-functional teams and all levels of management
- Demonstrated commitment to quality and project timing
- Demonstrated ability to document complex systems
- Experience creating and executing detailed test plans

Experience Required: 3 to 5 years
Education Required: BE or equivalent
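To give a flavour of the Dataflow work described above, here is a minimal Apache Beam sketch of a batch pipeline: read CSV files from Cloud Storage, parse them, and write rows to BigQuery. The project, bucket, table, and schema are hypothetical placeholders, not details from the posting.

```python
# Minimal Apache Beam sketch: GCS -> parse -> BigQuery batch pipeline.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line: str) -> dict:
    """Turn a CSV line into a BigQuery row dict (assumed schema: id,amount)."""
    record_id, amount = line.split(",")
    return {"id": record_id, "amount": float(amount)}

options = PipelineOptions(
    runner="DataflowRunner",           # or "DirectRunner" for local testing
    project="my-project",              # hypothetical project ID
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/raw/*.csv")
        | "Parse" >> beam.Map(parse_line)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-project:analytics.transactions",
            schema="id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```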
Posted 3 weeks ago
4.0 - 7.0 years
8 - 14 Lacs
Noida
Hybrid
Data Engineer (L3) || GCP Certified
Employment Type: Full-Time
Work Mode: In-office / Hybrid
Notice: Immediate joiners

As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments and requirements reviews, and develop work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development "scrums" and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design.

Required Skills:
- Design, develop, and support data pipelines and related data products and platforms.
- Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
- Perform application impact assessments and requirements reviews, and develop work estimates.
- Develop test strategies and site reliability engineering measures for data products and solutions.
- Participate in agile development "scrums" and solution reviews.
- Mentor junior Data Engineers.
- Lead the resolution of critical operations issues, including post-implementation reviews.
- Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
- Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
- Demonstrate SQL and database proficiency in various data engineering tasks.
- Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect (see the Airflow sketch below).
- Develop Unix scripts to support various data operations.
- Model data to support business intelligence and analytics initiatives.
- Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
- Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Keywords: data pipelines, agile development, scrums, GCP data technologies, Python, DAGs, Control-M, Apache Airflow, data solution architecture

Qualifications:
- Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
- 4+ years of data engineering experience.
- 2 years of data solution architecture and design experience.
- GCP Certified Data Engineer (preferred).

Job Type: Full-time
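As a hedged illustration of the DAG automation mentioned above, here is a minimal Apache Airflow sketch of a daily extract-transform-load sequence. The DAG ID, schedule, and task bodies are hypothetical placeholders, not the employer's actual pipeline.

```python
# Minimal Airflow 2.x sketch: a daily extract -> transform -> load DAG.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from the source system")

def transform():
    print("clean and model the data")

def load():
    print("load curated data into the warehouse")

with DAG(
    dag_id="daily_etl",                       # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract must finish before transform, then load.
    t_extract >> t_transform >> t_load
```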
Posted 3 weeks ago
4 - 8 years
8 - 15 Lacs
Bengaluru
Hybrid
Job Description: 4+ years of experience in compute hardware troubleshooting (L2).
- Install, administer, and maintain hardware infrastructure.
- Diagnose and correct system issues, whether with correct operation or with performance.
- Reinstate system integrity as quickly as possible following an outage in order to minimize downtime.
- Triage and solve user-submitted tickets, especially when they relate to the infrastructure.
- Track resource usage using monitoring and queuing software.
- Actively participate in knowledge management by creating new technical documents.
- Patch system firmware and software as needed.
- Peer assistance is an added trait.

Technical Skills:
- Primary: HPE ProLiant DL / Apollo / Blade servers, Cisco servers, OneView
- Secondary: Hypervisor (VMware)
- Demonstrated expertise with hardware administration, including OS (VMware/Linux/Windows)
- Expertise with high-speed networking such as InfiniBand and 10/40 Gigabit Ethernet
- Familiarity with hardware products such as HPE ProLiant, Apollo, Blade, SDFlex, and Synergy
- Experience with Cisco hardware (UCS series, UCS Manager)
- Experience in server hardware and troubleshooting
- Experience managing multi-node clustered setups
- Experience using and supporting appliances such as OneView, iLO, BMC, and UCS Manager (a management-API sketch follows this posting)
- Knowledge of server profiles and fault tolerance
- Experience troubleshooting virtualization (VMware)
- Familiarity with monitoring tools like Grafana/Nagios/OpsRamp
- Knowledge of troubleshooting ESXi and vCenter performance issues
- Familiarity with server storage connectivity basics
- Good to have: basic understanding of Nimble/NetApp/Pure/Cloudian/Data Protect cluster solutions
- Experience in Incident/Change/Problem management and Root Cause Analysis

Business Skills:
- Demonstrate strong written and verbal communication skills.
- Interact and collaborate across different technology teams within HPE.
- Must work towards achieving HPE's vision for our customers.
- Affinity for, and a thorough understanding of, the support processes defined within HPE.
- Ability to work in a 24x7 environment in rotating shifts.
- Consistently exhibit a "customer first and customer last" attitude.
- Ability to drive cases to closure and provide case summaries.
- Demonstrate a high level of technical and communication skills.
- Take responsibility for end-to-end problem ownership and its solutions.

Recruiter screening notes:
- Keywords: backup, storage, infrastructure, (VMware or virtualization), compute; Apollo, Synergy, VMware ESXi, virtualization, vSphere, Commvault backup, storage arrays, EMC, IBM, NetApp, HPE, cloud storage, Zerto management.
- Compute L2: HPE Apollo, Blade; 4+ years of experience; primary skill is compute, secondary is VMware and Linux; Aruba switches; Cisco UCS series.
- Should know how hardware and the OS interact; an HPE server candidate would know iLO, a Cisco server candidate would know BMC.
- Monitoring is tool-agnostic.
- Primary skill is hardware; secondary is VMware or Linux.
- Zerto: should know hypervisors or virtualization, DR (disaster recovery), migration, failover, replication, and cross-site replication. We are not looking for Zerto implementation or deployment; day-to-day operational knowledge of Zerto would be an added advantage. The candidate should have DR and migration understanding.
- Cloudian storage: file storage technology, file/page storage like NAS, troubleshooting and configuration. Object storage knowledge would be an added advantage.
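The posting expects familiarity with iLO and BMC management controllers. As a hedged illustration, the sketch below queries basic system health through the Redfish REST API that HPE iLO and Cisco BMC controllers expose; the host, credentials, and resource path are placeholders, and the exact path can vary by vendor and firmware.

```python
# Hedged sketch: read server model, power state, and health from a
# management controller via the Redfish REST API.
import requests
import urllib3

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

BMC_HOST = "10.0.0.50"          # hypothetical iLO/BMC address
AUTH = ("admin", "password")    # placeholder credentials

resp = requests.get(
    f"https://{BMC_HOST}/redfish/v1/Systems/1",  # common default path
    auth=AUTH,
    verify=False,   # lab-only: BMCs often ship self-signed certificates
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print("Model:      ", system.get("Model"))
print("Power state:", system.get("PowerState"))
print("Health:     ", system.get("Status", {}).get("Health"))
```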
Posted 2 months ago
1 - 3 years
3 - 7 Lacs
Mumbai
Work from Office
about the role
Cloud Engineers with experience in managing, planning, architecting, monitoring, and automating large-scale deployments to the public cloud. You will be part of a team of talented engineers solving some of the most complex and exciting challenges in IT automation and hybrid cloud deployments.

key responsibilities
- Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML technologies
- Design, deploy, and maintain cloud infrastructure for clients, domestic and international
- Develop tools and automation to make platform operations more efficient, reliable, and reproducible
- Create container orchestration (Kubernetes, Docker), strive for fully automated solutions, and ensure the uptime and security of all cloud platform systems and infrastructure (see the sketch after this posting)
- Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools
- Provide business, application, and technology consulting in feasibility discussions with technology team members, customers, and business partners
- Take the initiative to lead, drive, and solve during challenging scenarios

preferred qualifications
- 1-3 years of experience in the Cloud Infrastructure and Operations domains
- Experience with Linux systems and/or Windows servers
- Specialization in one or two cloud deployment platforms: AWS, GCP, Azure
- Hands-on experience with AWS services (EKS, ECS, EC2, VPC, RDS, Lambda) and GCP services (GKE, Compute Engine)
- Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .Net)
- Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios
- Logging and monitoring tools (ELK, Stackdriver, CloudWatch)
- DevOps technologies: knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef
- Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos)
- Deep experience in customer-facing roles with a proven track record of effective verbal and written communication
- Dependable and a good team player
- Desire to learn and work with new technologies
- Automation in your blood
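As a small taste of the Kubernetes automation this role involves, here is a minimal sketch using the official kubernetes Python client to flag pods that are not in the Running phase. It assumes a kubeconfig is already configured locally; cluster and namespace contents are whatever your environment holds.

```python
# Minimal sketch: report unhealthy pods across all namespaces using the
# official kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase != "Running":
        # Print namespace/name and the current phase (Pending, Failed, ...).
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```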
Posted 2 months ago
3 - 5 years
12 - 15 Lacs
Bengaluru
Hybrid
Job Description: As a DevOps Engineer, you will play a crucial role in supporting our enterprise customers by leveraging your expertise in Google Cloud Platform (GCP), private cloud environments, Kubernetes, Docker, Jenkins, Prometheus, ELK, and more. You will work closely with clients, understand their needs, and provide tailored solutions to ensure their success. Your ability to learn new technical domains quickly and train others will be key to your success in this role.

Responsibilities:
- Provide expert-level support for cloud migration and DevOps solutions to enterprise customers.
- Manage and optimize GCP resources, including GKE, Compute Engine, SQL/NoSQL databases, Cloud Storage, etc.
- Deploy, monitor, and manage Kubernetes clusters and Docker containers.
- Implement and manage CI/CD pipelines using Jenkins.
- Monitor system performance and ensure high availability using Prometheus (see the metrics sketch below).
- Set up and maintain log management and analysis using the ELK stack.
- Support and optimize private cloud environments.
- Collaborate with cross-functional teams to design and implement infrastructure solutions that meet customer needs.
- Deliver exceptional client-facing support, demonstrating a passion for customer success.
- Continuously improve processes and systems to enhance efficiency and reliability.

Qualifications:
- Bachelor's degree in Computer Science or equivalent work experience.
- 3-5 years of total experience in cloud technical support and DevOps.
- Proven expertise in Google Cloud Platform, including GKE, Compute Engine, SQL/NoSQL databases, and Cloud Storage.
- Extensive experience with Kubernetes and Docker.
- Proficiency in CI/CD tools, especially Jenkins/GitLab.
- Strong knowledge of monitoring and logging tools such as Prometheus and the ELK stack.
- Experience with infrastructure-as-code tools such as Terraform or Ansible.
- Demonstrated ability to quickly learn new technical domains and train others.
- Customer obsession and a passion for delighting customers.

Preferred Qualifications:
- Experience in client-facing roles supporting enterprise customers.
- Certifications in Google Cloud Platform or related technologies.
- Knowledge of additional cloud platforms (e.g., AWS, Azure) is a plus.

Looking for immediate joiners only. Candidates should be comfortable working as contract employees.

Rakuten is committed to cultivating and preserving a culture of inclusion and connectedness. We are able to grow and learn better together with a diverse team and inclusive workforce. The collective sum of the individual differences, life experiences, knowledge, innovation, self-expression, and talent that our employees invest in their work represents not only part of our culture, but our reputation and Rakuten's achievement as well. In recruiting for our team, we welcome the unique contributions that you can bring in terms of education, opinions, culture, ethnicity, race, sex, gender identity and expression, nation of origin, age, languages spoken, veteran's status, color, religion, disability, sexual orientation, and beliefs.
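For the Prometheus monitoring named above, here is a minimal sketch using the official prometheus_client library to expose custom application metrics for scraping. The metric names and values are illustrative placeholders.

```python
# Minimal sketch: serve application metrics at :8000/metrics for a
# Prometheus server to scrape.
import random
import time

from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("app_queue_depth", "Current work-queue depth")

if __name__ == "__main__":
    start_http_server(8000)   # metrics become visible at :8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 50))  # stand-in for a real reading
        time.sleep(1)
```

A scrape job pointed at port 8000 would then let you graph and alert on these series in Prometheus or Grafana.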
Posted 3 months ago
9 - 14 years
15 - 30 Lacs
Bengaluru
Work from Office
Strong knowledge of GCP services, including but not limited to:
- Hands-on GCP networking skills (e.g., Shared Virtual Private Cloud (VPC), subnetworks, firewall rules, Cloud Router, Cloud DNS, Load Balancing, Interconnect).
- Solid understanding of CI/CD and experience with automated CI/CD pipeline tooling.
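As a hedged illustration of working with the firewall rules mentioned above, the sketch below lists a project's rules, assuming the google-cloud-compute Python client library; the project ID is a placeholder.

```python
# Hedged sketch: enumerate GCP firewall rules in a project with the
# google-cloud-compute client library.
from google.cloud import compute_v1

client = compute_v1.FirewallsClient()
for rule in client.list(project="my-project"):     # hypothetical project ID
    network = rule.network.split("/")[-1]          # trim the full resource URL
    print(f"{rule.name}: {rule.direction} on network {network}")
```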
Posted 3 months ago
4 - 8 years
6 - 10 Lacs
Mumbai
Work from Office
Site Reliability Engineers (SREs) - Robust background in Google Cloud Platform (GCP) and RedHat OpenShift administration

Responsibilities:
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure (a simple probe sketch follows this posting).
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.
- Skills:
  - Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.)
  - Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.)
  - Experience with automation tools (Terraform, Ansible, Puppet)
  - Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.)
  - Strong scripting skills (Python, Bash, etc.)
  - Knowledge of networking concepts and protocols
  - Experience with monitoring tools (Prometheus, Grafana, etc.)

Preferred Certifications:
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification
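The reliability side of the role boils down to knowing, at all times, whether services are up. Here is a minimal sketch of that building block: an HTTP health probe with retries and exponential backoff. The endpoint is a hypothetical placeholder.

```python
# Minimal sketch: an HTTP health probe with a bounded retry budget.
import time

import requests

ENDPOINT = "https://service.example.com/healthz"   # hypothetical endpoint
MAX_RETRIES = 3

def probe(url: str) -> bool:
    """Return True if the endpoint answers 200 within the retry budget."""
    for attempt in range(1, MAX_RETRIES + 1):
        try:
            if requests.get(url, timeout=5).status_code == 200:
                return True
        except requests.RequestException as exc:
            print(f"attempt {attempt} failed: {exc}")
        time.sleep(2 ** attempt)   # exponential backoff between attempts
    return False

print("healthy" if probe(ENDPOINT) else "unhealthy")
```

In production this logic usually lives inside a monitoring system (Prometheus blackbox exporter, uptime checks) rather than a standalone script, but the probe-and-backoff pattern is the same.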
Posted 3 months ago
4 - 9 years
6 - 11 Lacs
Mumbai
Hybrid
Role: Site Reliability Engineers (SREs) in Google Cloud Platform (GCP) and RedHat OpenShift administration.

Responsibilities:
- System Reliability: Ensure the reliability and uptime of critical services and infrastructure.
- Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services.
- Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention.
- Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery.
- Collaboration: Work closely with development and operations teams to improve system reliability and performance.
- Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth.
- Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures.

Qualifications:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 4+ years of experience in site reliability engineering or a similar role.
- Skills:
  - Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.)
  - Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.)
  - Experience with automation tools (Terraform, Ansible, Puppet)
  - Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.)
  - Strong scripting skills (Python, Bash, etc.)
  - Knowledge of networking concepts and protocols
  - Experience with monitoring tools (Prometheus, Grafana, etc.)

Preferred Certifications:
- Google Cloud Professional DevOps Engineer
- Google Cloud Professional Cloud Architect
- Red Hat Certified Engineer (RHCE) or similar Linux certification

Employee Type: Permanent
Posted 3 months ago
4 - 8 years
20 - 25 Lacs
Pune
Hybrid
Experience: 4-8 years
Job Location: Pune, Hybrid
Primary Skills: Terraform, Python, Shell, JSON, automation tools
Secondary Skills: ITSM, JIRA, GitHub, CI/CD practices, JSON, API invocation

Role purpose:
- Minimum 4+ years of experience creating GCP infrastructure with data integration patterns for streaming and batch load processes for large-scale data platforms / data warehouses (a batch-load sketch follows this posting)
- Good knowledge of and experience with Terraform, Unix, networking basics, Docker, Kubernetes, Google Cloud (VPC, Compute Engine, Load Balancer, Cloud Build), and Helm
- Good knowledge of and experience with CI/CD pipelines, Jenkins, and Cloud Build
- Good understanding of the GCP cloud platform
- Hands-on experience with Terraform
- Knowledge of and experience working in agile lean methodologies
- Preferred: GCP DevOps certified; HashiCorp Terraform certification

Essential:
- Prior work experience in DWH
- Good knowledge of Dataflow, BigQuery, Composer, Stackdriver
- Relevant work experience: 3 to 5+ years

Desired:
- ITIL
- Telecom domain knowledge
- GCP Data Engineer
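For the batch load pattern named above, here is a hedged sketch using the official google-cloud-bigquery client library to load CSV files from Cloud Storage into a BigQuery table. The project, bucket, and table names are placeholders.

```python
# Hedged sketch: batch-load CSV files from GCS into BigQuery.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")        # hypothetical project

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,                   # skip the header row
    autodetect=True,                       # infer the schema from the files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

load_job = client.load_table_from_uri(
    "gs://my-bucket/batch/*.csv",          # hypothetical source files
    "my-project.analytics.daily_load",     # hypothetical destination table
    job_config=job_config,
)
load_job.result()                          # block until the job finishes
print(f"Loaded {load_job.output_rows} rows")
```

In the Composer-orchestrated setups this posting describes, a job like this would typically run inside an Airflow task rather than as a standalone script.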
Posted 3 months ago