42 Cloud Orchestration Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3 - 8 years

5 - 10 Lacs

Pune

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Google Cloud Data Services, Python, Apache Airflow, Data Engineering
Good-to-have skills: NA
Experience: Minimum 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support business needs and enable data-driven decision-making.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Design and develop data solutions for data generation, collection, and processing.
- Create and maintain data pipelines to ensure efficient data flow.
- Implement ETL processes to migrate and deploy data across systems.
- Ensure data quality and integrity through data validation and cleansing.
- Collaborate with cross-functional teams to understand data requirements and provide technical expertise.
- Optimize data infrastructure and performance to support business needs.
- Troubleshoot and resolve data-related issues in a timely manner.

Professional & Technical Skills:
- Must have: strong proficiency in Python, Apache Airflow, and Google Cloud Data Services.
- Must have: 3+ years of Python programming experience, covering complex data structures and data pipeline development.
- Must have: 3+ years of experience with Python libraries such as Airflow, Pandas, PySpark, Redis, and SQL (or similar libraries).
- Must have: 3+ years of strong SQL programming experience with mid-level/advanced functions: aggregate functions (SUM, AVG), conditional functions (CASE WHEN, NULLIF), mathematical functions (ROUND, ABS), ranking functions (RANK), and window functions.
- Nice to have: 3+ years of experience with Dataflow, Dataproc, Cloud Composer, BigQuery, Cloud Storage, GKE, etc.
- Nice to have: as an alternative to Google Cloud, 3+ years of Python-based data pipeline development on any other cloud platform can be considered.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Cloud Data Services.
- This position is based in Pune.
- 15 years of full-time education is required.
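
To make the core Python-plus-Airflow skill concrete, here is a minimal sketch of an ETL DAG of the kind this role describes. The DAG id, schedule, and task callables are hypothetical placeholders, not taken from the posting; it assumes Airflow 2.4+ for the `schedule` argument.

```python
# Minimal Airflow ETL sketch: extract, transform, load as three chained tasks.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Pull raw rows from a source system (placeholder data).
    return [{"id": 1, "amount": "42.50"}]


def transform(ti):
    rows = ti.xcom_pull(task_ids="extract")
    # Cast and clean fields before loading.
    return [{**r, "amount": float(r["amount"])} for r in rows]


def load(ti):
    rows = ti.xcom_pull(task_ids="transform")
    print(f"Loading {len(rows)} rows")  # replace with a BigQuery/database write


with DAG(
    dag_id="etl_example",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",  # Airflow 2.4+; use schedule_interval on older versions
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```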

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Pune

Work from Office

Source: Naukri

Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must-have skills: Managed File Transfer
Good-to-have skills: NA
Experience: Minimum 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Cloud Services Engineer, you will act as a liaison between the client and Accenture operations teams for support and escalations. You will communicate service delivery health to all stakeholders, explain any performance issues or risks, ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime, and hold performance meetings to share performance and consumption data and trends.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Ensure effective communication between client and operations teams.
- Analyze service delivery health and address performance issues.
- Conduct performance meetings to share data and trends.

Professional & Technical Skills:
- Must have: proficiency in Managed File Transfer.
- Strong understanding of cloud orchestration and automation.
- Experience in SLA management and performance analysis.
- Knowledge of IT service delivery and escalation processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Managed File Transfer.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 3 months ago

Apply

3 - 6 years

5 - 8 Lacs

Kolkata

Work from Office

Source: Naukri

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

Must have:
- End-to-end functional knowledge of the data pipelines/transformations the candidate has implemented, including the purpose/KPIs the transformations served.
- Expert in SQL: able to perform data analysis and investigation using SQL queries.
- Implementation knowledge of advanced SQL functions such as regular expressions, aggregation, pivoting, ranking, and deduplication.
- BigQuery and BigQuery transformation (using stored procedures).
- Data modelling concepts: star and snowflake schemas, fact and dimension tables, joins, cardinality, etc.
- GCP services related to data pipelines, such as Workflows, Cloud Composer, Cloud Scheduler, and Cloud Storage.
- Understanding of CI/CD and related tools: Git and Terraform.
- Other GCP services such as Dataflow, Cloud Build, Pub/Sub, Cloud Functions, Cloud Run, and Cloud Workstations.
- BigQuery performance tuning; Python-based API development experience; Spark development experience.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Develop/convert database objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another (Hadoop to GCP).
- Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.).
- Expose data as an API.
- Participate in the modernization roadmap journey.
- Analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
- Identify application dependencies and source/target database incompatibilities.
- Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
- Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
- Lead the team in adopting the right tools for the various migration and modernization methods.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
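
As a concrete illustration of the deduplication-by-ranking skill listed above, here is a small sketch that runs a BigQuery window-function query from Python. The project, dataset, table, and column names are hypothetical.

```python
# Deduplication with a ranking window function in BigQuery, run from Python.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client()  # uses Application Default Credentials

sql = """
SELECT * EXCEPT(rn)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (
      PARTITION BY customer_id      -- one row per natural key
      ORDER BY updated_at DESC      -- keep the most recent version
    ) AS rn
  FROM `my-project.my_dataset.customers`
)
WHERE rn = 1
"""

for row in client.query(sql).result():
    print(dict(row))
```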

Posted 3 months ago

Apply

4 - 7 years

5 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in choosing the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

Must have:
- End-to-end functional knowledge of the data pipelines/transformations the candidate has implemented, including the purpose/KPIs the transformations served.
- Expert in SQL: able to perform data analysis and investigation using SQL queries.
- Implementation knowledge of advanced SQL functions such as regular expressions, aggregation, pivoting, ranking, and deduplication.
- BigQuery and BigQuery transformation (using stored procedures).
- Data modelling concepts: star and snowflake schemas, fact and dimension tables, joins, cardinality, etc.
- GCP services related to data pipelines, such as Workflows, Cloud Composer, Cloud Scheduler, and Cloud Storage.
- Understanding of CI/CD and related tools: Git and Terraform.
- Other GCP services such as Dataflow, Cloud Build, Pub/Sub, Cloud Functions, Cloud Run, and Cloud Workstations.
- BigQuery performance tuning; Python-based API development experience; Spark development experience.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Develop/convert database objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another (Hadoop to GCP).
- Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.), as illustrated by the API sketch below.
- Expose data as an API.
- Participate in the modernization roadmap journey.
- Analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
- Identify application dependencies and source/target database incompatibilities.
- Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
- Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
- Lead the team in adopting the right tools for the various migration and modernization methods.

Preferred technical and professional experience:
- You thrive on teamwork and have excellent verbal and written communication skills.
- Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
- Ability to communicate results to technical and non-technical audiences.
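
The "expose data as an API" item can be made concrete with a minimal read-only endpoint over BigQuery. FastAPI is an assumed framework choice here, and the project, table, and column names are hypothetical.

```python
# A read-only endpoint over BigQuery, illustrating "expose data as an API".
# Run with: uvicorn api:app --reload  (assuming this file is api.py)
from fastapi import FastAPI, HTTPException
from google.cloud import bigquery

app = FastAPI()
bq = bigquery.Client()  # uses Application Default Credentials


@app.get("/customers/{customer_id}")
def get_customer(customer_id: int):
    sql = """
        SELECT *
        FROM `my-project.my_dataset.customers`
        WHERE customer_id = @id
    """
    job = bq.query(
        sql,
        job_config=bigquery.QueryJobConfig(
            query_parameters=[
                bigquery.ScalarQueryParameter("id", "INT64", customer_id)
            ]
        ),
    )
    rows = [dict(row) for row in job.result()]
    if not rows:
        raise HTTPException(status_code=404, detail="customer not found")
    return rows[0]
```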

Posted 3 months ago

Apply

7 - 11 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Google Cloud Data Services
Good-to-have skills: Apache Spark, Google Cloud Dataflow
Experience: Minimum 7.5 years
Educational Qualification: Bachelor of Engineering

Key Responsibilities:
1. Strong knowledge of GCP services such as Cloud Storage, BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub, Airflow, DAGs, etc.
2. Experience in data and analytics, including cloud technologies.
3. Experience in the finance/revenue domain is an added advantage.
4. Experience with GCP migration activities is an added advantage.
5. Experience in the SDLC, with emphasis on specifying, building, and testing mission-critical business applications.

Technical Experience:
1. Should have worked on Hadoop/big data projects, with good SQL experience (Hive, BigQuery).
2. Should be comfortable with Git and Jenkins CI/CD.
3. Should be good in Python/Hadoop/Spark.
4. Strong knowledge of GCP services, especially BigQuery and data warehouse concepts.
5. Designing, implementing, and maintaining data infrastructure and pipelines on Google Cloud Platform (GCP).

Professional Attributes:
1. Strong analytical and interpersonal communication skills.
2. Impeccable communication skills, both verbal and written.
3. Proficient in identifying, analyzing, and solving problems.
4. Client-facing experience.

Additional Information: Level flex; location: Bengaluru and Gurugram ACN facilities only.
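
As an illustration of the Python/Hadoop/Spark combination this posting asks for, here is a small PySpark aggregation over a Hive table; the table and column names are made up.

```python
# A small PySpark rollup over a Hive table (the Hadoop-side skill the
# posting pairs with BigQuery).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("revenue_rollup")
    .enableHiveSupport()  # read managed Hive tables
    .getOrCreate()
)

orders = spark.table("sales.orders")  # hypothetical Hive table

daily = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.sum("amount").alias("revenue"),
        F.countDistinct("customer_id").alias("customers"),
    )
    .orderBy("order_date")
)

daily.show(10)
```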

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Gurgaon

Work from Office

Source: Naukri

Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must-have skills: Network Infrastructures
Good-to-have skills: Cisco Identity Services Engine (ISE)
Experience: Minimum 5 years
Educational Qualification: Bachelor's degree in information technology, software engineering, computer science, or a related field

Summary: As a Cloud Services Engineer, you will be responsible for ensuring the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Your typical day will involve acting as a liaison between the client and Accenture operations teams for support and escalations, communicating service delivery health to all stakeholders, and holding performance meetings to share performance and consumption data and trends.

Roles & Responsibilities:
- Act as a liaison between the client and Accenture operations teams for support and escalations.
- Communicate service delivery health to all stakeholders and explain any performance issues or risks.
- Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime.
- Hold performance meetings to share performance and consumption data and trends.

Professional & Technical Skills:
- Primary skill: Network Infrastructures.
- Good to have: Cisco Identity Services Engine (ISE).
- Experience in cloud orchestration and automation.
- Experience in managing and monitoring cloud infrastructure.
- Experience in troubleshooting and resolving cloud infrastructure issues.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Network Infrastructures.
- The JOB FAMILY and PROJECT ROLE information do not describe the candidate's required experience.
- This position is based at our Gurugram office.

Posted 3 months ago

Apply

3 - 5 years

6 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must-have skills: Service Integration and Management (SIAM)
Good-to-have skills: Infrastructure Service Management
Experience: Minimum 3 years
Educational Qualification: Graduate and above

Summary: As a Cloud Services Engineer, you will be responsible for ensuring the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Your typical day will involve acting as a liaison between the client and Accenture operations teams for support and escalations, communicating service delivery health to all stakeholders, and holding performance meetings to share performance and consumption data and trends.

Roles & Responsibilities:
- Act as a liaison between the client and Accenture operations teams for support and escalations.
- Communicate service delivery health to all stakeholders and explain any performance issues or risks.
- Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime.
- Hold performance meetings to share performance and consumption data and trends.

Professional & Technical Skills:
- Must have: Service Integration and Management (SIAM).
- Good to have: Infrastructure Service Management.
- Experience in cloud orchestration and automation.
- Strong understanding of SLAs and performance metrics.
- Excellent communication and stakeholder management skills.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Service Integration and Management (SIAM).
- The ideal candidate will possess a strong educational background in computer science, information technology, or a related field, along with a proven track record of delivering impactful solutions.
- This position is based at our Bengaluru office.

Posted 3 months ago

Apply

8 - 13 years

25 - 30 Lacs

Pune

Work from Office

Source: Naukri

The senior engineer is responsible for developing and delivering elements of engineering solutions to accomplish business goals, with awareness of the bank's key engineering principles. Root-cause-analysis skills develop through addressing enhancements and fixes to products; the role builds reliability and resiliency into solutions through early testing, peer reviews, and automation of the delivery life cycle. The successful candidate should be able to work independently on medium to large projects with strict deadlines, operate in a cross-application, mixed technical environment, and demonstrate a solid hands-on development track record within an agile methodology. The role involves working alongside a geographically dispersed team. The position is part of the build-out of the Compliance Tech internal development team in India, which will primarily deliver improvements to compliance tech capabilities that are major components of the regular regulatory portfolio, addressing various regulatory commitments and mandated monitors.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities:
- Analyze data sets; design and code stable, scalable data ingestion workflows, and integrate them into existing workflows.
- Work with team members and stakeholders to clarify requirements and provide the appropriate ETL solution.
- Work as a senior developer building analytics algorithms on top of ingested data.
- Work as a senior developer on various data-sourcing tasks in Hadoop and on GCP.
- Own unit testing, UAT, deployment, end-user sign-off, and production go-live.
- Ensure new code is tested at both unit and system level; design, develop, and peer-review new code and functionality.
- Operate as a member of an agile scrum team.
- Apply root-cause-analysis skills to identify the bugs and issues behind failures.
- Support the production support and release management teams in their tasks.

Your skills and experience:
- More than 10 years of coding experience in reputed organizations.
- Hands-on experience with Bitbucket and CI/CD pipelines.
- Proficient in Hadoop, Python, Spark, SQL, Unix, and Hive.
- Basic understanding of on-prem and GCP data security.
- Hands-on development experience on large ETL/big data systems; GCP is a big plus.
- Hands-on experience with Cloud Build, Artifact Registry, Cloud DNS, Cloud Load Balancing, etc.
- Hands-on experience with Dataflow, Cloud Composer, Cloud Storage, Dataproc, etc.
- Basic understanding of data quality dimensions such as consistency, completeness, accuracy, and lineage (see the sketch after this description).
- Hands-on business and systems knowledge gained in a regulatory delivery environment.
- Desired: banking experience and regulatory, cross-product knowledge.
- Passionate about test-driven development.
- Prior experience with release management tasks and responsibilities.
- Data visualization experience is good to have.

How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.
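
A hypothetical sketch of the completeness and consistency checks implied by the data quality dimensions above, using pandas; the file and column names are invented.

```python
# Quick completeness/consistency checks over a tabular extract.
import pandas as pd

df = pd.read_csv("trades.csv")  # hypothetical input file

# Completeness: share of non-null values per column.
completeness = df.notna().mean()

# Consistency: a simple cross-field rule (settlement never precedes trade date).
consistent = (
    pd.to_datetime(df["settle_date"]) >= pd.to_datetime(df["trade_date"])
).mean()

print(completeness)
print(f"date-order consistency: {consistent:.1%}")
```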

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

Project Role: Cloud Services Engineer
Project Role Description: Act as liaison between the client and Accenture operations teams for support and escalations. Communicate service delivery health to all stakeholders and explain any performance issues or risks. Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Hold performance meetings to share performance and consumption data and trends.
Must-have skills: Python
Good-to-have skills: NA
Experience: Minimum 5 years
Educational Qualification: BTech

Summary: As a Cloud Services Engineer, you will be responsible for ensuring the cloud orchestration and automation capability operates to target SLAs with minimal downtime. Your typical day will involve acting as a liaison between the client and Accenture operations teams for support and escalations, communicating service delivery health to all stakeholders, and holding performance meetings to share performance and consumption data and trends.

Roles & Responsibilities:
- Act as a liaison between the client and Accenture operations teams for support and escalations.
- Communicate service delivery health to all stakeholders and explain any performance issues or risks.
- Ensure the cloud orchestration and automation capability operates to target SLAs with minimal downtime.
- Hold performance meetings to share performance and consumption data and trends.

Professional & Technical Skills:
- Must have: proficiency in Python, with 5 years of experience.
- Good to have: experience in cloud orchestration and automation.
- Strong understanding of cloud technologies and services.
- Experience in managing and monitoring cloud infrastructure.
- Experience in troubleshooting and resolving cloud-related issues.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful cloud solutions.
- This position is based at our Hyderabad office.
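
As a small illustration of Python-driven service monitoring in a role like this, here is a bare-bones health probe; the endpoint URL is a placeholder, not part of the posting.

```python
# A minimal service health probe for routine availability checks.
import requests  # pip install requests

ENDPOINTS = ["https://service.example.com/healthz"]  # hypothetical endpoint


def check(url: str, timeout: float = 5.0) -> bool:
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException:
        return False


for url in ENDPOINTS:
    status = "UP" if check(url) else "DOWN"
    print(f"{status}: {url}")
```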

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark, Java, Apache Kafka
Good-to-have skills: Google Dataproc
Experience: Minimum 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your typical day will involve designing and developing data solutions, collaborating with teams to ensure data quality, and implementing ETL processes for data migration and deployment.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design and develop data solutions for data generation, collection, and processing.
- Create data pipelines to ensure efficient data flow.
- Implement ETL processes to migrate and deploy data across systems.
- Ensure data quality and integrity throughout the data solutions.
- Collaborate with cross-functional teams to gather requirements and understand data needs.
- Optimize data solutions for performance and scalability.
- Troubleshoot and resolve data-related issues.
- Stay up to date with the latest trends and technologies in data engineering.

Professional & Technical Skills:
- Must have: proficiency in Apache Spark, Java, and Apache Kafka.
- Good to have: experience with Apache Airflow and Google Dataproc.
- Strong understanding of data engineering principles and best practices.
- Experience with data modeling and database design.
- Hands-on experience with data integration and ETL tools.
- Knowledge of cloud platforms and services, such as AWS or Google Cloud Platform.
- Experience performing analysis with large datasets in a cloud-based environment, preferably with an understanding of Google Cloud Platform (GCP).
- Experience with big data technologies, such as Hadoop and Hive.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
- Strong experience with multiple database models (SQL, NoSQL, OLTP, and OLAP).
- Strong experience with data streaming architecture (Kafka, Spark, Airflow).
- Strong knowledge of cloud data platforms and technologies such as GCS, BigQuery, Cloud Composer, Dataproc, and other cloud-native offerings.
- Knowledge of Infrastructure as Code (IaC) and associated tools (Terraform, Ansible, etc.).
- Experience pulling data from a variety of source types, including Mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series).
- Comfortable communicating with various stakeholders (technical and non-technical).
- GCP Data Engineer Certification is a nice-to-have.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based in Mumbai.
- 15 years of full-time education is required.
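
As an illustration of the Kafka/Spark streaming architecture named above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic, and windowing choices are hypothetical, and the job assumes the spark-sql-kafka connector package is on the classpath (e.g. via --packages).

```python
# Kafka-to-Spark streaming skeleton: read a topic, count events per window.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka_stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "orders")                     # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

# Count events per one-minute window, tolerating 5 minutes of lateness.
counts = (
    events
    .withWatermark("timestamp", "5 minutes")
    .groupBy(F.window("timestamp", "1 minute"))
    .count()
)

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```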

Posted 3 months ago

Apply

2 - 6 years

7 - 11 Lacs

Bengaluru

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally as you collaborate and integrate code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact!

IBM's Cloud Services are focused on supporting clients on their cloud journey across any platform to achieve their business goals. This encompasses cloud advisory, architecture, cloud-native development, application portfolio migration, modernization, and rationalization, as well as cloud operations.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- End-to-end functional knowledge of the data pipelines/transformations the candidate has implemented, including the purpose/KPIs the transformations served.
- Expert in SQL: able to perform data analysis and investigation using SQL queries.
- Implementation knowledge of advanced SQL functions such as regular expressions, aggregation, pivoting, ranking, and deduplication.
- BigQuery and BigQuery transformation (using stored procedures).
- Data modelling concepts: star and snowflake schemas, fact and dimension tables, joins, cardinality, etc.
- GCP services related to data pipelines, such as Workflows, Cloud Composer, Cloud Scheduler, and Cloud Storage.

Preferred technical and professional experience:
- Stay updated with the latest trends and advancements in cloud technologies, frameworks, and tools.
- Conduct code reviews and provide constructive feedback to maintain code quality and ensure adherence to best practices.
- Troubleshoot and debug issues, and deploy applications to the cloud platform.

Posted 3 months ago

Apply

7 - 12 years

9 - 14 Lacs

Chennai

Work from Office

Source: Naukri

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Google Cloud Data Services
Good-to-have skills: Apache Spark
Experience: Minimum 7.5 years
Educational Qualification: Any bachelor's degree

Key Responsibilities:
1. Design, implement, and maintain data infrastructure and pipelines on Google Cloud Platform (GCP).
2. Strong knowledge of GCP services, especially BigQuery and data warehouse concepts.
3. Proficiency in SQL and experience with data security and optimization.
4. Familiarity with programming languages such as Python.
5. Understanding of data security and compliance.

Technical Experience:
1. Proven experience as a Google Cloud Platform engineer, preferably focused on GCP infrastructure, IaC, networking, and IAM.
2. Strong knowledge of GCP services such as Cloud Storage, BigQuery, Dataflow, Dataproc, Cloud Composer, Pub/Sub, Airflow, DAGs, etc.

Professional Attributes: Good communication.

Additional Information: Level flex; location: Bengaluru and Gurugram ACN facilities only.

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Pune

Work from Office

Source: Naukri

Job Title: GCP DevOps/Platform Engineer
Corporate Title: AVP
Location: Pune, India

Role Description:
We are seeking a highly motivated senior DevOps Engineer to join our team. The successful candidate will have 8-13 years of experience in the field and be proficient in Google Cloud Platform (GCP), GitHub Actions, Infrastructure as Code (IaC), Site Reliability Engineering (SRE), CI/CD using Helm charts, and platform engineering. The person in this role may lead delivery by other members of the team and direct their work where applicable.

What we'll offer you:
As part of our flexible scheme, here are just some of the benefits you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for ages 35 and above

Your key responsibilities:
- Develop and maintain infrastructure code using IaC tools such as Terraform and Ansible.
- Design, implement, and optimize cloud-based applications and services on GCP.
- Collaborate with cross-functional teams to ensure successful delivery of projects, including frontend development, backend development, and quality assurance.
- Troubleshoot and resolve issues related to application performance, reliability, and security.
- Optimize the deployment process using automation tools such as GitHub Actions.
- Provide technical guidance and mentorship to junior team members.
- Stay up to date with industry trends and best practices in DevOps engineering.
- Design, deploy, manage, and document CI/CD pipelines.
- Carry out routine application maintenance, an ongoing DevOps responsibility, through strategy-building techniques.
- Identify issues and optimization potential, and implement solutions.

Your skills and experience:
- Understanding of industry-standard processes for build, deploy, release, and support (CI/CD, incident/problem/change management, etc.).
- Experience building dashboards for billing, utilization, and infrastructure monitoring.
- Experience optimizing infrastructure cost and reducing footprint.
- Strong understanding of, and working experience managing, GKE and GKE cluster services.
- Experience with GKE node management, autoscaling, secrets management, config management, virtual services, gateways, and Anthos Service Mesh.
- Strong knowledge of Linux, Apache web server, Java application servers, and load balancers.
- Experience with any cloud-based infrastructure (GCP/AWS/Azure) and highly available, fault-tolerant applications.
- Our tech stack: GCP (GKE, Cloud Composer, BigQuery, GCS, etc.), though any other public cloud experience is relevant; Kubernetes, Terraform, Confluent Kafka, GitHub Actions, Helm.
- Good understanding of infrastructure and platform components: shell scripting, Python, Linux, application-layer protocols (TLS/SSL, HTTP(S), DNS, etc.).
- Experience supporting/building continuous delivery pipelines.
- Experience with deployment strategies (such as blue-green, canary, A/B).
- Good understanding of various design and architectural patterns.
- Good understanding of microservices and API management.
- Experience with monitoring/reporting tools such as Splunk, Grafana/Prometheus/Google Cloud Operations, etc.
- Experience in agile practices.
- Collaboration skills: proactive can-do attitude; a creative approach to solving technical problems; able to work efficiently with colleagues in multiple locations; willing to collaborate across domains for efficiency in technology sharing and reuse; excellent communication skills in English.

How we'll support you:
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About us and our teams:
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.
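
The SRE emphasis in this role reduces to simple error-budget arithmetic: allowed downtime per period is (1 - SLO) times the period. A quick worked example (the SLO values are illustrative, not from the posting):

```python
# Error-budget arithmetic behind "target SLAs with minimal downtime".
SECONDS_PER_30_DAYS = 30 * 24 * 3600  # 2,592,000 s

for slo in (0.999, 0.9995, 0.9999):
    budget_s = (1 - slo) * SECONDS_PER_30_DAYS
    print(f"SLO {slo:.2%}: {budget_s / 60:5.1f} min downtime allowed per 30 days")
```

For instance, a 99.9% SLO leaves roughly 43 minutes of allowable downtime in a 30-day month.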

Posted 3 months ago

Apply

2 - 7 years

4 - 9 Lacs

Coimbatore

Work from Office

Source: Naukri

Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; this can also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must-have skills: Google Cloud Machine Learning Services
Good-to-have skills: GCP Dataflow, Google Pub/Sub, Google Dataproc
Experience: Minimum 2 years
Educational Qualification: 15 years of full-time education

Summary: We are seeking a skilled GCP Data Engineer to join our dynamic team. The ideal candidate will design, build, and maintain scalable data pipelines and solutions on Google Cloud Platform (GCP). This role requires expertise in cloud-based data engineering and hands-on experience with GCP tools and services, ensuring efficient data integration, transformation, and storage for various business use cases.

Roles & Responsibilities:
- Design, develop, and deploy data pipelines using GCP services such as Dataflow, BigQuery, Pub/Sub, and Cloud Storage.
- Optimize and monitor data workflows for performance, scalability, and reliability.
- Collaborate with data analysts, data scientists, and business stakeholders to understand data requirements and implement solutions.
- Implement data security and governance measures, ensuring compliance with industry standards.
- Automate data workflows and processes for operational efficiency.
- Troubleshoot and resolve technical issues related to data pipelines and platforms.
- Document technical designs, processes, and best practices to ensure maintainability and knowledge sharing.

Professional & Technical Skills:
Must have:
- Proficiency in GCP tools such as BigQuery, Dataflow, Pub/Sub, Cloud Composer, and Cloud Storage.
- Expertise in SQL and experience with data modeling and query optimization.
- Solid programming skills in Python for data processing and ETL development.
- Experience with CI/CD pipelines and version control systems (e.g., Git).
- Knowledge of data warehousing concepts, ELT/ETL processes, and real-time streaming.
- Strong understanding of data security, encryption, and IAM policies on GCP.
Good to have:
- Experience with Dialogflow or CCAI tools.
- Knowledge of machine learning pipelines and integration with AI/ML services on GCP.
- Certifications such as Google Professional Data Engineer or Google Cloud Architect.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Google Cloud Machine Learning Services, with 3-5 years of overall experience.
- The ideal candidate will possess a strong educational background in computer science, mathematics, or a related field, along with a proven track record of delivering impactful data-driven solutions.
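
As an illustration of the Pub/Sub messaging layer this posting names, here is a minimal publish-and-pull round trip; the project, topic, and subscription IDs are placeholders.

```python
# Minimal Pub/Sub publish/pull round trip.
from google.cloud import pubsub_v1  # pip install google-cloud-pubsub

project_id = "my-project"  # hypothetical project

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(project_id, "ingest-events")

# Publish a message; data must be a bytestring, attributes are strings.
future = publisher.publish(topic_path, b'{"event": "signup"}', source="web")
print("published message", future.result())

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path(project_id, "ingest-events-sub")

# Pull up to 10 messages and acknowledge each one.
response = subscriber.pull(request={"subscription": sub_path, "max_messages": 10})
for msg in response.received_messages:
    print(msg.message.data)
    subscriber.acknowledge(
        request={"subscription": sub_path, "ack_ids": [msg.ack_id]}
    )
```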

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Google Dataproc, Apache Spark
Good-to-have skills: Apache Airflow
Experience: Minimum 5 years
Educational Qualification: Minimum 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements using Google Dataproc. Your typical day will involve working with Apache Spark and collaborating with cross-functional teams to deliver impactful data-driven solutions.

Roles & Responsibilities:
- Design, build, and configure applications to meet business process and application requirements using Google Dataproc.
- Collaborate with cross-functional teams to deliver impactful data-driven solutions.
- Utilize Apache Spark for data processing and analysis.
- Develop and maintain technical documentation for applications.

Professional & Technical Skills:
- Strong experience in Apache Spark and Java for Spark.
- Strong experience with multiple database models (SQL, NoSQL, OLTP, and OLAP).
- Strong experience with data streaming architecture (Kafka, Spark, Airflow).
- Strong knowledge of cloud data platforms and technologies such as GCS, BigQuery, Cloud Composer, Dataproc, and other cloud-native offerings.
- Knowledge of Infrastructure as Code (IaC) and associated tools (Terraform, Ansible, etc.).
- Experience pulling data from a variety of source types, including Mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series).
- Experience performing analysis with large datasets in a cloud-based environment, preferably with an understanding of Google Cloud Platform (GCP).
- Comfortable communicating with various stakeholders (technical and non-technical).
- GCP Data Engineer Certification is a nice-to-have.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Google Dataproc and Apache Spark.
- The ideal candidate will possess a strong educational background in software engineering or a related field.
- This position is based at our Mumbai office.

Posted 3 months ago

Apply

5 - 10 years

7 - 12 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark, Java, Google Dataproc
Good-to-have skills: NA
Experience: Minimum 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will be responsible for the design and development of data solutions, collaborating with multiple teams, and providing solutions to problems for your immediate team and across multiple teams.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Create data pipelines to migrate and deploy data across systems.
- Ensure data quality by implementing ETL processes.
- Collaborate with multiple teams to provide solutions to data-related problems.

Professional & Technical Skills:
- Must have: proficiency in Apache Spark, Java, and Google Dataproc.
- Good to have: experience with Apache Airflow.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
- Strong experience with multiple database models (SQL, NoSQL, OLTP, and OLAP).
- Strong experience with data streaming architecture (Kafka, Spark, Airflow).
- Strong knowledge of cloud data platforms and technologies such as GCS, BigQuery, Cloud Composer, Dataproc, and other cloud-native offerings.
- Knowledge of Infrastructure as Code (IaC) and associated tools (Terraform, Ansible, etc.).
- Experience pulling data from a variety of source types, including Mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series).
- Experience performing analysis with large datasets in a cloud-based environment, preferably with an understanding of Google Cloud Platform (GCP).
- Comfortable communicating with various stakeholders (technical and non-technical).
- GCP Data Engineer Certification is a nice-to-have.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Apache Spark.
- This position is based in Bengaluru.
- 15 years of full-time education is required.

Posted 3 months ago

Apply

3 - 8 years

5 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark
Good-to-have skills: Apache Kafka, Apache Airflow
Experience: Minimum 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across systems. You will play a crucial role in managing and optimizing data infrastructure to support business needs and enable data-driven decision-making.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Design and develop scalable, efficient data pipelines.
- Ensure data quality and integrity throughout the data lifecycle.
- Implement ETL processes to migrate and deploy data across systems.
- Collaborate with cross-functional teams to understand data requirements and deliver solutions.
- Optimize and maintain data infrastructure to support business needs.
- Stay up to date with industry trends and best practices in data engineering.
- Collaborate with data scientists and analysts to understand their data needs and provide the necessary infrastructure and tools.
- Troubleshoot and resolve data-related issues in a timely manner.

Professional & Technical Skills:
- Must have: proficiency in Apache Spark, Java, and Google Dataproc.
- Good to have: experience with Apache Airflow.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.
- Strong experience with multiple database models (SQL, NoSQL, OLTP, and OLAP).
- Strong experience with data streaming architecture (Kafka, Spark, Airflow).
- Strong knowledge of cloud data platforms and technologies such as GCS, BigQuery, Cloud Composer, Dataproc, and other cloud-native offerings.
- Knowledge of Infrastructure as Code (IaC) and associated tools (Terraform, Ansible, etc.).
- Experience pulling data from a variety of source types, including Mainframe (EBCDIC), fixed-length and delimited files, and databases (SQL, NoSQL, time-series).
- Experience performing analysis with large datasets in a cloud-based environment, preferably with an understanding of Google Cloud Platform (GCP).
- Comfortable communicating with various stakeholders (technical and non-technical).
- GCP Data Engineer Certification is a nice-to-have.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based in Bengaluru.
- 15 years of full-time education is required.

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
