13.0 - 20.0 years
35 - 70 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Required Skills and Experience
13+ years of total experience is a must, with 7+ years of relevant experience on Big Data Platform technologies. Proven technical skills across Cloudera, Teradata, Databricks, MS Data Fabric, Apache Hadoop, BigQuery, and AWS Big Data solutions (EMR, Redshift, Kinesis, Qlik). Good domain experience in the BFSI or Manufacturing area. Excellent communication skills to engage with clients and influence decisions. High level of competence in preparing architectural documentation and presentations. Must be organized, self-sufficient, and able to manage multiple initiatives simultaneously, and able to coordinate with other teams independently. Work with both internal and external stakeholders to identify business requirements and develop solutions that meet those requirements and build the opportunity.
Note: If you have experience in the BFSI domain, the location will be Mumbai only. If you have experience in the Manufacturing domain, the location will be Mumbai & Bangalore only.
Interested candidates can share their updated resumes at shradha.madali@sdnaglobal.com
Posted 1 week ago
6.0 - 10.0 years
20 - 30 Lacs
Egypt, Chennai, Bengaluru
Hybrid
We're Hiring: MLOps Engineer | Cairo, Egypt | Immediate Joiners Only
Share CVs to vijay.s@xebia.com
Location: Cairo, Egypt
Experience: 6-8 Years
Mode: Onsite
Joining: Immediate or max 2 weeks' notice
Relocation: Open to relocating to Egypt ASAP
Job Summary: Xebia is seeking a seasoned MLOps Engineer to scale and operationalize ML solutions for our strategic client in Cairo. This is an onsite role, perfect for professionals who are ready to deploy cutting-edge ML pipelines in real-world enterprise environments.
Key Responsibilities:
• Design and manage end-to-end scalable, reliable ML pipelines
• Build CI/CD pipelines with Azure DevOps
• Deploy and track ML models using MLflow
• Work on large-scale data with Cloudera/Hadoop (Hive, Spark, HDFS)
• Support Knowledge Graphs, metadata enrichment, and model lineage
• Collaborate with DS and engineering teams to ensure governance and auditability
• Implement model performance monitoring, drift detection, and data quality checks
• Support DevOps automation aligned with enterprise-grade compliance standards
Required Skills:
• 6-8 years in MLOps / Machine Learning Engineering
• Hands-on with MLflow, Azure DevOps, Python
• Deep experience with Cloudera, Hadoop, Spark, Hive
• Exposure to Knowledge Graphs and containerization (Docker/Kubernetes)
• Familiar with TensorFlow, scikit-learn, or PyTorch
• Understanding of data security, access controls, and audit logging
Preferred:
• Azure certifications (e.g., Azure Data Engineer / AI Engineer Associate)
• Experience with Apache NiFi, Airflow, or similar tools
• Background in regulated sectors like BFSI, Healthcare, or Pharma
Soft Skills:
• Strong problem-solving and analytical thinking
• Clear communication and stakeholder engagement
• Passion for automation and continuous improvement
Additional Information: Only apply if you can join within 2 weeks or are an immediate joiner, you are open to relocating to Cairo, Egypt ASAP, and you hold a valid passport. Visa-on-arrival/B1/Schengen holders from the MEA region are preferred.
To Apply: Send your updated CV to vijay.s@xebia.com along with: Full Name, Total Experience, Current CTC, Expected CTC, Current Location, Preferred Xebia Location (Cairo), Notice Period / Last Working Day (if serving), Primary Skills, LinkedIn Profile, Valid Passport No.
Be part of a global transformation journey and make AI work at scale!
#MLOps #Hiring #AzureDevOps #MLflow #CairoJobs #ImmediateJoiners #DataEngineering #Cloudera #Hadoop #XebiaCareers
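Since this role centers on deploying and tracking models with MLflow, a brief sketch of that workflow may help candidates gauge fit. This is a minimal illustration assuming a default local tracking server; the experiment name, model, and parameters are placeholders, not the client's actual setup:

```python
# Minimal MLflow tracking sketch: train a model, then log parameters,
# metrics, and the model artifact for lineage and auditability.
# Experiment name and model choice are illustrative placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("churn-model-demo")  # hypothetical experiment name

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Everything logged here becomes queryable run history in MLflow.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```

Runs logged this way carry parameters, metrics, and model artifacts, which is what gives the model lineage and auditability the posting asks for.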
Posted 1 week ago
2.0 - 7.0 years
5 - 12 Lacs
Pune
Work from Office
Job Responsibilities:
About the Role
We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Desired Skills
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
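The "automate routine tasks and implement self-healing systems" responsibility can be pictured with a small sketch. This is an illustrative loop only, assuming a hypothetical HTTP health endpoint and systemd unit name, not this platform's actual tooling:

```python
# Illustrative self-healing health check: poll a service, restart it when
# unhealthy, and print a trail for RCA documentation. The endpoint URL and
# systemd unit name are hypothetical placeholders.
import subprocess
import time

import requests

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical health endpoint
SERVICE = "nifi"                              # hypothetical systemd unit

def is_healthy() -> bool:
    try:
        return requests.get(HEALTH_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

while True:
    if not is_healthy():
        # Remediate and leave an audit trail for the incident record.
        print(f"{SERVICE} unhealthy; restarting")
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
    time.sleep(60)  # poll once a minute
```

A production platform would usually route detection through the alerting stack (Prometheus, Alertmanager) rather than a bare loop, but the detect-remediate-record pattern is the same.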
Posted 2 weeks ago
4.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Role & Responsibilities
About the Role
We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Preferred Candidate Profile
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
Posted 2 weeks ago
5.0 - 9.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Job Title: PySpark Data Engineer
We're growing our Data Engineering team at ValueLabs and looking for a talented individual to build scalable data pipelines on the Cloudera Data Platform!
Experience: 5 to 9 years
PySpark Job Description:
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.
Qualifications
Education and Experience: Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field; 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
Technical Skills
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong scripting skills in Linux.
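For candidates gauging the day-to-day work, a compact sketch of the kind of PySpark ETL described above may help. This is an illustrative skeleton under assumed inputs, not ValueLabs' actual pipeline; the paths, table, and column names are hypothetical placeholders:

```python
# Minimal PySpark ETL sketch: ingest, cleanse, transform, write.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest: read raw CSV landed in the data lake.
raw = spark.read.option("header", True).csv("/data/raw/orders/")  # hypothetical path

# Cleanse and transform: drop bad rows, normalize types, derive columns.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
)

# Aggregate into an analytics-friendly shape.
daily = clean.groupBy("order_date").agg(
    F.count("order_id").alias("orders"),
    F.sum("amount").alias("revenue"),
)

# Load: write partitioned Parquet for downstream Hive/Impala queries.
daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "/data/curated/daily_orders/"  # hypothetical output path
)
```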
Posted 1 month ago
2.0 - 8.0 years
2 - 8 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Dynamic Yield, a Mastercard company, is seeking a Senior Data Scientist to join our Product Data & Analytics team. This team builds crucial internal analytic partnerships, focusing on business health, portfolio and revenue optimization opportunities, initiative tracking, and new product development/Go-To-Market strategies. We are a hands-on global team providing scalable end-to-end data solutions, deeply influencing Mastercard's decisions through data-driven insights.
Are you excited by the immense value data assets bring to an organization? Are you an evangelist for data-driven decision-making, motivated to build large-scale analytical capabilities for end-users across six continents, and do you aspire to be the go-to resource for data analytics within a global corporation? If you have a knack for seeing solutions in sprawling datasets and the business mindset to convert insights into strategic opportunities, we want to hear from you.
Role & Responsibilities
As a Senior Data Scientist, you will:
Data Solution Architecture & Development: Work closely with global and regional teams to architect, develop, and maintain data engineering, advanced reporting, and data visualization capabilities on large volumes of data, supporting analytics and reporting needs across various products, markets, and services.
Data Analysis & Triangulation: Obtain data from multiple sources, then collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases and synthesize data insights.
Strategic Insights & Optimization: Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that guide strategic decisions and uncover optimization opportunities.
Reporting & Dashboarding: Build, develop, and maintain data models, reporting systems, dashboards (e.g., Tableau/Power BI), and performance metrics that support key business decisions.
Intellectual Capital & Best Practices: Extract intellectual capital from engagement work and actively share tools, methods, and best practices across projects.
Data Presentation: Provide first-level insights, conclusions, and assessments, presenting findings via Tableau/Power BI dashboards, Excel, and PowerPoint.
Data Quality: Apply quality control, data validation, and cleansing processes to new and existing data sources.
Mentorship: Lead, mentor, and guide more junior team members, fostering their growth and development.
Stakeholder Communication: Communicate results and business impacts of insight initiatives to stakeholders across leadership, technology, sales, marketing, and product teams.
All About You
Experience: Proven experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis.
Industry Knowledge (Plus): Experience within a Financial Institution or the Payments industry is a plus.
Data Presentation: Experience presenting data findings in a readable and insight-driven format, including building support decks.
SQL Skills: Advanced SQL skills, with the ability to write optimized queries for large datasets (Big Data).
Platforms/Environments: Experience on platforms/environments such as Cloudera Hadoop, the Big Data technology stack, SQL Server, and the Microsoft BI Stack.
Data Visualization: Experience with data visualization tools such as Looker, Tableau, and/or Power BI.
Programming (Plus): Experience with Python, R, and Databricks is a plus.
Microsoft BI Stack (Advantage): Experience with SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS) is an added advantage.
Problem Solving: Excellent problem-solving, quantitative, and analytical skills.
Technical Acumen: In-depth technical knowledge, drive, and the ability to learn new technologies.
Attention to Detail: Strong attention to detail and a commitment to quality.
Teamwork & Communication: A strong team player with excellent communication (oral/written) skills.
Stakeholder Interaction: Must be able to interact effectively with management and internal stakeholders to collect requirements.
Adaptability: Must be able to perform effectively in a team, use sound judgment, and operate under ambiguity.
Self-Motivation: Self-motivated, operating with a sense of urgency.
Education: Bachelor's or Master's Degree in Computer Science, Information Technology, Engineering, Mathematics, or Statistics.
Additional Competencies: Excellent English, quantitative, technical, and communication (oral/written) skills; analytical/problem solving; strong attention to detail and quality; creativity/innovation; self-motivation with a sense of urgency; project management/risk mitigation; ability to prioritize and perform multiple tasks simultaneously.
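The "optimized queries for large datasets" requirement is worth one small illustration. In the Spark SQL sketch below, filtering on a date column that the table is partitioned by, before aggregating, lets the engine prune partitions instead of scanning the full table. The table and column names are hypothetical, not Mastercard's actual schema:

```python
# Hypothetical example of partition pruning plus pre-aggregation in Spark SQL.
# Assumes a table partitioned by txn_date; all names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-demo").getOrCreate()

monthly = spark.sql("""
    SELECT merchant_id,
           SUM(amount) AS total_spend,
           COUNT(*)    AS txn_count
    FROM transactions                  -- hypothetical partitioned table
    WHERE txn_date >= '2024-01-01'     -- prune partitions early
      AND txn_date <  '2024-02-01'
    GROUP BY merchant_id
""")
monthly.show()
```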
Posted 1 month ago
2.0 - 7.0 years
2 - 7 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Dynamic Yield, a Mastercard company, is seeking an Associate Analyst - Product Data & Analytics to join our dynamic global team. The Product Data & Analytics team builds internal analytic partnerships, focusing on business health, portfolio and revenue optimization, initiative tracking, and new product development/Go-To-Market strategies. We are a hands-on global team of analytics engineers, data architects, BI developers, data analysts, and data scientists, fully managing our own data assets and solutions to provide scalable end-to-end data solutions.
If you're passionate about the value of data assets, an evangelist for data-driven decision-making, and motivated to build large-scale analytical capabilities supporting users across six continents, this role is for you. The ideal candidate has a knack for seeing solutions in sprawling datasets and the business acumen to convert insights into strategic opportunities for our company.
Role and Responsibilities
As an Associate Analyst - Product Data & Analytics, you will:
Data Platform Development: Be part of a strategic initiative to create a Single Source of Truth (SSOT) data platform for all transactional data assets within the organization.
Data Model Design: Work alongside analytics engineers, data analysts, and data engineers to evaluate current use cases and define the data platform design, including the logical/conceptual data model, data mappings, and other platform documentation.
Collaboration & Build: Collaborate with data architects and data engineers to ensure the platform build, and be responsible for User Acceptance Testing (UAT) before implementation.
Requirements & Design: Collaborate with team members to collect business requirements, define successful analytics outcomes, and design data models.
Data Ownership: Serve as the Directly Responsible Individual (DRI) for major sections of the platform's logical/conceptual data model.
Documentation: Define data mappings, data dictionaries, data quality, and UAT documentation.
Data Catalog: Maintain the Data Catalog, a scalable resource to support Self-Service and Single-Source-of-Truth analytics.
Technical Specifications: Translate business requirements into tangible technical solution specifications and high-quality, on-time deliverables.
Data Manipulation & Quality: Effectively use tools to manipulate large-scale databases and synthesize data insights. Apply quality control, data validation, and cleansing processes to new and existing data sources.
DataOps & Code Standards: Implement the DataOps philosophy in all your work. Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment, and maintain and advocate for these standards through code review.
Cross-functional Collaboration: Collaborate with cross-functional teams, external vendor teams, and technology suppliers to ensure the delivery of high-quality services.
All About You
Experience: 2+ years of experience in data analysis, data mining, data analytics, data reporting, and data product development.
Industry Knowledge (Plus): Financial Institution or Payments experience is a plus.
Proactive & Driven: Proactive self-starter, actively seeking initiatives to advance.
Data Architecture: Understanding of data architecture and some experience in building logical/conceptual data models or creating data mapping documentation.
Data Quality: Experience with data validation, quality control, and cleansing processes for new and existing data sources.
SQL Skills: Advanced SQL skills, with the ability to write optimized queries for large datasets.
Platforms/Environments: Experience on platforms/environments such as Cloudera Hadoop, the Big Data technology stack, SQL Server, the Microsoft BI Stack, and Cloud.
Programming Exposure (Plus): Exposure to Python, Scala, Spark, Cloud, and other related technologies is a plus.
Data Visualization (Plus): Experience with data visualization tools such as Tableau, Domo, and/or Power BI is a plus.
Problem Solving: Excellent problem-solving, quantitative, and analytical skills.
Technical Aptitude: In-depth technical knowledge, drive, and the ability to learn new technologies.
Detail & Quality: Strong attention to detail and quality.
Teamwork & Communication: Strong team player with excellent communication skills.
Interpersonal Skills: Must be able to interact effectively with management and internal stakeholders to collect requirements.
Adaptability: Must be able to perform effectively in a team, use sound judgment, and operate under ambiguity.
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Site Reliability Engineer
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
Posted 2 months ago
8.0 - 13.0 years
22 - 37 Lacs
Pune
Hybrid
Role & Responsibilities
Role: Hadoop Admin + Automation
Experience: 8+ years
Grade: AVP
Location: Pune
Mandatory Skills: Hadoop administration; automation (shell scripting or any programming language, e.g., Java/Python); Cloudera / AWS / Azure / GCP
Good to have: DevOps tools
Primary focus will be on candidates with Hadoop admin and automation experience.
Posted 2 months ago
4.0 - 9.0 years
4 - 9 Lacs
Pune, Maharashtra, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
Posted 2 months ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
Posted 2 months ago
4.0 - 9.0 years
4 - 9 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
Posted 2 months ago
4.0 - 9.0 years
5 - 8 Lacs
Gurugram
Work from Office
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
Posted 2 months ago
5 - 6 years
7 - 8 Lacs
Gurugram
Work from Office
Site Reliability Engineer Job Description:
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and Product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues
Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, Data Mesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
Posted 2 months ago
12 - 16 years
35 - 40 Lacs
Bengaluru
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.
Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.
Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
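To ground the Glue/PySpark requirement, here is a minimal sketch of a Glue ETL job script. It assumes the standard AWS Glue job runtime; the catalog database, table, and S3 output path are hypothetical placeholders, not this role's actual data assets:

```python
# Minimal AWS Glue ETL job sketch (PySpark). Database, table, and S3 path
# are hypothetical; the surrounding boilerplate is the standard Glue setup.
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Extract: read a table registered in the Glue Data Catalog.
orders = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db",      # hypothetical catalog database
    table_name="raw_orders",  # hypothetical table
)

# Transform: drop malformed rows and keep the analytic columns.
cleaned = orders.toDF().dropna(subset=["order_id", "amount"]).select(
    "order_id", "customer_id", "order_date", "amount"
)

# Load: write partitioned Parquet back to S3 for downstream consumers.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"  # hypothetical output path
)

job.commit()
```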
Posted 2 months ago
12 - 16 years
35 - 40 Lacs
Chennai
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.
Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.
Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 40 Lacs
Mumbai
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.
Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.
Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago
12 - 16 years
35 - 40 Lacs
Kolkata
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics resources, enabling efficient data processing and analytics.
Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data file formats such as Iceberg, Delta, or Hudi.
- Desirable: experience provisioning AWS data analytics resources with Terraform.
Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.
Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
Posted 2 months ago