6.0 - 10.0 years
5 - 10 Lacs
Bengaluru
Work from Office
As a member of the Support organization, your focus is to deliver post-sales support and solutions to the Oracle customer base while serving as an advocate for customer needs. This involves resolving post-sales non-technical customer inquiries via phone and electronic means, as well as technical questions regarding the use of and troubleshooting for our Electronic Support Services. As a primary point of contact for customers, you are responsible for facilitating customer relationships with Support and providing advice and assistance to internal Oracle employees on diverse customer situations and escalated issues. Career Level - IC3
Responsibilities: As a Sr. Support Engineer, you will be the technical interface to customers, Original Equipment Manufacturers (OEMs) and Value-Added Resellers (VARs) for resolution of problems related to the installation, recommended maintenance and use of Oracle products. You should have an understanding of all Oracle products in your competencies and in-depth knowledge of several products and/or platforms. You should also be highly experienced in multiple platforms and able to complete assigned duties with minimal direction from management. In this position, you will routinely act independently while researching and developing solutions to customer issues.
RESPONSIBILITIES:
• Manage and resolve Service Requests logged by customers (internal and external) on Oracle products and contribute to proactive support activities according to product support strategy and model
• Own and resolve problems and manage customer expectations throughout the Service Request lifecycle in accordance with global standards
• Work towards, adopt, and contribute to new processes and tools (diagnostic methodology, health checks, scripting tools, etc.)
• Contribute to Knowledge Management content creation and maintenance
• Work with development on product improvement programs (testing, SRP, BETA programs, etc.) as required
• Operate within Oracle business processes and procedures
• Respond to and resolve customer issues within Key Performance Indicator targets
• Maintain product expertise within the team
• Maintain an up-to-date and in-depth knowledge of new products released in the market for the supported product
QUALIFICATIONS:
• Bachelor's degree in Computer Science, Engineering or a related technical field
• 5+ years of proven professional and technical experience in Big Data Appliance (BDA), Oracle Cloud Infrastructure (OCI), Linux OS, and in areas like the Cloudera distribution for Hadoop (CDH), HDFS, YARN, Spark, Hive, Sqoop, Oozie and Intelligent Data Lake
• Excellent verbal and written skills in English
SKILLS & COMPETENCIES: Minimum technical skills: As a member of the Big Data Appliance (BDA) team, the focus is to troubleshoot highly complex technical issues related to the Big Data Appliance and in areas like the Cloudera distribution for Hadoop (CDH), HDFS, YARN, Spark, Hive, Sqoop, Oozie and Intelligent Data Lake. You should have good hands-on experience in Linux systems and Cloudera Hadoop architecture, administration and troubleshooting, with good knowledge of different technology products/services/processes. You will be responsible for resolving complex issues for BDA (Big Data Appliance) customers, including issues pertaining to Cloudera Hadoop, Big Data SQL, and BDA upgrades/patches and installs. The candidate will also collaborate with other teams like Hardware, Development, ODI, Oracle R, etc. to help resolve customers' issues on the BDA machine.
The candidate will also be responsible for interacting with customer counterparts on a regular basis and serving as the technology expert on the customer's behalf. Experience in a multi-tier architecture environment is required, along with a fundamental understanding of computer networking, systems, and database technologies.
Personal competencies:
• Desire to learn, or expand knowledge, about Oracle Database and associated products
• Customer focus
• Structured problem recognition and resolution
• Experience of contributing to a shared knowledge base
• Experience of Support-level work, like resolving customer problems and managing customer expectations and escalations
• Communication
• Planning and organizing
• Working globally
• Quality
• Team working
• Results oriented
Qualifications: Career Level - IC3
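Since much of this role is first-response triage on a CDH/BDA cluster, a sketch of the kind of health snapshot such an engineer might script is shown below. It is a minimal illustration under stated assumptions, not Oracle's internal diagnostic tooling: it assumes the standard `hdfs` and `yarn` CLIs are on the PATH of the support host and simply surfaces the report lines most often relevant to a Service Request.

```python
# Minimal cluster health snapshot, assuming the standard hdfs/yarn CLIs are
# installed locally. Illustrative sketch only, not Oracle's own tooling.
import subprocess

def run(cmd):
    """Run a CLI command and return its stdout (raises if the command fails)."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    hdfs_report = run(["hdfs", "dfsadmin", "-report"])  # capacity, live/dead DataNodes
    for line in hdfs_report.splitlines():
        # Surface the lines most often relevant when triaging a Service Request.
        if any(key in line for key in ("Dead datanodes", "Under replicated",
                                       "Missing blocks", "DFS Remaining")):
            print(line.strip())
    print(run(["yarn", "node", "-list", "-all"]))       # NodeManager states
```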
Posted 1 day ago
6.0 - 7.0 years
5 - 9 Lacs
Pune
Work from Office
Diverse Lynx is looking for a Cloudera Hadoop Administrator to join our dynamic team and embark on a rewarding career journey. Collaborate with cross-functional teams to achieve strategic outcomes. Apply subject expertise to support operations, planning, and decision-making. Utilize tools, analytics, or platforms relevant to the job domain. Ensure compliance with policies while improving efficiency and outcomes. Disclaimer: This job description has been sourced from a public domain and may have been modified by Naukri.com to improve clarity for our users. We encourage job seekers to verify all details directly with the employer via their official channels before applying.
Posted 4 days ago
5.0 - 10.0 years
2 - 3 Lacs
Bengaluru, Karnataka, India
On-site
The Role Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role: ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset: a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made, and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth. Your Future at Kyndryl Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here. Who You Are You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused: someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others. Required Skills and Experience The candidate should have a total of 8+ years of overall experience, with approximately 5+ years of relevant experience in Cloudera.
• Expertise in data mining, data storage and Extract-Transform-Load (ETL) processes (a small illustrative sketch follows this list)
• Cloudera Hadoop admin or support L2 knowledge/certification
• Hadoop Administration certification
• RedHat Linux administration
• MySQL/PostgreSQL knowledge
• VMware virtualization knowledge preferred
• Knowledge of Pure and Isilon storage and S3 file systems is an added advantage
• Must be able to engage with technical people but also with semi-technical senior stakeholders and the customer's application team
• Well-versed in networking, storage, and API creation and debugging
• Experience with the following would be useful but is not essential: containerization (Docker/Kubernetes), system admin and tuning of Linux-based servers, MySQL/Cassandra/ElasticSearch database administration, cloud platform solutions (AWS/GCP/Azure)
• Excellent problem-solving, analytical, and critical thinking skills
• Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail
• Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems
Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization
• Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
• Cloud platform certification, e.g., AWS Certified Data Analytics Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
• Understanding of social coding and Integrated Development Environments, e.g., GitHub and Visual Studio
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology
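As a concrete, deliberately small illustration of the ETL work this role describes, the sketch below cleanses a raw table into a refined one with PySpark on a Cloudera-style cluster. The Hive table and column names are hypothetical.

```python
# Minimal cleanse/validate step in PySpark; table and column names are
# hypothetical stand-ins for a real staging layer.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("raw-to-refined").enableHiveSupport().getOrCreate()

raw = spark.table("staging.customer_events")           # hypothetical Hive staging table
refined = (
    raw.dropDuplicates(["event_id"])                   # de-duplicate on the business key
       .filter(F.col("event_ts").isNotNull())          # drop rows missing the event time
       .withColumn("event_date", F.to_date("event_ts"))
)
refined.write.mode("overwrite").saveAsTable("refined.customer_events")
```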
Posted 6 days ago
3.0 - 6.0 years
8 - 12 Lacs
Mumbai, Mumbai Suburban, Mumbai (All Areas)
Work from Office
We are looking for a Hadoop Developer with hands-on experience in managing and developing data solutions on Hadoop ecosystems. The candidate should have strong technical expertise in data lake design, data pipelines, and real-time/batch data processing. Required Candidate profile: 3+ years managing and supporting Cloudera Hadoop on-premise clusters; work on data modelling, governance, migrations, and application development; proficiency in Spark, Hive, Impala, Kafka, and related tools. Perks and benefits: To be disclosed post interview
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
India
On-site
JOB DESCRIPTION Develop, test, and deploy data processing applications using Apache Spark and Scala. Optimize and tune Spark applications for better performance on large-scale data sets. Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions. Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions. Design and implement high-performance data processing and analytics solutions. Ensure data integrity, accuracy, and security across all processing tasks. Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies. Implement version control and CI/CD pipelines for Spark applications. Required Skills & Experience: Minimum 8 years of experience in application development. Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing. Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop. Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and NiFi. Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data. Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL. Knowledge of data warehousing concepts, dimensional modeling, and data lakes. Ability to troubleshoot and optimize Spark and Cloudera platform performance. Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).
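One routinely used technique for the "optimize and tune Spark applications" responsibility is broadcasting a small dimension table so a join avoids a shuffle-heavy sort-merge. The sketch below shows it in PySpark for consistency with the other examples on this page; the role itself is Scala-focused, and the equivalent broadcast hint exists in the Scala API. Table names are hypothetical.

```python
# Broadcast-join tuning sketch; dw.transactions and dw.merchant_dim are
# hypothetical Hive tables.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-tuning").enableHiveSupport().getOrCreate()

facts = spark.table("dw.transactions")   # large fact table
dims = spark.table("dw.merchant_dim")    # small dimension table
joined = facts.join(broadcast(dims), "merchant_id")  # ship dims to every executor
joined.explain()  # plan should show BroadcastHashJoin instead of SortMergeJoin
```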
Posted 2 weeks ago
5.0 - 10.0 years
15 - 27 Lacs
Hyderabad, Bengaluru
Work from Office
Hi, Greetings from Preludesys India Pvt Ltd!! We are hiring for one of our prestigious clients for the below position!!! Job Opportunity: Big Data Engineer Notice Period: Immediate - 30 Days Key Responsibilities: Design, develop, and maintain data pipelines using the Cloudera Hadoop ecosystem Implement real-time data streaming solutions with Apache Kafka Work with Dataiku and Apache Spark on the Cloudera platform for advanced analytics Develop scalable data solutions using Python, PySpark, and SQL Apply strong data modeling principles to support business intelligence and analytics Mandatory Skills: Hands-on experience with the Cloudera Platform Nice-to-Have Skills: Proficiency in Data Modeling Experience with Hadoop and Spark
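For the real-time streaming responsibility above, a minimal PySpark Structured Streaming job that lands a Kafka topic as Parquet might look like the sketch below. The broker address, topic, and paths are hypothetical, and the spark-sql-kafka connector must be on the classpath.

```python
# Kafka-to-Parquet ingestion sketch; broker, topic, and paths are hypothetical.
# Requires the spark-sql-kafka connector package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
         .option("subscribe", "payments")                    # hypothetical topic
         .load()
         .select(col("key").cast("string"), col("value").cast("string"))
)
query = (
    stream.writeStream.format("parquet")
          .option("path", "/data/landing/payments")          # hypothetical landing zone
          .option("checkpointLocation", "/chk/payments")     # required for recovery
          .start()
)
query.awaitTermination()
```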
Posted 3 weeks ago
7.0 - 10.0 years
7 - 11 Lacs
Mumbai, Maharashtra, India
Remote
Required Skills: 4+ years of experience with SQL (MySQL) is a must. 2+ years of hands-on experience working with the Cloudera Hadoop Distribution platform and Apache Spark. Strong understanding of the full development life cycle for backend database applications across RDBMS and distributed cloud platforms. Experience as a database developer writing SQL queries and DDL/DML statements, managing databases, writing stored procedures, triggers, and functions, and knowledge of DB internals. Knowledge of database administration, performance tuning, replication, backup, and data restoration. Comprehensive knowledge of Hadoop architecture and HDFS, to design, develop, document and architect Hadoop applications. Working knowledge of SQL, NoSQL, and data warehousing administration, along with MapReduce, Hive, Impala, Kafka, HBase, Pig, and Java. Experience processing large amounts of structured and unstructured data, and extracting and transforming data from remote data stores, such as relational databases or distributed file systems. Working expertise with Apache Spark, Spark streaming, Jupyter Notebook, and Python or Scala programming. Excellent communication skills, with the ability to tailor technical information for different audiences. Excellent teamwork skills, with the ability to self-start, share insights, ask questions, and report progress. Working knowledge of general database architectures, trends, and emerging technologies. Familiarity with caching, partitioning, storage engines, query performance tuning, indexes, and distributed computing frameworks. Working knowledge of data analytics or BI tools, like Looker Studio, Power BI, or other BI tools, is a must. Additional Desired Skills: Added advantage if you have exposure to advanced technology components like caching techniques, load balancers, distributed logging, distributed queries, queueing engines, containerization, HTML/CSS optimization, mobile app web server optimization, and cloud services. Strong attention to detail on every line of code, every unit test, and every commit message. Comfortable with rapid development cycles and tight schedules. Experience with Linux, GitHub, Jira is a plus. Good experience with benchmarking, optimization, and CI/CD pipelines. Experience with web paradigms such as REST, Responsive Web Design, Test-driven Development (TDD), Dependency Injection, and unit testing frameworks such as JUnit. Bachelor's degree or higher in Computer Science with relevant skills in mobile application development and web.
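A small sketch of the MySQL-to-Hadoop path this role implies: reading a table through Spark's JDBC source in parallel and landing it as Parquet. The connection details, partition bounds, and table names are hypothetical, and the MySQL JDBC driver must be on the classpath.

```python
# Parallel JDBC extract from MySQL into HDFS Parquet; all connection values
# are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-extract").getOrCreate()

orders = (
    spark.read.format("jdbc")
         .option("url", "jdbc:mysql://db-host:3306/sales")  # hypothetical host/schema
         .option("dbtable", "orders")
         .option("user", "etl_user")
         .option("password", "***")                         # use a secrets store in practice
         .option("partitionColumn", "order_id")             # parallelize the read
         .option("lowerBound", "1")
         .option("upperBound", "10000000")
         .option("numPartitions", "8")
         .load()
)
orders.write.mode("overwrite").parquet("/data/raw/orders")
```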
Posted 1 month ago
13.0 - 20.0 years
35 - 70 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Required Skills and Experience 13+ years is a must, with 7+ years of relevant experience working on Big Data Platform technologies. Proven experience in technical skills around Cloudera, Teradata, Databricks, MS Data Fabric, Apache Hadoop, BigQuery, and AWS Big Data solutions (EMR, Redshift, Kinesis, Qlik). Good domain experience in the BFSI or Manufacturing area. Excellent communication skills to engage with clients and influence decisions. High level of competence in preparing architectural documentation and presentations. Must be organized, self-sufficient, and able to manage multiple initiatives simultaneously. Must have the ability to coordinate with other teams independently. Work with both internal and external stakeholders to identify business requirements, and develop solutions to meet those requirements and build the opportunity. Note: If you have experience in the BFSI domain, the location will be Mumbai only. If you have experience in the Manufacturing domain, the location will be Mumbai and Bangalore only. Interested candidates can share their updated resumes on shradha.madali@sdnaglobal.com
Posted 1 month ago
6.0 - 10.0 years
20 - 30 Lacs
Egypt, Chennai, Bengaluru
Hybrid
We're Hiring: MLOps Engineer | Cairo, Egypt | Immediate Joiners Only. Share CVs to vijay.s@xebia.com. Location: Cairo, Egypt. Experience: 6-8 Years. Mode: Onsite. Joining: Immediate or Max 2 Weeks Notice. Relocation: Open to relocating to Egypt ASAP. Job Summary: Xebia is seeking a seasoned MLOps Engineer to scale and operationalize ML solutions for our strategic client in Cairo. This is an onsite role, perfect for professionals who are ready to deploy cutting-edge ML pipelines in real-world enterprise environments. Key Responsibilities: Design & manage end-to-end scalable, reliable ML pipelines. Build CI/CD pipelines with Azure DevOps. Deploy and track ML models using MLflow. Work on large-scale data with Cloudera/Hadoop (Hive, Spark, HDFS). Support Knowledge Graphs, metadata enrichment, and model lineage. Collaborate with DS & engineering teams to ensure governance and auditability. Implement model performance monitoring, drift detection, and data quality checks. Support DevOps automation aligned with enterprise-grade compliance standards. Required Skills: 6-8 years in MLOps / Machine Learning Engineering. Hands-on with MLflow, Azure DevOps, Python. Deep experience with Cloudera, Hadoop, Spark, Hive. Exposure to Knowledge Graphs and containerization (Docker/Kubernetes). Familiar with TensorFlow, scikit-learn, or PyTorch. Understanding of data security, access controls, audit logging. Preferred: Azure certifications (e.g., Azure Data Engineer / AI Engineer Associate). Experience with Apache NiFi, Airflow, or similar tools. Background in regulated sectors like BFSI, Healthcare, or Pharma. Soft Skills: Strong problem-solving & analytical thinking. Clear communication & stakeholder engagement. Passion for automation & continuous improvement. Additional Information: Only apply if you can join within 2 weeks or are an immediate joiner, you're open to relocating to Cairo, Egypt ASAP, and you hold a valid passport. Visa-on-arrival/B1/Schengen holders from the MEA region preferred. To Apply: Send your updated CV to vijay.s@xebia.com along with: Full Name, Total Experience, Current CTC, Expected CTC, Current Location, Preferred Xebia Location (Cairo), Notice Period / Last Working Day (if serving), Primary Skills, LinkedIn Profile, Valid Passport No. Be part of a global transformation journey and make AI work at scale! #MLOps #Hiring #AzureDevOps #MLflow #CairoJobs #ImmediateJoiners #DataEngineering #Cloudera #Hadoop #XebiaCareers
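As a rough illustration of the "deploy and track ML models using MLflow" responsibility, the sketch below logs a stand-in scikit-learn model to a tracking server. The tracking URI and experiment name are hypothetical.

```python
# MLflow tracking sketch; the tracking URI, experiment name, and model are
# hypothetical stand-ins, not the client's actual setup.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

mlflow.set_tracking_uri("http://mlflow.internal:5000")  # hypothetical server
mlflow.set_experiment("churn-model")                    # hypothetical experiment

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```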
Posted 1 month ago
2.0 - 7.0 years
5 - 12 Lacs
Pune
Work from Office
Job Responsibilities: About the Role We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems (see the sketch below) • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Desired Skills: Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, and Ranger; strong Linux fundamentals and scripting (Python, Shell); experience with Apache NiFi, Airflow, YARN, and Zookeeper; proficiency in monitoring and observability tools: ELK Stack, Prometheus, Loki; working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines; strong SQL skills (Oracle/Exadata preferred); familiarity with DataHub, DataMesh, and security best practices is a plus
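A toy version of the self-healing automation referenced in the responsibilities above: poll a health endpoint and restart a service unit after repeated failures. The endpoint, systemd unit, and thresholds are all hypothetical.

```python
# Minimal self-healing watchdog sketch; URL, service name, and thresholds are
# hypothetical placeholders, not a production design.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/health"   # hypothetical app health endpoint
SERVICE = "data-pipeline.service"             # hypothetical systemd unit
FAILURES_BEFORE_RESTART = 3

failures = 0
while True:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=5) as resp:
            healthy = resp.status == 200
    except OSError:
        healthy = False
    failures = 0 if healthy else failures + 1
    if failures >= FAILURES_BEFORE_RESTART:
        subprocess.run(["systemctl", "restart", SERVICE], check=False)
        failures = 0  # reset the counter and keep watching
    time.sleep(30)
```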
Posted 2 months ago
4.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Role & responsibilities About the Role We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Preferred candidate profile: Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, and Ranger; strong Linux fundamentals and scripting (Python, Shell); experience with Apache NiFi, Airflow, YARN, and Zookeeper; proficiency in monitoring and observability tools: ELK Stack, Prometheus, Loki; working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines; strong SQL skills (Oracle/Exadata preferred); familiarity with DataHub, DataMesh, and security best practices is a plus
Posted 2 months ago
5.0 - 9.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Job Title - PySpark Data Engineer. We're growing our Data Engineering team at ValueLabs and looking for a talented individual to build scalable data pipelines on Cloudera Data Platform! Experience - 5 to 9 years. PySpark Job Description: Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy (see the sketch below). Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem. Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes. Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives. Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations. Qualifications Education and Experience Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Technical Skills PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
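A compact example of the read-transform-write pipelines this role centres on, written in PySpark and partitioned for Hive/Impala consumption on CDP. The paths, schema, and table names are hypothetical.

```python
# Daily ETL pipeline skeleton; source path, columns, and target table are
# hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-etl").enableHiveSupport().getOrCreate()

raw = spark.read.json("/data/landing/clickstream/2024-01-01/")  # hypothetical source
clean = (
    raw.filter(F.col("user_id").isNotNull())      # basic data quality gate
       .withColumn("dt", F.to_date("event_time"))
       .repartition("dt")                         # align file layout with the partition key
)
(clean.write.mode("overwrite")
      .partitionBy("dt")                          # enables Hive/Impala partition pruning
      .saveAsTable("analytics.clickstream"))
```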
Posted 2 months ago
2.0 - 8.0 years
2 - 8 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Dynamic Yield, a Mastercard company, is seeking a Senior Data Scientist to join our Product Data & Analytics team. This team builds crucial internal analytic partnerships, focusing on business health, portfolio and revenue optimization opportunities, initiative tracking, and new product development/Go-To-Market strategies. We are a hands-on global team providing scalable end-to-end data solutions, deeply influencing Mastercard's decisions through data-driven insights. Are you excited by the immense value data assets bring to an organization? Are you an evangelist for data-driven decision-making, motivated to build large-scale analytical capabilities for end-users across six continents, and aspiring to be the go-to resource for data analytics within a global corporation? If you have a knack for seeing solutions in sprawling datasets and the business mindset to convert insights into strategic opportunities, we want to hear from you. Role & Responsibilities As a Senior Data Scientist, you will: Data Solution Architecture & Development: Work closely with global & regional teams to architect, develop, and maintain data engineering, advanced reporting, and data visualization capabilities on large volumes of data. This will support analytics and reporting needs across various products, markets, and services. Data Analysis & Triangulation: Obtain data from multiple sources, collate, analyze, and triangulate information to develop reliable fact bases. Effectively use tools to manipulate large-scale databases, synthesizing data insights. Strategic Insights & Optimization: Execute cross-functional projects using advanced modeling and analysis techniques to discover insights that will guide strategic decisions and uncover optimization opportunities. Reporting & Dashboarding: Build, develop, and maintain data models, reporting systems, dashboards (e.g., Tableau/PowerBI), and performance metrics that support key business decisions. Intellectual Capital & Best Practices: Extract intellectual capital from engagement work and actively share tools, methods, and best practices across projects. Data Presentation: Provide first-level insights, conclusions, and assessments, presenting findings via Tableau/PowerBI dashboards, Excel, and PowerPoint. Data Quality: Apply quality control, data validation, and cleansing processes to new and existing data sources. Mentorship: Lead, mentor, and guide more junior team members, fostering their growth and development. Stakeholder Communication: Communicate results and business impacts of insight initiatives to stakeholders across leadership, technology, sales, marketing, and product teams. All About You Experience: Proven experience in data management, data mining, data analytics, data reporting, data product development, and quantitative analysis. Industry Knowledge (Plus): Experience within a Financial Institution or the Payments industry is a plus. Data Presentation: Experience presenting data findings in a readable and insight-driven format, including building support decks. SQL Skills: Advanced SQL skills, with the ability to write optimized queries for large datasets (Big Data). Platforms/Environments: Experience on platforms/environments such as Cloudera Hadoop, Big Data technology stack, SQL Server, and Microsoft BI Stack. Data Visualization: Experience with data visualization tools such as Looker, Tableau, and/or PowerBI. Programming (Plus): Experience with Python, R, and Databricks is a plus.
Microsoft BI Stack (Advantage): Experience on SQL Server Integration Services (SSIS), SQL Server Analysis Services (SSAS), and SQL Server Reporting Services (SSRS) will be an added advantage. Problem Solving: Excellent problem-solving, quantitative, and analytical skills. Technical Acumen: In-depth technical knowledge, drive, and the ability to learn new technologies. Attention to Detail: Strong attention to detail and a commitment to quality. Teamwork & Communication: A strong team player with excellent communication (oral/written) skills. Stakeholder Interaction: Must be able to interact effectively with management and internal stakeholders to collect requirements. Adaptability: Must be able to perform effectively in a team, use sound judgment, and operate under ambiguity. Self-Motivation: Self-motivated, operating with a sense of urgency. Education Bachelor's or Master's Degree in Computer Science, Information Technology, Engineering, Mathematics, or Statistics. Additional Competencies Excellent English, quantitative, technical, and communication (oral/written) skills. Analytical/Problem Solving. Strong attention to detail and quality. Creativity/Innovation. Self-motivated, operates with a sense of urgency. Project Management/Risk Mitigation. Able to prioritize and perform multiple tasks simultaneously.
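One example of the "optimized queries for large datasets" skill called out above: picking each account's latest transaction with a window function rather than a per-row correlated subquery, which costs a single scan on engines such as Hive or Impala. The schema is hypothetical, and the query is shown through PySpark for consistency with the other sketches on this page.

```python
# Window-function rewrite sketch; payments.transactions is a hypothetical table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-example").enableHiveSupport().getOrCreate()

latest_txn = spark.sql("""
    SELECT account_id, txn_ts, amount
    FROM (
        SELECT account_id, txn_ts, amount,
               ROW_NUMBER() OVER (PARTITION BY account_id
                                  ORDER BY txn_ts DESC) AS rn
        FROM payments.transactions
    ) t
    WHERE rn = 1   -- one pass over the table instead of a self-join
""")
latest_txn.show()
```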
Posted 3 months ago
2.0 - 7.0 years
2 - 7 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Dynamic Yield, a Mastercard company, is seeking an Associate Analyst - Product Data & Analytics to join our dynamic global team. The Product Data & Analytics team builds internal analytic partnerships, focusing on business health, portfolio and revenue optimization, initiative tracking, and new product development/Go-To-Market strategies. We are a hands-on global team of analytics engineers, data architects, BI developers, data analysts, and data scientists, fully managing our own data assets and solutions to provide scalable end-to-end data solutions. If you're passionate about the value of data assets, an evangelist for data-driven decision-making, and motivated to build large-scale analytical capabilities supporting users across six continents, this role is for you. The ideal candidate has a knack for seeing solutions in sprawling datasets and the business acumen to convert insights into strategic opportunities for our company. Role and Responsibilities As an Associate Analyst - Product Data & Analytics, you will: Data Platform Development: Be part of a strategic initiative to create a Single Source of Truth (SSOT) data platform for all transactional data assets within the organization. Data Model Design: Work alongside analytics engineers, data analysts, and data engineers to evaluate current use cases, define the data platform design including the logical/conceptual data model, data mappings, and other platform documentation. Collaboration & Build: Collaborate with data architects and data engineers to ensure platform build, and be responsible for User Acceptance Testing (UAT) before implementation. Requirements & Design: Collaborate with team members to collect business requirements, define successful analytics outcomes, and design data models. Data Ownership: Serve as the Directly Responsible Individual (DRI) for major sections of the platform's logical/conceptual data model. Documentation: Define data mappings, data dictionaries, data quality, and UAT documentation. Data Catalog: Maintain the Data Catalog, a scalable resource to support Self-Service and Single-Source-of-Truth analytics. Technical Specifications: Translate business requirements into tangible technical solution specifications and high-quality, on-time deliverables. Data Manipulation & Quality: Effectively use tools to manipulate large-scale databases, synthesizing data insights. Apply quality control, data validation, and cleansing processes to new and existing data sources. DataOps & Code Standards: Implement the DataOps philosophy in all your work. Craft code that meets our internal standards for style, maintainability, and best practices for a high-scale database environment. Maintain and advocate for these standards through code review. Cross-functional Collaboration: Collaborate with cross-functional teams, external vendor teams, and technology suppliers to ensure the delivery of high-quality services. All About You Experience: 2+ years of experience in data analysis, data mining, data analytics, data reporting, and data product development. Industry Knowledge (Plus): Financial Institution or Payments experience is a plus. Proactive & Driven: Proactive self-starter, actively seeking initiatives to advance. Data Architecture: Understanding of Data architecture and some experience in building logical/conceptual data models or creating data mapping documentation. Data Quality: Experience with data validation, quality control, and cleansing processes for new and existing data sources. 
SQL Skills: Advanced SQL skills, with the ability to write optimized queries for large datasets. Platforms/Environments: Experience on Platforms/Environments such as Cloudera Hadoop, Big Data technology stack, SQL Server, Microsoft BI Stack, Cloud. Programming Exposure (Plus): Exposure to Python, Scala, Spark, Cloud, and other related technologies is a plus. Data Visualization (Plus): Experience with data visualization tools such as Tableau, Domo, and/or PowerBI is a plus. Problem Solving: Excellent problem-solving, quantitative, and analytical skills. Technical Aptitude: In-depth technical knowledge, drive, and the ability to learn new technologies. Detail & Quality: Strong attention to detail and quality. Teamwork & Communication: Strong team player with excellent communication skills. Interpersonal Skills: Must be able to interact effectively with management and internal stakeholders to collect requirements. Adaptability: Must be able to perform effectively in a team, use sound judgment, and operate under ambiguity.
Posted 3 months ago
5.0 - 10.0 years
4 - 8 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Site Reliability Engineer Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred) • Familiarity with DataHub, DataMesh, and security best practices is a plus • Strong problem-solving and debugging mindset • Ability to work under pressure in a fast-paced environment • Excellent communication and collaboration skills • Ownership, customer orientation, and a bias for action
Posted 3 months ago
8.0 - 13.0 years
22 - 37 Lacs
Pune
Hybrid
Role & responsibilities: Role - Hadoop Admin + Automation. Experience - 8+ years. Grade - AVP. Location - Pune. Mandatory Skills: Hadoop Admin, Automation (shell scripting or any programming language such as Java/Python), Cloudera/AWS/Azure/GCP. Good to have: DevOps tools. Primary focus will be on candidates with Hadoop admin and automation experience.
Posted 3 months ago
4.0 - 9.0 years
4 - 9 Lacs
Pune, Maharashtra, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred)
Posted 3 months ago
4.0 - 9.0 years
4 - 9 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred)
Posted 3 months ago
4.0 - 9.0 years
4 - 9 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred)
Posted 3 months ago
4.0 - 9.0 years
5 - 8 Lacs
Gurugram
Work from Office
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred)
Posted 3 months ago
5 - 6 years
7 - 8 Lacs
Gurugram
Work from Office
Site Reliability Engineer Job Description: Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain. Key Responsibilities • Ensure platform uptime and application health as per SLOs/KPIs • Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. • Debug and resolve complex production issues, performing root cause analysis • Automate routine tasks and implement self-healing systems • Design and maintain dashboards, alerts, and operational playbooks • Participate in incident management, problem resolution, and RCA documentation • Own and update SOPs for repeatable processes • Collaborate with L3 and Product teams for deeper issue resolution • Support and guide L1 operations team • Conduct periodic system maintenance and performance tuning • Respond to user data requests and ensure timely resolution • Address and mitigate security vulnerabilities and compliance issues Technical Skillset • Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger • Strong Linux fundamentals and scripting (Python, Shell) • Experience with Apache NiFi, Airflow, Yarn, and Zookeeper • Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki • Working knowledge of Kubernetes, Docker, Jenkins CI/CD pipelines • Strong SQL skills (Oracle/Exadata preferred) • Familiarity with DataHub, DataMesh, and security best practices is a plus • Strong problem-solving and debugging mindset • Ability to work under pressure in a fast-paced environment • Excellent communication and collaboration skills • Ownership, customer orientation, and a bias for action
Posted 4 months ago
12 - 16 years
35 - 40 Lacs
Bengaluru
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala (see the sketch below). - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta, or Hudi. - Desirable to have experience provisioning AWS data analytical resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
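A minimal skeleton of the Glue/PySpark ETL work described above, reading from the Glue Data Catalog and writing partitioned Parquet to S3. The job-argument names, catalog entries, and bucket are hypothetical.

```python
# AWS Glue job skeleton; database, table, and S3 bucket names are hypothetical.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from the Glue Data Catalog, filter, and write partitioned Parquet to S3.
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="raw", table_name="orders")            # hypothetical catalog entries
df = dyf.toDF().filter("order_status = 'COMPLETE'")
df.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/refined/orders/")          # hypothetical bucket
job.commit()
```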
Posted 4 months ago
12 - 16 years
35 - 40 Lacs
Chennai
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta, or Hudi. - Desirable to have experience provisioning AWS data analytical resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 4 months ago
12 - 16 years
35 - 40 Lacs
Mumbai
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta, or Hudi. - Desirable to have experience provisioning AWS data analytical resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 4 months ago
12 - 16 years
35 - 40 Lacs
Kolkata
Work from Office
As an AWS Data Engineer at our organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance practices. - Optimize Spark jobs for cost and performance efficiency. - Automate ETL workflows with AWS Step Functions and Lambda. - Collaborate with data scientists and analysts on data needs. - Maintain documentation for data architecture and pipelines. - Experience with open-source big data table formats such as Iceberg, Delta, or Hudi. - Desirable to have experience provisioning AWS data analytical resources with Terraform. Must-Have Skills: - AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development. Good-to-Have Skills: - Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg
Posted 4 months ago