4.0 - 7.0 years
9 - 13 Lacs
gurugram, bengaluru
Work from Office
Skills: Big Data, PySpark, Hive, Spark Optimization. Good to have: GCP.
Posted Date not available
6.0 - 8.0 years
10 - 15 Lacs
hyderabad
Hybrid
Key Responsibilities:
- Design, build, and optimize large-scale data processing systems using distributed computing frameworks such as Hadoop, Spark, and Kafka.
- Develop and maintain data pipelines (ETL/ELT) to support analytics, reporting, and machine learning use cases.
- Integrate data from multiple structured and unstructured sources and ensure data quality and consistency.
- Collaborate with cross-functional teams to understand data needs and deliver data-driven solutions.
- Implement data governance, data security, and privacy best practices.
- Monitor performance and troubleshoot issues across the data infrastructure.
- Stay updated with the latest trends and technologies in big data and cloud computi...
Posted Date not available
10.0 - 13.0 years
30 - 40 Lacs
pune
Work from Office
Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.
Overview: We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.
Role & responsibilities: Architect and deploy Apache Kaf...
Posted Date not available
4.0 - 7.0 years
12 - 16 Lacs
hyderabad, bengaluru, india
Hybrid
Experience: 4 to 7 years. Location: Bangalore or Hyderabad. Position: Permanent FTE.
Must-have skills: Python, PySpark, Hadoop, Hive, Big Data Technologies, Scala, SQL, Airflow, Kafka.
Required candidate profile: Looking for strong immediate joiners, or candidates with a last working date in August, from Hyderabad, Bangalore, or South India, for Big Data Developer positions with a US banking major client.
Posted Date not available
6.0 - 10.0 years
25 - 40 Lacs
noida
Work from Office
Data Pipeline Development: Design and develop efficient big data pipelines (batch as well as streaming) using Apache Spark and Trino, ensuring timely and accurate data delivery.
Collaboration and Communication: Work closely with data scientists, analysts, and stakeholders to understand data requirements, perform exploratory data analysis to recommend the best feature attributes and data models for AI model training, and deliver high-quality data solutions.
Data Exploration: Analyse customer data and patterns and suggest BFSI use cases such as insights, AI, and GenAI use cases.
Data Quality and Security: Ensure data quality, integrity, and security across all data platforms, maintaining robust data gov...
Posted Date not available
6.0 - 10.0 years
18 - 32 Lacs
pune, chennai
Work from Office
Mandatory: Experience and knowledge in designing, implementing, and managing non-relational data stores (e.g., MongoDB, Cassandra, DynamoDB), focusing on flexible schema design, scalability, and performance optimization for handling large volumes of unstructured or semi-structured data. The client primarily needs a NoSQL DB, either MongoDB or HBase.
Data Pipeline Development: Design, develop, test, and deploy robust, high-performance, and scalable ETL/ELT data pipelines using Scala and Apache Spark to ingest, process, and transform large volumes of structured and unstructured data from diverse sources.
Big Data Expertise: Leverage expertise in the Hadoop ecosystem (HDFS, Hive, etc.) and distributed ...
Posted Date not available
2.0 - 5.0 years
4 - 7 Lacs
pune
Work from Office
Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about.
ZS's Platform Development team designs, implements, tests, and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact, and contribute to bringing better health outcomes.
What you'll do: As part of our full-s...
Posted Date not available
4.0 - 6.0 years
32 - 35 Lacs
bengaluru
Work from Office
Overview: Annalect is currently seeking a data engineering lead to join our technology team. In this role, you will build data pipelines and develop data set processes. We are looking for people who share a passion for technology, design and development, and data, and for fusing these disciplines together to build cool things. You will lead teams working on one or more software and data products on the Annalect Engineering Team. You will participate in the technical architecture, design, and development of software products, as well as in the research and evaluation of new technical solutions. You will also help us drive the vision forward for data engineering projects by helping us exte...
Posted Date not available