3.0 - 6.0 years
5 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Hadoop Developer with 3 to 6 years of experience to join our team in Chennai. The ideal candidate will have expertise in Scala, SQL, Unix, Hive, and Spark.
Roles and Responsibilities:
- Design, develop, and implement scalable data processing solutions using Hadoop technologies.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines using Scala and Spark.
- Troubleshoot and resolve complex technical issues related to Hadoop applications.
- Participate in code reviews and contribute to improving overall code quality.
- Stay up-to-date with the latest trends and technologies in Hadoop development...
Posted 1 month ago
3.0 - 7.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Immediate openings for PySpark.
Experience: 5+ years
Skill: PySpark
Location: Bangalore
Notice Period: Immediate
- Experience with a cloud platform, e.g., AWS, GCP, Azure, etc.
- Experience with distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala.
- Performance tuning: optimize SQL and PySpark for performance.
- Airflow workflow scheduling tool for creating data pipelines.
- GitHub source control tool and experience with creating/configuring Jenkins pipelines.
Posted 1 month ago
4.0 - 9.0 years
15 - 30 Lacs
Gurugram
Work from Office
Role Description: As a Senior Cloud Data Platform (AWS) Specialist at Incedo, you will be responsible for designing, deploying, and maintaining cloud-based data platforms on AWS. You will work with data engineers, data scientists, and business analysts to understand business requirements and design scalable, reliable, and cost-effective solutions that meet those requirements.
Roles & Responsibilities:
- Designing, developing, and deploying cloud-based data platforms using Amazon Web Services (AWS)
- Integrating and processing large amounts of structured and unstructured data from various sources
- Implementing and optimizing ETL processes and data pipelines
- Developing and maintaining secur...
Posted 1 month ago
5.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
Role Overview: You will be responsible for supporting process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.
Key Responsibilities:
- Possess expertise in Azure Data Factory as a primary skill and Azure Databricks Spark (PySpark, SQL)
- Must-have skills include being cloud certified in one of the following categories: Azure Data Engineer, Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, curation, semantic modelling/optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources such as ...
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Looking to onboard a skilled Hadoop Admin with 5-12 years of experience to join our team in Bangalore. The ideal candidate will have a strong background in Big Data technologies and programming languages such as Python and Scala.
Roles and Responsibilities:
- Design, develop, and implement scalable data processing systems using Hadoop and Spark.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines and architectures.
- Troubleshoot and resolve complex technical issues related to data processing and storage.
- Ensure high availability and performance of data systems and applications.
- Participate in code reviews and contribute...
Posted 1 month ago
6.0 - 10.0 years
30 - 35 Lacs
Bengaluru
Work from Office
- Design and implement an end-to-end data solution architecture tailored to enterprise analytics and operational needs.
- Build, maintain, and optimize data pipelines and transformations using Python, SQL, and Spark.
- Manage large-scale data storage and processing with Iceberg, Hadoop, and HDFS.
- Develop and maintain dbt models to ensure clean, reliable, and well-structured data.
- Implement robust data ingestion processes, integrating with third-party APIs and on-premise systems.
- Collaborate with cross-functional teams to align on business goals and technical requirements.
- Contribute to documentation and continuously improve engineering and data quality processes.
Qualifications:
- 4+ years of experience...
Posted 1 month ago
6.0 - 11.0 years
10 - 20 Lacs
Hyderabad
Hybrid
JD:
Total years of experience: 6+
Relevant years of experience: 6+
Detailed JD (Roles and Responsibilities):
- 6+ years of experience in Kafka, Spark, Scala, and PySpark.
- Strong knowledge of implementing and maintaining scalable streaming and batch data solutions with high throughput and strict SLAs.
- Extensive experience working in production with Kafka, Spark, and Kafka Streams, including best practices, observability, optimization, and performance tuning.
- Knowledge of Python and Scala, including best practices, architecture, and dependency management.
- Collaborate with cross-functional teams to understand data requirements and integrate data from multiple sources, ensuring data consistency and quality.
Mandatory ...
Posted 1 month ago
6.0 - 11.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skill: Data Engineer
Notice Period: Immediate
Employment Type: Contract
Job Description:
- Experience with a cloud platform, e.g., AWS, GCP, Azure, etc.
- Experience with distributed technology tools, viz. SQL, Spark, Python, PySpark, Scala.
- Performance tuning: optimize SQL and PySpark for performance.
- Airflow workflow scheduling tool for creating data pipelines.
- GitHub source control tool and experience with creating/configuring Jenkins pipelines.
- Experience with EMR/EC2, Databricks, etc.
- DWH tools, incl. SQL databases, Presto, and Snowflake.
- Streaming, serverless architecture.
Posted 1 month ago
6.0 - 11.0 years
6 - 9 Lacs
Hyderabad, Pune
Work from Office
- At least 8+ years of experience in any of the ETL tools: Prophecy, DataStage 11.5/11.7, Pentaho, etc.
- At least 3 years of experience in PySpark with GCP (Airflow, Dataproc, BigQuery), capable of configuring data pipelines.
- Strong experience in writing complex SQL queries to perform data analysis on databases: SQL Server, Oracle, Hive, etc.
- Possess the following technical skills: SQL, Python, PySpark, Hive, ETL, Unix, Control-M (or similar scheduling tools).
- Ability to work independently on specialized assignments within the context of project deliverables.
- Take ownership of providing solutions and tools that iteratively increase engineering efficiencies.
- Design should help embed standard pr...
Posted 1 month ago
5.0 - 10.0 years
6 - 10 Lacs
Hyderabad
Hybrid
Big Data Engineer/Developer
Skills: Spark-Scala, HQL, Hive, Control-M, Jenkins, Git.
Technical analysis and, to some extent, business analysis (knowledge of banking products, credit cards, and their transactions).
Notice Period: Immediate
Employment Type: Contract
Posted 1 month ago
5.0 - 10.0 years
3 - 6 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
We are looking for skilled Hadoop Developers with 5-10 years of experience to join our team in Bangalore, Kolkata, Hyderabad, and Pune. The ideal candidate should have strong proficiency in Hadoop, Scala, Spark, and SQL.
Roles and Responsibilities:
- Design, develop, and implement scalable data processing systems using Hadoop.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines using Spark and Scala.
- Troubleshoot and resolve complex technical issues related to Hadoop.
- Participate in code reviews and ensure high-quality code standards.
- Stay updated with the latest trends and technologies in Hadoop development.
Job...
Posted 1 month ago
4.0 - 7.0 years
2 - 5 Lacs
Bengaluru
Work from Office
We are looking for a skilled Data Engineer with 4 to 7 years of experience to join our team in Bangalore. The ideal candidate will have expertise in Python, SQL, AWS, Spark, and Big Data processing frameworks.
Roles and Responsibilities:
- Design and develop scalable data pipelines using Python, Scala, or Java.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems using Hadoop and Big Data processing frameworks.
- Ensure data quality and integrity by implementing robust testing and validation procedures.
- Optimize data storage and retrieval processes for improved performance and efficiency.
- Participate in code reviews t...
Posted 1 month ago
7.0 - 10.0 years
2 - 6 Lacs
Bengaluru
Work from Office
We are looking for a skilled professional with 7-10 years of experience to join our team as a StreamSets expert in Bangalore.
Roles and Responsibilities:
- Design, develop, and implement StreamSets solutions to meet business requirements.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain Unix scripts for automation and data processing.
- Conduct root cause analysis (RCA) to troubleshoot issues and optimize system performance.
- Set up SSL certificates and manage key management systems.
- Provide technical support and training to junior team members.
Job Requirements:
- Strong knowledge of StreamSets administration, Unix scripting, and root cause analysis (RCA).
- Experience...
Posted 1 month ago
5.0 - 10.0 years
4 - 8 Lacs
Chennai, Gurugram, Bengaluru
Work from Office
We are looking for a skilled professional with 5-10 years of experience to join our team as a GCP Databases expert in Bangalore, Hyderabad, Chennai, Pune, Noida, and Gurgaon. The ideal candidate will have a strong background in GCP migration, database migration, and native GCP databases.
Roles and Responsibilities:
- Design and implement scalable and efficient GCP databases for large-scale applications.
- Develop and maintain database architectures using Cloud SQL, MySQL, PostgreSQL, and SQL Server.
- Collaborate with cross-functional teams to identify and prioritize database requirements.
- Ensure high availability and performance of GCP databases through monitoring and optimization techniques.
- Implement...
Posted 1 month ago
8.0 - 13.0 years
3 - 7 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are looking for a skilled professional with 8 to 18 years of experience in IT, specifically in Pega and CDH components, to join our team. The ideal candidate will have a strong background in building and updating Decisioning components as well as Analytical models (Adaptive, Predictive, VBD). This position is available PAN India, including Bangalore, Hyderabad, Chennai, Pune, and Noida.
Roles and Responsibilities:
- Design and develop Pega applications using various components such as Rules Process Commander, Business Data Objects, and User Interface Components.
- Build and update Decisioning components to meet business requirements.
- Collaborate with cross-functional teams to identify and pr...
Posted 1 month ago
4.0 - 6.0 years
2 - 5 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are looking for a skilled Data Engineer with 4 to 7 years of experience to join our team in Bangalore, Hyderabad, Chennai, and Pune. The ideal candidate will have expertise in Python, PySpark, SQL, Hadoop, and NoSQL databases.
Roles and Responsibilities:
- Design and develop scalable data pipelines using Azure ML, Azure Data Factory, and MLOps.
- Collaborate with cross-functional teams to integrate data from various sources into a unified platform.
- Develop and maintain large-scale data warehouses using Hadoop and NoSQL databases.
- Implement CI/CD pipelines for automated testing and deployment of data engineering projects.
- Troubleshoot and resolve complex technical issues related to data engineering...
Posted 1 month ago
3.0 - 8.0 years
3 - 6 Lacs
Hyderabad, Pune
Work from Office
We are looking for a skilled PySpark Data Engineer with 3 to 5 years of experience. The ideal candidate will have expertise in Hadoop, Spark/PySpark, Hive, YARN, Scala development, and data modeling. This position is based in Pune and Hyderabad.
Roles and Responsibilities:
- Design and develop scalable data pipelines using PySpark and Spark.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems and architectures.
- Ensure data quality and integrity by implementing robust testing and validation procedures.
- Participate in code reviews and contribute to improving overall code quality.
- Troubleshoot and resolve complex technical...
Posted 1 month ago
5.0 - 10.0 years
2 - 5 Lacs
Chennai, Coimbatore, Bengaluru
Work from Office
We are looking for a skilled professional with 5-10 years of experience to join our team as a Databricks expert in Pune, Chennai, Coimbatore, and Bangalore.
Roles and Responsibilities:
- Design and develop data pipelines using Azure Databricks and Azure Data Factory.
- Collaborate with cross-functional teams to integrate data from various sources into a unified data warehouse.
- Develop and maintain ETL processes using PySpark and SQL.
- Implement data modeling principles to optimize data storage and retrieval.
- Troubleshoot and resolve issues related to data quality and performance.
- Ensure compliance with industry standards and best practices for data management.
Job Requirements:
- Strong experience in ...
Posted 1 month ago
7.0 - 10.0 years
3 - 6 Lacs
Hyderabad, Bengaluru
Work from Office
We are looking for skilled Hadoop Developers with 7-10 years of experience to join our team in Bangalore and Hyderabad.
Roles and Responsibilities:
- Design, develop, and implement scalable data processing systems using Hadoop and Spark.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines and architectures.
- Troubleshoot and optimize system performance issues.
- Ensure data quality and integrity through data validation and testing procedures.
- Participate in code reviews and contribute to improving overall code quality.
Job Requirements:
- Strong proficiency in Hadoop, Scala, Spark, and SQL.
- Experience working with big ...
Posted 1 month ago
5.0 - 7.0 years
3 - 6 Lacs
Hyderabad, Pune
Work from Office
We are looking for a skilled PySpark Data Engineer with 5 to 7 years of experience, located in Pune and Hyderabad. The ideal candidate will have expertise in PySpark, Scala, Hive, Airflow, Control-M, CI/CD, Python, Hadoop, and data modelling, along with experience with Jira, Confluence, GitHub, or other similar Agile/Scrum technologies.
Roles and Responsibilities:
- Design and develop scalable data pipelines using PySpark and Airflow.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data systems using Hadoop and data modelling techniques.
- Troubleshoot and resolve complex technical issues related to data processing and analysis.
- Implement...
Posted 1 month ago
7.0 - 12.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Looking for a skilled professional with expertise in Spark and Delta Lake to join our team. The ideal candidate will have 7-12 years of experience.
Roles and Responsibilities:
- Design and implement data pipelines using Spark, with Delta Lake as the data storage layer.
- Develop and optimize Spark applications using Python, Scala, or Java.
- Work with large datasets to ensure data quality and integrity.
- Implement data partitioning and schema evolution techniques using Delta Lake.
- Collaborate with cross-functional teams to integrate Spark applications with ETL processes.
- Troubleshoot and resolve issues related to Spark application performance and data quality.
Job Requirements:
- Strong understanding of Spark...
Posted 1 month ago
6.0 - 9.0 years
6 - 10 Lacs
Hyderabad, Telangana
Work from Office
We are looking for a skilled professional with 6-9 years of experience to lead and manage key projects, drive institutional fraud-related projects, and provide expertise on fraud and financial crime risk management. The ideal candidate will have a strong background in managing Falcon upgrades, AML updates, and merchant fraud migrations.
Roles and Responsibilities:
- Lead and manage key projects such as the Falcon upgrade across the retail, institutional, and merchant sectors.
- Manage Falcon upgrades and drive institutional fraud-related projects.
- Provide expertise on fraud and financial crime risk management.
- Collaborate with financial institutions to develop effective fraud detection and prevention stra...
Posted 1 month ago
5.0 - 10.0 years
2 - 5 Lacs
Hyderabad, Bengaluru
Work from Office
We are looking for a skilled professional with expertise in StreamSets and Unix scripting to join our team. The ideal candidate will have 5-10 years of experience in the relevant field.
Roles and Responsibilities:
- Design, develop, and implement StreamSets solutions to meet business requirements.
- Troubleshoot and resolve technical issues related to StreamSets.
- Collaborate with cross-functional teams to ensure seamless integration of StreamSets with other systems.
- Develop and maintain Unix scripts for automation and data processing tasks.
- Perform root cause analysis (RCA) to identify and resolve complex technical issues.
- Install, patch, upgrade, administer, script, and maintain StreamSets cluster...
Posted 1 month ago
2.0 - 4.0 years
1 - 4 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
We are looking for a highly skilled MLOps professional, available to join within 0 to 30 days, for our team in Bangalore, Hyderabad, Pune, Chennai, and Kolkata. The ideal candidate will have expertise in MLOps, including Azure/AWS/GCP, PySpark, and Big Data.
Roles and Responsibilities:
- Design and implement scalable data pipelines using MLOps tools.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale machine learning models and algorithms.
- Ensure seamless integration of MLOps solutions with existing systems and infrastructure.
- Troubleshoot and resolve complex technical issues related to MLOps.
- Stay up-to-date with industry trends a...
Posted 1 month ago
4.0 - 9.0 years
2 - 5 Lacs
Chennai, Gurugram, Bengaluru
Work from Office
We are looking for a skilled Hadoop Admin with 4 to 9 years of experience. The ideal candidate will have expertise in Hadoop administration, CDP distribution, Dataflow, and Big Data platforms.
Roles and Responsibilities:
- Manage and maintain large-scale Hadoop clusters for high availability and performance.
- Design and implement data pipelines using Kafka and NiFi profiles.
- Develop and execute Linux shell scripts for automation and monitoring.
- Collaborate with cross-functional teams to resolve technical issues and improve processes.
- Ensure compliance with security best practices and industry standards.
- Troubleshoot and optimize HBase databases for improved query performance.
Job Requirements:
- Strong...
Posted 1 month ago