5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mega Walk-In: TCS Chennai Hiring for Big Data Developer INTERVIEW DETAILS: Interview date: 8th Nov 2025, Saturday Interview time: 9:00 AM - 12:00 PM Venue: TCS, Siruseri, GS 4, 2nd Floor, Chennai Job Summary: Role: Big Data Developer Experience: 5 to 10 years Job Location: Chennai Required Technical Skill Set: MS Azure Cloud Platform, Big Data with Hadoop, Python, Hive, Azure data and admin analysis Eligibility Criteria: • Minimum 15 years of regular education (10th + 12th + 3 years graduation) • BE/B.Tech/MCA/M.Sc/MS with minimum 3 years of relevant post-qualification IT experience. • B.Sc/BCA graduates with minimum 3.5 years of relevant experience post qualification IT Ex...
Posted 4 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
Come join us in the Azure Core Economics team, explore your passions, and impact the world! If you join the Azure Core Economics team, you will join a group of economists, data scientists, and software engineers within Azure Engineering that tackles a variety of data-intensive, distributed computing challenges related to cloud economics that are of critical importance to Microsoft. Our team collaborates with a wide range of other teams in Microsoft on problems such as pricing optimization, demand estimation and forecasting, capacity management and planning, fraud detection, virtual machine bin packing, and others. Our software engineering team in India aids in the development of novel softwa...
Posted 5 days ago
5.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TATA CONSULTANCY SERVICES! WALK-IN DRIVE "Big Data" Interview Date: 8th November 2025 Years of Experience: 5 - 10 years Location: Chennai Job Description: Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using the Hortonworks distribution. Hands-on experience with cluster migration / Azure Cloud. Upgrading clusters and configuring Kerberos and Ranger on the cluster. Good knowledge of shell scripting / PySpark. Node commissioning and decommissioning on clusters and data rebalancing. Experience in setting up high-availability Hadoop clusters. Experience in DistCp configuration between clusters. Experience on Standby Ambari Server i...
Posted 6 days ago
7.0 - 11.0 years
0 Lacs
Guwahati, Assam
On-site
In this role at Zaloni, you will be a part of the Engineering team responsible for architecting, designing, and building Zaloni's flagship product - Arena. Your key responsibilities will include: - Taking technical ownership for the development and delivery of product features as a member of an agile team, involving design, code, test, technical documentation, technical training, and stakeholder presentations. - Providing subject matter expertise on multiple functional areas, collaborating with developers, testers, architects, documentation, and product managers to ensure high-quality feature deliverables. - Working on a Scrum Team using Agile principles, participating in backlog grooming, t...
Posted 6 days ago
10.0 years
0 - 1 Lacs
Illinois, United States
Remote
Abbott is a global healthcare leader that helps people live more fully at all stages of life. Our portfolio of life-changing technologies spans the spectrum of healthcare, with leading businesses and products in diagnostics, medical devices, nutritionals and branded generic medicines. Our 114,000 colleagues serve people in more than 160 countries. We’re focused on helping people with diabetes manage their health with life-changing products that provide accurate data to drive better-informed decisions. We’re revolutionizing the way people monitor their glucose levels with our new sensing technology. Working at Abbott: At Abbott, You Can Do Work That Matters, Grow, And Learn, Care For Yourself ...
Posted 6 days ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad
Work from Office
We are looking for a highly motivated and detail-oriented Catastrophe Data Analyst to join our team at Swiss Re. The ideal candidate should have 0 to 7 years of experience in the field. Roles and Responsibilities Analyze and interpret catastrophe data to identify trends and patterns. Develop and maintain databases for storing and managing catastrophe data. Collaborate with cross-functional teams to design and implement new catastrophe models. Conduct research and stay updated on industry developments and advancements in catastrophe modeling. Provide insights and recommendations to stakeholders based on analysis results. Ensure data quality and integrity by implementing data validation and veri...
Posted 6 days ago
6.0 - 11.0 years
2 - 5 Lacs
Bengaluru
Work from Office
We are seeking a talented and experienced Databricks Engineer to join our innovative team in Bengaluru, India. In this role, you will be responsible for designing, implementing, and optimising data pipelines and analytics solutions using the Databricks platform. You will collaborate closely with cross-functional teams to deliver high-quality, scalable data solutions that drive business insights and decision-making. Develop and optimise ETL processes using Databricks and Apache Spark Design and implement efficient data models and schemas for optimal data storage and retrieval Collaborate with data scientists and analysts to build and deploy machine learning models Ensure data quality, consist...
Posted 6 days ago
0.0 - 6.0 years
11 - 12 Lacs
Gurugram
Work from Office
Design, develop, and maintain scalable ETL/ELT data pipelines on GCP (BigQuery, Dataflow, Cloud Composer, Pub/Sub, etc.) Build and optimize data models and data marts in BigQuery for analytical and reporting use cases Ensure data quality, integrity, and security across the data lifecycle Implement data transformation logic using SQL, Python, and Cloud Dataflow/Dataproc Collaborate with business and analytics teams to understand data requirements and deliver efficient solutions Automate workflows and orchestrate pipelines using Cloud Composer (Airflow) Monitor and optimize BigQuery performance and manage cost efficiency Support CI/CD deployment processes and maintain version control using Git ...
Posted 6 days ago
6.0 - 11.0 years
14 - 17 Lacs
Mumbai
Work from Office
Your role and responsibilities As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact. Responsibilities: Responsible for managing end-to-end feature development and resolving challenges faced in impl...
Posted 6 days ago
5.0 - 10.0 years
3 - 6 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
We are looking for skilled Hadoop Developers with 5-10 years of experience to join our team in Bangalore, Kolkata, Hyderabad, and Pune. The ideal candidate should have strong proficiency in Hadoop, Scala, Spark, and SQL. Roles and Responsibilities Design, develop, and implement scalable data processing systems using Hadoop. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain large-scale data pipelines using Spark and Scala. Troubleshoot and resolve complex technical issues related to Hadoop. Participate in code reviews and ensure high-quality code standards. Stay updated with the latest trends and technologies in Hadoop development. Job...
Posted 6 days ago
7.0 - 12.0 years
3 - 7 Lacs
Hyderabad, Pune
Work from Office
We are looking for a skilled Hadoop professional with 7-14 years of experience to join our team in Pune and Hyderabad. The ideal candidate will have expertise in Hadoop, Spark, Kafka, data visualization, scripting languages such as Python and R, SQL and NoSQL, Agile methodology, and ETL/ELT data pipelines. Roles and Responsibilities Design and develop scalable Hadoop applications using Spark and Kafka. Create data visualizations using tools like Splunk, Qlik, and Grafana. Develop and maintain ETL/ELT data pipelines using Agile methodology. Collaborate with cross-functional teams to identify and prioritize project requirements. Troubleshoot and resolve technical issues related to Hadoop applications. ...
Posted 6 days ago
5.0 - 10.0 years
8 - 12 Lacs
Chennai, Gurugram, Bengaluru
Work from Office
We are looking for a skilled professional with 5-10 years of experience to join our team in Bangalore, Hyderabad, Chennai, Pune, Noida, and Gurgaon. The ideal candidate will have expertise in the GCP Hadoop ecosystem and be able to handle batch data processing on the Hadoop ecosystem. Roles and Responsibilities Design and develop scalable data pipelines using the GCP Hadoop ecosystem. Process large datasets using Scala Spark, Hive, and other relevant tools. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop and maintain technical documentation for data processing systems. Troubleshoot and resolve issues related to data processing and system performance. Op...
Posted 6 days ago
2.0 - 5.0 years
5 - 9 Lacs
Gurugram
Work from Office
About The Role Project Role: Application Developer Project Role Description: Design, build and configure applications to meet business process and application requirements. Must have skills: Palantir Foundry, PySpark, Python (Programming Language), MySQL Good to have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications ...
Posted 6 days ago
5.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS! TCS is hiring for Big Data Required Technical Skill Set: MS Azure Cloud Platform, Big Data with Hadoop, Python, Hive, Azure data and admin analysis Location: Chennai Desired Experience Range: 5 to 12 years Must Have: Hands-on experience in installing, configuring, supporting, and managing Hadoop clusters using the Hortonworks distribution. Hands-on experience with cluster migration / Azure Cloud. Upgrading clusters and configuring Kerberos and Ranger on the cluster. Good knowledge of shell scripting / PySpark. Node commissioning and decommissioning on clusters and data rebalancing. Experience in setting up high-availability Hadoop clusters. Experience in Distcp co...
Posted 6 days ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Hiring for a Data Engineer role with Simple Logic IT Pvt Ltd at Navi Mumbai. Position: Data Engineer Department: IT Data Engineering Team Location: Belapur, Navi Mumbai Work Mode: Work From Office Note: Hadoop experience is mandatory. About the Role: We are seeking a Data Engineer to design, implement, and optimize data pipelines and solutions, focusing on the Hadoop ecosystem and Cloudera platform. This role involves working with structured and unstructured data to build scalable business solutions in collaboration with analysts and business stakeholders. Key Responsibilities: -Ensure smooth flow of ETL processes and maintain data accuracy across systems. -Automate, monitor, and support ...
Posted 6 days ago
0 years
2 - 3 Lacs
Chennai
On-site
Discover your future at Citi Working at Citi is far more than just a job. A career with us means joining a team of more than 230,000 dedicated people from around the globe. At Citi, you’ll have the opportunity to grow your career, give back to your community and make a real impact. Job Overview Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing. Responsibilities: > Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. >...
Posted 6 days ago
8.0 - 13.0 years
6 - 10 Lacs
Chennai
Work from Office
Job Summary: We are looking for an experienced GCP Data Engineer to design, build, and optimize data pipelines and solutions on the Google Cloud Platform (GCP). The ideal candidate will have deep expertise in BigQuery, Dataflow, Dataproc, and other GCP data services, along with strong programming and analytical skills. Key Responsibilities: Design and develop scalable data pipelines and ETL workflows using GCP services such as BigQuery, Dataflow, Dataproc, Composer, and Pub/Sub. Build and maintain data lakes, data warehouses, and analytics solutions on GCP. Collaborate with data scientists, analysts, and stakeholders to deliver clean, reliable, and structured datasets. Implement data ing...
Posted 6 days ago
3.0 - 8.0 years
7 - 11 Lacs
Chennai
Work from Office
Job Summary: We are looking for an experienced GCP Data Engineer to design, build, and optimize data pipelines and solutions on the Google Cloud Platform (GCP). The ideal candidate will have deep expertise in BigQuery, Dataflow, Dataproc, and other GCP data services, along with strong programming and analytical skills. Key Responsibilities: Design and develop scalable data pipelines and ETL workflows using GCP services such as BigQuery, Dataflow, Dataproc, Composer, and Pub/Sub. Build and maintain data lakes, data warehouses, and analytics solutions on GCP. Collaborate with data scientists, analysts, and stakeholders to deliver clean, reliable, and structured datasets. Implement data ing...
Posted 6 days ago
7.0 - 12.0 years
7 - 10 Lacs
Chennai
Work from Office
Job Summary: We are looking for a Big Data Engineer with strong expertise in Apache Spark and Scala to design, develop, and optimize large-scale data processing solutions. The ideal candidate will have hands-on experience with distributed data systems, ETL development, and modern data engineering tools. Key Responsibilities: Design and develop data ingestion, transformation, and processing pipelines using Apache Spark (Core, SQL, Streaming) and Scala. Build scalable and reliable big data solutions on distributed platforms such as Hadoop, Databricks, or EMR. Optimize Spark jobs for performance and scalability. Work closely with data architects, analysts, and business teams to define data r...
Posted 6 days ago
7.0 - 12.0 years
6 - 10 Lacs
Chennai
Work from Office
Job Summary: We are looking for an experienced GCP Data Engineer to design, build, and optimize data pipelines and solutions on the Google Cloud Platform (GCP). The ideal candidate will have deep expertise in BigQuery, Dataflow, Dataproc, and other GCP data services, along with strong programming and analytical skills. Key Responsibilities: Design and develop scalable data pipelines and ETL workflows using GCP services such as BigQuery, Dataflow, Dataproc, Composer, and Pub/Sub. Build and maintain data lakes, data warehouses, and analytics solutions on GCP. Collaborate with data scientists, analysts, and stakeholders to deliver clean, reliable, and structured datasets. Implement data ing...
Posted 6 days ago
2.0 - 4.0 years
5 - 9 Lacs
Mumbai, Hyderabad, Bengaluru
Work from Office
Spark, Hive, Scala, SQL, Python scripting, data visualization Key Responsibilities: Leadership and Team Management: Lead and mentor a team of data engineers, fostering a culture of collaboration and continuous improvement. Oversee the planning and execution of data projects, ensuring alignment with business objectives and timelines. Provide technical guidance and expertise to the team, promoting best practices in data engineering. Data Architecture and Development: Design, develop, and maintain scalable data pipelines and ETL processes to ingest, process, and store large volumes of data. Architect and implement robust data models and schemas to support analytics and business intelligence nee...
Posted 1 week ago
3.0 - 6.0 years
3 - 6 Lacs
Navi Mumbai, Mumbai (All Areas)
Work from Office
Hi Candidate, Opening for Data Scientist Location: Airoli (Navi Mumbai) Experience: 3+ yrs Key Responsibilities: Develop and maintain data pipelines using the Hadoop ecosystem. Build and maintain Spark SQL applications for big data processing and analysis. Use Tableau to visualize data, create dashboards, and extract insights from Hadoop data. Optimize data processing performance, including query optimization and data storage strategies. Collaborate with other engineers and stakeholders to understand data requirements and deliver solutions. Troubleshoot and resolve issues related to data processing and visualization. Stay up to date with the latest technologies and best practices in the field. ...
Posted 1 week ago
4.0 - 9.0 years
7 - 11 Lacs
Chennai, Guindy
Work from Office
Overview Prodapt is looking for Lead MLOps Engineers with 4 to 8 years of experience, based in Chennai or Bangalore. Responsibilities Requirements B.S./B.E/B.Tech/MTech in computer science with 4 to 8 years of experience or equivalent. Experience with Big Data platforms and data processing frameworks (Hadoop, Hive, MapReduce, Spark, etc.) and professional skill in writing optimal, production-grade SQL queries. Experience deploying shallow learning and deep learning models built using frameworks such as TensorFlow, PyTorch, H2O, SciKit-Learn, Keras, Spark MLlib, XGBoost, and LightGBM. Experience with cloud technologies, especially with respect to Data and ML Engineering (GCP/AWS/Azure). Experi...
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch & real-time data streaming and processing. Responsibilities: > Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis. > Develop and maintain Kafka-based data pipelines: this includes designing Kafka Streams, setting up Kafka clusters, and ensuring efficient data flow. > Create and optimize Spark applications using Scala and PySpark, leveraging these languages to process large datasets and implement data trans...
Posted 1 week ago
5.0 - 10.0 years
18 - 22 Lacs
Pune, Gurugram
Work from Office
We are looking for a skilled Business Technology Solutions Consultant with 5-10 years of experience to join our team in India. The ideal candidate will have a strong background in Big Data Development and be able to work on innovative solutions. Roles and Responsibilities Design, develop, and implement Big Data solutions using various technologies. Collaborate with cross-functional teams to identify business requirements and develop technical solutions. Develop and maintain large-scale data processing systems and pipelines. Ensure data quality, integrity, and security in all developed systems. Troubleshoot and resolve complex technical issues related to Big Data development. Stay updated with ...
Posted 1 week ago