2.0 - 5.0 years
15 - 19 Lacs
Mumbai
Work from Office
Overview The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications. Responsibilities As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. O...
Posted 5 months ago
10.0 - 12.0 years
7 - 11 Lacs
Noida
Work from Office
Objective: CLOUDSUFI is seeking a hands-on Delivery Lead of Client Services who will be responsible for all client interfaces within the assigned scope. He/she will work with technical leads/architects to create an execution plan in consultation with customer stakeholders and drive execution with the team with respect to People, Process, and Structure. Key KPIs for this role are Gross Margin, Customer Advocacy (NPS), ESAT (Employee Satisfaction), Penetration (Net New), and Target Revenue realization. Location: The job location for this role will be Noida, India. Key Responsibilities: - Develop and own the vision, strategy, and roadmap for the account. - Participate in business reviews with ex...
Posted 5 months ago
12.0 - 20.0 years
30 - 35 Lacs
Navi Mumbai
Work from Office
Job Title: Big Data Developer and Project Support & Mentorship Location: Mumbai Employment Type: Full-Time/Contract Department: Engineering & Delivery Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members. Key Responsibilities: ...
Posted 5 months ago
4.0 - 8.0 years
5 - 12 Lacs
Bengaluru
Work from Office
If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9 Key Responsibilities: Work with Product Owners and various stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, and design the scale-out architecture for the data platform to meet the requirements of the proposed solution. Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies. Play an active role in leading team meetings and workshops with clients. Help the Data Engineering team produce high-quality code that allows us to put solutions into production. Create and own th...
Posted 5 months ago
6.0 - 9.0 years
27 - 42 Lacs
Kochi
Work from Office
Skill: Databricks. Experience: 5 to 14 years. Location: Kochi (walk-in on 14th June). Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform. Work experience in Databricks Unity Catalog is required. – Collaborate with data scientists and analysts to integrate machine learning models into production pipelines. – Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes. – Optimize cluster performance and scalability to handle large volumes of data processing. – Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs. – Collaborate with cross-functional teams to ...
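The data quality checks mentioned in this posting can be illustrated with a minimal, framework-agnostic sketch. This is plain Python rather than Databricks code, and all function and field names are hypothetical; a real pipeline would typically express the same checks as DataFrame operations.

```python
# Minimal sketch of row-level data quality checks, as might run
# during ingestion before data is released downstream.
# Plain Python for illustration only; field names are made up.

def check_row(row, required_fields, numeric_fields):
    """Return a list of quality issues found in one record."""
    issues = []
    for field in required_fields:
        if row.get(field) in (None, ""):
            issues.append(f"missing required field: {field}")
    for field in numeric_fields:
        value = row.get(field)
        if value is not None and not isinstance(value, (int, float)):
            issues.append(f"non-numeric value in {field}: {value!r}")
    return issues

def partition_by_quality(rows, required_fields, numeric_fields):
    """Split records into clean rows and rejected (row, issues) pairs."""
    clean, rejected = [], []
    for row in rows:
        issues = check_row(row, required_fields, numeric_fields)
        if issues:
            rejected.append((row, issues))
        else:
            clean.append(row)
    return clean, rejected

rows = [
    {"id": 1, "price": 10.5},
    {"id": 2, "price": "n/a"},   # bad numeric value
    {"price": 3.0},              # missing required id
]
clean, rejected = partition_by_quality(rows, ["id"], ["price"])
```

Quarantining rejected rows alongside their reasons, rather than silently dropping them, is what makes integrity issues traceable through the ingestion process.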
Posted 5 months ago
6.0 - 10.0 years
11 - 16 Lacs
Bengaluru
Work from Office
Core technical skills in Big Data (HDFS, Hive, Spark, HDP/CDP, ETL pipelines, SQL, Ranger, Python), Cloud services on AWS or Azure, preferably both (S3/ADLS, Delta Lake, Key Vault, HashiCorp, Splunk), and DevOps; preferably Data Quality/Governance knowledge, and preferably hands-on experience with tools such as DataIku/Dremio or similar. Should be able to lead the project and report status in a timely manner. Should ensure smooth release management. Strategy: Responsibilities include development, testing, and support required for the project. Business: IT-Projects-CPBB Data Technlgy. Processes: As per SCB Governance. People & Talent: Applicable SCB Guidelines. Risk Management: Applicabl...
Posted 5 months ago
5.0 - 8.0 years
9 - 13 Lacs
Pune
Hybrid
So, what’s the role all about? We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data—without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready. How will you make an impact? Integrate C...
Posted 5 months ago
3.0 - 7.0 years
10 - 20 Lacs
Bengaluru, Mumbai (All Areas)
Work from Office
Responsibilities: Design and develop a new automation framework for ETL processing. Support the existing framework and act as the technical point of contact for all related teams. Enhance the existing ETL automation framework per user requirements. Performance tuning of Spark and Snowflake ETL jobs. New technology POCs and suitability analysis for cloud migration. Process optimization through automation and new utility development. Support any batch issues. Support application teams with any queries. Required Skills: Must be strong in UNIX Shell and Python scripting. Must be strong in Spark. Must have strong knowledge of SQL. Hands-on knowledge of how HDFS/Hive/Impala/Spark work. Strong in logical ...
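An ETL automation framework of the kind this posting describes often reduces to a registry of named steps executed in order with uniform status reporting, so batch failures surface consistently. A minimal sketch, with all class and step names hypothetical:

```python
# Hypothetical skeleton of an ETL automation framework: steps are
# registered by name and run in order, with uniform error handling.

class ETLFramework:
    def __init__(self):
        self.steps = []

    def step(self, name):
        """Decorator registering a pipeline step under a name."""
        def register(func):
            self.steps.append((name, func))
            return func
        return register

    def run(self, data):
        """Run all steps in order, recording per-step status."""
        report = []
        for name, func in self.steps:
            try:
                data = func(data)
                report.append((name, "ok"))
            except Exception as exc:
                report.append((name, f"failed: {exc}"))
                break  # stop the batch on first failure
        return data, report

etl = ETLFramework()

@etl.step("extract")
def extract(_):
    return [1, 2, 3]

@etl.step("transform")
def transform(values):
    return [v * 10 for v in values]

result, report = etl.run(None)
```

Centralizing execution this way means new jobs only add a decorated function, and support teams get one report format for every batch run.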
Posted 5 months ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, Telangana, India
On-site
Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of Principal Consultant In this role, as a trusted advisor to the business, establish partnerships, assess busines...
Posted 5 months ago
10.0 - 12.0 years
35 - 40 Lacs
Bengaluru
Work from Office
Number of Openings: 2. Approved ECMS RQ#: 527250. Duration of contract: 6 months. Total Yrs. of Experience: 10+. Relevant Yrs. of experience: 8+. Detailed JD (Roles and Responsibilities): Lead Consultant - an experienced Big Data professional who will technically lead the team in design and development and be responsible for team delivery. Mandatory skills: HDFS, Ozone, Hive, Impala, Spark, Atlas, Ranger, Kafka, Flink, Spark Streaming, Python/PySpark, excellent communication, experienced in design of data landscape. Desired skills: GraphQL, Venafi (Certificate Mgt), Collibra, Azure DevOps. Domain: Telecom. Approx. vendor billing rate (excluding service tax): 11500 INR/day. Work Location: Bangalore, India...
Posted 5 months ago
2.0 - 3.0 years
4 - 5 Lacs
Hyderabad
Work from Office
Duration: 12 months. Job Type: Contract. Work Type: Onsite. Job Description: Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements. Requirements: 2-4 years of relevant IT experience in data-warehousing technologies with excellent communication and analytical skills. Should possess experience with the following skillset: Informatica 9 or above as an ETL tool; Teradata/Oracle/SQL Server as the warehouse database; very strong in SQL/macros; should know basic to medium UNIX commands; knowledge of Hadoop (HDFS, Hive, Pig, and YARN); knowledge of ingestion tool - Stream se...
Posted 5 months ago
6.0 - 8.0 years
8 - 10 Lacs
Hyderabad
Hybrid
Position: Senior Software Engineer. Location: Hyderabad. Duration: 12 months. Job Type: Contract. Work Type: Onsite. Job Description: Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements. Requirements: 6-8 years of relevant IT experience in data-warehousing technologies with excellent communication and analytical skills. Should possess experience with the following skillset: Informatica 9 or above as an ETL tool; Teradata/Oracle/SQL Server as the warehouse database; very strong in SQL/macros; should know basic to medium UNIX commands; knowledge of Hadoop - HDFS, Hiv...
Posted 5 months ago
2.0 - 5.0 years
4 - 7 Lacs
Chennai
Work from Office
Experience: 2+ years of experience in IT, with at least 1+ year of experience in cloud and system administration and at least 2 years of experience with, and a strong understanding of, 'big data' technologies in the Hadoop ecosystem: Hive, HDFS, Map/Reduce, Flume, Pig, Cloudera, HBase, Sqoop, Spark, etc. Job Overview: Smartavya Analytica Private Limited is seeking an experienced Hadoop Administrator to manage and support our Hadoop ecosystem. The ideal candidate will have strong expertise in Hadoop cluster administration, excellent troubleshooting skills, and a proven track record of maintaining and optimizing Hadoop environments. Key Responsibilities: • Install, configure, and manage Hadoop clusters, in...
Posted 5 months ago
7.0 - 9.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
Remote
What the Candidate Will Do: Partner with engineers, analysts, and product managers to define technical solutions that support business goals. Contribute to the architecture and implementation of distributed data systems and platforms. Identify inefficiencies in data processing and proactively drive improvements in performance, reliability, and cost. Serve as a thought leader and mentor in data engineering best practices across the organization. Basic Qualifications: 7+ years of hands-on experience in software engineering with a focus on data engineering. Proficiency in at least one programming language such as Python, Java, or Scala. St...
Posted 5 months ago
6.0 - 11.0 years
22 - 35 Lacs
Chennai
Hybrid
Job Location: Chennai. Notice Period: Immediate to 30 days max. Job Description: 5-12 years of experience in Big Data and related technologies. Expert-level understanding of distributed computing principles. Expert-level knowledge of and experience with Apache Spark. Hands-on programming with Python. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming. Experience with messaging systems such as Kafka or RabbitMQ. Good understanding of Big Data querying tools such as Hive and Impala. Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files. Goo...
Posted 5 months ago
5.0 - 10.0 years
9 - 19 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Job Title: Big Data Administrator. Location: Hyderabad (WFO once weekly). Experience: 5+ years. Department: Data Engineering / IT Infrastructure. Job Summary: We are seeking a Big Data Administrator with strong expertise in Linux systems, AWS infrastructure, and Big Data technologies. This role is ideal for someone experienced in managing large-scale Hadoop ecosystems in production, with a deep understanding of observability, performance tuning, and automation using tools like Terraform or Ansible. Key Responsibilities: Manage and maintain large-scale Big Data clusters (Cloudera, Hortonworks, or AWS EMR). Develop and support infrastructure as code using Terraform or Ansible. Administer Hadoop e...
Posted 5 months ago
5 - 10 years
8 - 14 Lacs
Kolkata
Work from Office
Role: Data Engineer - Azure Synapse Analytics - Experience in data engineering projects using the Microsoft Azure platform (min. 2-3 projects) - Strong expertise in data engineering tools and storage such as Azure ADLS Gen2 and Blob storage - Experience implementing automated Synapse pipelines - Ability to implement Synapse pipelines for data integration ETL/ELT using Synapse Studio - Experience integrating Synapse notebooks and Data Flow - Should be able to troubleshoot pipelines - Strong T-SQL programming skills, or strong skills in any other flavor of SQL - Experience working with high-volume data and large objects - Experience working in DevOps environments integrated with Git for version control and CI/CD pip...
Posted 5 months ago
5 - 7 years
8 - 14 Lacs
Hyderabad
Work from Office
Responsibilities for this position include: - Provides technical leadership in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop; NoSQL stores: Cassandra, HBase, etc.) across Fractal, and contributes to open-source Big Data technologies - Write and tune complex Java, MapReduce, Pig, and Hive jobs - Adapt quickly to changes in requirements and be willing to work with different technologies if required - Experience leading a Backend/Distributed Data Systems team while remaining hands-on is very important - Lead the effort to build, implement, and support the data infrastructure - Manage the business intelligence team and vendor partners, ensuring to prioritize projects...
Posted 5 months ago
5 - 7 years
8 - 14 Lacs
Surat
Work from Office
Responsibilities for this position include: - Provides technical leadership in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop; NoSQL stores: Cassandra, HBase, etc.) across Fractal, and contributes to open-source Big Data technologies - Write and tune complex Java, MapReduce, Pig, and Hive jobs - Adapt quickly to changes in requirements and be willing to work with different technologies if required - Experience leading a Backend/Distributed Data Systems team while remaining hands-on is very important - Lead the effort to build, implement, and support the data infrastructure - Manage the business intelligence team and vendor partners, ensuring to prioritize projects...
Posted 5 months ago
5 - 7 years
8 - 14 Lacs
Bengaluru
Work from Office
Responsibilities for this position include: - Provides technical leadership in the Big Data space (Hadoop stack: M/R, HDFS, Pig, Hive, HBase, Flume, Sqoop; NoSQL stores: Cassandra, HBase, etc.) across Fractal, and contributes to open-source Big Data technologies - Write and tune complex Java, MapReduce, Pig, and Hive jobs - Adapt quickly to changes in requirements and be willing to work with different technologies if required - Experience leading a Backend/Distributed Data Systems team while remaining hands-on is very important - Lead the effort to build, implement, and support the data infrastructure - Manage the business intelligence team and vendor partners, ensuring to prioritize projects...
Posted 5 months ago
5 - 7 years
10 - 14 Lacs
Chennai
Work from Office
Skills: 5+ years of experience with Java + Big Data as the minimum required skill set: Java, Microservices, Spring Boot, APIs, Big Data (Hive, Spark, PySpark). Roles and Responsibilities: as per the skill set above.
Posted 5 months ago
5 - 7 years
0 - 0 Lacs
Thiruvananthapuram
Work from Office
Key Responsibilities: Big Data Architecture: Design, develop, and maintain scalable and distributed data architectures capable of processing large volumes of data. Data Storage Solutions: Implement and optimize data storage solutions using technologies such as Hadoop, Spark, and PySpark. PySpark Development: Develop and implement efficient ETL processes using PySpark to extract, transform, and load large datasets. Performance Optimization: Optimize PySpark applications for better performance, scalability, and resource management. Qualifications: Proven experience as a Big Data Engineer with a strong focus on PySpark. Deep understanding of Big Data processing frameworks and technologies. ...
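The extract-transform-load flow over large datasets that this posting centers on can be sketched in plain Python with lazy generators, keeping memory bounded regardless of input size. This is a simplified stand-in for Spark's partitioned execution, and all names and values are illustrative:

```python
# Simplified stand-in for distributed ETL: process records lazily
# in chunks so memory use stays bounded regardless of input size.
# In PySpark the equivalent would be DataFrame transformations
# executed per partition across the cluster.

def extract(n):
    """Lazily yield n synthetic records (illustrative data)."""
    for i in range(n):
        yield {"id": i, "value": i * 2}

def transform(records):
    """Keep records whose value is divisible by 4; add a derived field."""
    for rec in records:
        if rec["value"] % 4 == 0:
            rec["value_squared"] = rec["value"] ** 2
            yield rec

def load(records, batch_size=100):
    """Consume records in batches, returning the size of each batch written."""
    batch, written = [], []
    for rec in records:
        batch.append(rec)
        if len(batch) == batch_size:
            written.append(len(batch))
            batch = []
    if batch:
        written.append(len(batch))
    return written

# No full materialization: at any moment only one batch is in memory.
batches = load(transform(extract(1000)), batch_size=100)
```

Composing the stages as generators is the single-machine analogue of the pipelined, per-partition execution that makes PySpark jobs scale.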
Posted 5 months ago
12 - 16 years
35 - 40 Lacs
Bengaluru
Work from Office
As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance ...
Posted 5 months ago
12 - 16 years
35 - 40 Lacs
Chennai
Work from Office
As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance ...
Posted 5 months ago
12 - 16 years
35 - 40 Lacs
Mumbai
Work from Office
As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytical resources, enabling efficient data processing and analytics. Key Responsibilities: - Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala. - Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions. - Design scalable data models for analytics and reporting. - Implement data validation, quality, and governance ...
Posted 5 months ago