
141 HDFS Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the source job portal.

0.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Foundit logo

Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support and guidance to project teams on complex coding, issue resolution, and execution. Your primary responsibilities include:
Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements.
Strive for continuous improvement by testing the built solution and working under an agile framework.
Discover and implement the latest technology trends to build creative solutions.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing.
Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools.
Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts.
Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation.
Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy.
SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation.
Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring.
Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering.
Good to have: detection and prevention tools for Company products and Platform and customer-facing.
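As a concrete picture of the PySpark work this posting describes, here is a minimal batch-ETL sketch: read raw files from HDFS, cleanse them, and write partitioned Parquet. The job name, paths, and column names are illustrative assumptions, not details from the listing.

```python
# Minimal batch ETL sketch (illustrative; paths and columns are assumptions)
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders-daily-etl")   # hypothetical job name
         .getOrCreate())

# Read raw CSV landed on HDFS (hypothetical path)
raw = spark.read.option("header", True).csv("hdfs:///data/raw/orders/")

# Basic cleansing and a derived column
clean = (raw
         .dropDuplicates(["order_id"])
         .withColumn("amount", F.col("amount").cast("double"))
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("created_at")))

# Write partitioned Parquet for downstream analytics
(clean.write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("hdfs:///data/curated/orders/"))
```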

Posted 2 weeks ago

Apply

6.0 - 8.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Naukri logo

Lead Analyst/Senior Software Engineer - Data Engineer with Python, Apache Spark, HDFS

Job Overview: CGI is looking for a talented and motivated Data Engineer with strong expertise in Python, Apache Spark, HDFS, and MongoDB to build and manage scalable, efficient, and reliable data pipelines and infrastructure. You'll play a key role in transforming raw data into actionable insights, working closely with data scientists, analysts, and business teams.

Key Responsibilities:
Design, develop, and maintain scalable data pipelines using Python and Spark.
Ingest, process, and transform large datasets from various sources into usable formats.
Manage and optimize data storage using HDFS and MongoDB.
Ensure high availability and performance of data infrastructure.
Implement data quality checks, validations, and monitoring processes.
Collaborate with cross-functional teams to understand data needs and deliver solutions.
Write reusable and maintainable code with strong documentation practices.
Optimize performance of data workflows and troubleshoot bottlenecks.
Maintain data governance, privacy, and security best practices.

Required qualifications to be successful in this role:
Minimum 6 years of experience as a Data Engineer or in a similar role.
Strong proficiency in Python for data manipulation and pipeline development.
Hands-on experience with Apache Spark for large-scale data processing.
Experience with HDFS and distributed data storage systems.
Strong understanding of data architecture, data modeling, and performance tuning.
Familiarity with version control tools like Git.
Experience with workflow orchestration tools (e.g., Airflow, Luigi) is a plus.
Knowledge of cloud services (AWS, GCP, or Azure) is preferred.
Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Preferred Skills:
Experience with containerization (Docker, Kubernetes).
Knowledge of real-time data streaming tools like Kafka.
Familiarity with data visualization tools (e.g., Power BI, Tableau).
Exposure to Agile/Scrum methodologies.

Skills: Hadoop, Hive, Python, SQL, English

Note: This role will require 8 weeks of in-office work after joining, after which we will transition to a hybrid working model with 2 days per week in the office. Mode of interview: F2F. Registration window: 9 am to 12:30 pm; shortlisted candidates will be required to stay through the day for subsequent interview rounds. Notice period: 0-45 days.
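Since this role pairs Spark and HDFS with MongoDB, a minimal sketch of that hand-off is shown below, assuming the MongoDB Spark Connector (10.x) is on the classpath; the URI, path, and database/collection names are illustrative assumptions.

```python
# Sketch: aggregate HDFS data with Spark and persist results to MongoDB
# (assumes the MongoDB Spark Connector 10.x is on the classpath;
#  URI, paths, and names are illustrative)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hdfs-to-mongo").getOrCreate()

events = spark.read.parquet("hdfs:///data/curated/events/")  # hypothetical path

daily = (events
         .groupBy(F.to_date("event_ts").alias("day"), "event_type")
         .agg(F.count("*").alias("n_events")))

(daily.write.format("mongodb")
      .option("connection.uri", "mongodb://mongo-host:27017")  # assumption
      .option("database", "analytics")
      .option("collection", "daily_event_counts")
      .mode("append")
      .save())
```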

Posted 2 weeks ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Naukri logo

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you!

You have:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field, with 8+ years of experience in data engineering focused on Big Data, Cloud, and Telecom Analytics.
Hands-on expertise in Ab Initio for data cataloguing, metadata management, and lineage.
Skills in data warehousing, OLAP, and modelling using BigQuery, ClickHouse, and SQL.
Experience with data persistence technologies like S3, HDFS, and Iceberg.
Hands-on Python and scripting languages.

It would be nice if you also had:
Experience with data exploration and visualization using Superset or BI tools.
Knowledge of ETL processes and streaming tools such as Kafka.
Background in building data products for the telecom domain and an understanding of AI and machine learning pipeline integration.

Responsibilities:
Data Governance: Manage source data within the Metadata Hub and Data Catalog.
ETL Development: Develop and execute data processing graphs using Express It and the Co-Operating System.
ETL Optimization: Debug and optimize data processing graphs using the Graphical Development Environment (GDE).
API Integration: Leverage Ab Initio APIs for metadata and graph artifact management.
CI/CD Implementation: Implement and maintain CI/CD pipelines for metadata and graph deployments.
Team Leadership & Mentorship: Mentor team members and foster best practices in Ab Initio development and deployment.

Posted 2 weeks ago

Apply

6.0 - 8.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Naukri logo

Key Responsibilities:
Design, build, and optimize large-scale data processing systems using distributed computing frameworks like Hadoop, Spark, and Kafka.
Develop and maintain data pipelines (ETL/ELT) to support analytics, reporting, and machine learning use cases.
Integrate data from multiple sources (structured and unstructured) and ensure data quality and consistency.
Collaborate with cross-functional teams to understand data needs and deliver data-driven solutions.
Implement data governance, data security, and privacy best practices.
Monitor performance and troubleshoot issues across data infrastructure.
Stay updated with the latest trends and technologies in big data and cloud computing.

Required Qualifications:
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
6+ years of experience in big data engineering or a similar role.
Proficiency in big data technologies such as Hadoop, Apache Spark, Hive, and Kafka.
Strong programming skills in Python.
Experience with cloud platforms like AWS (EMR, S3, Redshift), GCP (BigQuery, Dataflow), or Azure (Data Lake, Synapse).
Solid understanding of data modeling, ETL/ELT processes, and data warehousing concepts.
Familiarity with CI/CD tools and practices for data engineering.

Preferred Qualifications:
Experience with orchestration tools like Apache Airflow or Prefect.
Knowledge of real-time data processing and stream analytics.
Exposure to containerization tools like Docker and Kubernetes.
Certification in cloud technologies (e.g., AWS Certified Big Data - Specialty).
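For the Kafka-based pipelines this posting mentions, a minimal Spark Structured Streaming sketch is shown below; the brokers, topic, schema, and paths are illustrative assumptions.

```python
# Sketch: consume a Kafka topic with Spark Structured Streaming
# (bootstrap servers, topic, schema, and paths are illustrative assumptions)
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-etl").getOrCreate()

schema = (StructType()
          .add("sensor_id", StringType())
          .add("reading", DoubleType()))

stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # assumption
          .option("subscribe", "sensor-readings")            # hypothetical topic
          .load())

parsed = (stream
          .select(F.from_json(F.col("value").cast("string"), schema).alias("r"))
          .select("r.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "hdfs:///data/stream/sensor/")       # assumption
         .option("checkpointLocation", "hdfs:///chk/sensor/")
         .start())
query.awaitTermination()  # blocks; the stream runs until stopped
```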

Posted 2 weeks ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru / Bangalore, Karnataka, India

Remote

Foundit logo

About the Role
The Search platform currently powers Rider and Driver Maps, Uber Eats, Groceries, Fulfilment, Freight, Customer Obsession, and many such products and systems across Uber. We are building a unified platform for all of Uber's search use-cases. The team is building the platform on OpenSearch, and we already support in-house search infrastructure built on top of Apache Lucene. Our mission is to build a fully managed search platform while delivering a delightful user experience through low-code data and control APIs.

We are looking for an Engineering Manager with strong technical expertise to define a holistic vision and help build a highly scalable, reliable, and secure platform for Uber's core business use-cases. Come join our team to build search functionality at Uber scale for some of the most exciting areas in the marketplace economy today. An ideal candidate will work closely with a highly cross-functional team, including product management, engineering, tech strategy, and leadership, to drive our vision and build a strong team. A successful candidate will need to demonstrate strong technical skills in system architecture and design. Experience with open-source systems and distributed systems is a big plus for this role. The EM2 role will require building a team of software engineers while directly contributing on the technical side too.

What the Candidate Will Do:
Provide technical leadership; influence and partner with fellow engineers to architect, design, and build infrastructure that can stand the test of scale and availability while reducing operational overhead.
Lead, manage, and grow a team of software engineers. Mentor and guide the professional and technical development of engineers on your team, and continuously improve software engineering practices.
Own the craftsmanship, reliability, and scalability of your solutions. Encourage innovation, implementation of groundbreaking technologies, outside-of-the-box thinking, teamwork, and self-organization.
Hire top-performing engineering talent and maintain our dedication to diversity and inclusion.
Collaborate with platform, product, and security engineering teams to enable successful use of infrastructure and foundational services, and manage upstream and downstream dependencies.

Basic Qualifications:
Bachelor's degree (or higher) in Computer Science or a related field.
10+ years of software engineering industry experience.
8+ years of experience as an IC building large-scale distributed software systems.
Outstanding technical skills in backend development: Uber managers can lead from the front when the situation calls for it.
1+ years of frontline experience managing a diverse set of engineers.

Preferred Qualifications:
Prior experience with search or big data systems - OpenSearch, Lucene, Pinot, Druid, Spark, Hive, HUDI, Iceberg, Presto, Flink, HDFS, YARN, etc.

We welcome people from all backgrounds who seek the opportunity to help build a future where everyone and everything can move independently. If you have the curiosity, passion, and collaborative spirit, work with us, and let's move the world forward, together. Offices continue to be central to collaboration and Uber's cultural identity. Unless formally approved to work fully remotely, Uber expects employees to spend at least half of their work time in their assigned office.
For certain roles, such as those based at green-light hubs, employees are expected to be in-office for 100% of their time. Please speak with your recruiter to better understand in-office expectations for this role. Accommodations may be available based on religious and/or medical conditions, or as required by applicable law. To request an accommodation, please reach out to .
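As a flavor of the OpenSearch platform this team builds on, here is a minimal query sketch using the opensearch-py client; the host, index name, and fields are illustrative assumptions.

```python
# Sketch: a basic match query against an OpenSearch index
# (host, index name, and fields are illustrative assumptions)
from opensearchpy import OpenSearch

client = OpenSearch(hosts=[{"host": "search-host", "port": 9200}])

resp = client.search(
    index="places",                      # hypothetical index
    body={
        "query": {"match": {"name": "coffee"}},
        "size": 10,
    },
)
for hit in resp["hits"]["hits"]:
    print(hit["_score"], hit["_source"].get("name"))
```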

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Chennai, Bengaluru

Hybrid

Naukri logo

5–12 years of experience in Big Data.
Proficient in Apache Spark, with hands-on experience.
Proficient in Kafka and RabbitMQ messaging systems.
Skilled in Hive and Impala for Big Data querying.
Experience integrating data from RDBMS (SQL Server, Oracle) and ERP sources.
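One common task implied here, landing an RDBMS table into Hive with Spark, might look like the following sketch; the JDBC URL, credentials, and table names are illustrative assumptions, and the vendor JDBC driver must be on the classpath.

```python
# Sketch: land an RDBMS table into Hive with Spark JDBC
# (JDBC URL, credentials, and table names are illustrative assumptions)
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rdbms-ingest")
         .enableHiveSupport()
         .getOrCreate())

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://db-host:1433;databaseName=sales")  # assumption
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")      # assumption; prefer a secrets store
          .option("password", "***")
          .load())

orders.write.mode("overwrite").saveAsTable("staging.orders")  # hypothetical Hive table
```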

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Foundit logo

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Data Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnataka (IN-KA), India (IN).

Requirements: 5 years' experience in Spark, Scala, Sqoop, GitHub, SQL; AWS services: EMR, S3, Lake Formation, Glue, Athena, Lambda, Step Functions; Control-M; Cloudera services: HDFS, Hive, Impala; Confluence, Jira, ServiceNow.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at . NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click .
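For the Athena piece of this stack, a minimal boto3 sketch is shown below; the region, database, query, and results bucket are illustrative assumptions.

```python
# Sketch: run an Athena query from Python with boto3
# (region, bucket, database, and query are illustrative assumptions)
import boto3

athena = boto3.client("athena", region_name="ap-south-1")  # assumption

resp = athena.start_query_execution(
    QueryString="SELECT count(*) FROM web_logs WHERE dt = '2024-01-01'",
    QueryExecutionContext={"Database": "analytics"},           # hypothetical DB
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print("query id:", resp["QueryExecutionId"])
```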

Posted 2 weeks ago

Apply

4.0 - 8.0 years

4 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Job Location: Bangalore
Experience: 4+ years
Job Type: FTE
Note: Looking only for immediate to one-week joiners. Must be comfortable with a video discussion.

JD key skills required, either:
Option 1: Big Data - Hadoop + Hive + HDFS, with Python or Scala as the language; or
Option 2: Snowflake with Big Data knowledge (Snowpark preferred), with Python or Scala as the language.

Contact person: Amrita. Please share your updated profile to amrita.anandita@htcinc.com with the details below:
Full Name (as per Aadhaar card) -
Total Exp. -
Rel. Exp. (Big Data Hadoop) -
Rel. Exp. (Python) -
Rel. Exp. (Scala) -
Rel. Exp. (Hive) -
Rel. Exp. (HDFS) -
or, for Option 2:
Rel. Exp. (Snowflake) -
Rel. Exp. (Snowpark) -
Highest Education (if B.Tech/B.E., please specify) -
Notice Period -
If serving notice or not working, mention your last working day as per your relieving letter -
CCTC -
ECTC -
Current Location -
Preferred Location -

Posted 2 weeks ago

Apply

5.0 - 8.0 years

8 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

Must-have skills: 5-8 years' experience in Databricks, PySpark, SQL, and data warehousing.

General Job Description
A seasoned, experienced professional with a full understanding of their area of specialization who resolves a wide range of issues in creative ways. This is the fully qualified, career-oriented, journey-level position.

Prerequisite Knowledge & Experience
B.E. (or equivalent).
In-depth understanding of distributed data processing frameworks like Apache Spark, with specific expertise in Databricks.
Proficiency in designing and building data pipelines for data extraction, transformation, and loading (ETL).
Familiarity with big data technologies and concepts, including Hadoop, Hive, and HDFS (good to have).
Proven experience in building scalable and high-performance data solutions for large datasets.
Solid understanding of database design and data warehousing concepts.
Knowledge of both SQL and NoSQL databases, and the ability to choose the right database type based on project requirements.
Extensive hands-on experience with Databricks for big data processing and analytics.
Ability to set up and configure Databricks clusters and optimize their performance.
Proficiency in Spark DataFrame and Spark SQL for data manipulation and querying.
Understanding of data architecture principles and experience designing data solutions that meet scalability and reliability requirements.
Familiarity with cloud-based data platforms like AWS or Azure.

Problem-Solving and Analytical Skills
Strong problem-solving skills and the ability to analyse complex data-related issues.
Capacity to propose innovative and efficient solutions to data engineering challenges.
Excellent communication skills, both verbal and written, with the ability to convey technical concepts to non-technical stakeholders effectively.
A strong inclination to stay updated with the latest advancements in data engineering and Databricks technologies, and adaptability to new tools and technologies to support evolving data requirements.

Required Product/Project Knowledge
Ability to work in an agile development environment.
Hands-on experience preparing technical design documents.
Proven experience fine-tuning applications and identifying potential bottlenecks.

Required Skills
Ability to work on tasks (POCs, stories, CRs, defects, etc.) without much help.
Technical ability includes programming, debugging, and logical skills.

Common Tasks
Define and follow processes for technical compliance and documentation, code review, unit and functional testing, and deployment, and ensure the team follows them properly.
Write at least two technical papers or present one tech talk per year.
100% compliance to the sprint plan.

Required Soft Skills
Providing technical leadership and mentoring to junior developers.
Collaboration and teamwork skills.
Self-motivated, flexible team player with strong initiative and excellent communication skills.
Proactive, with the ability to become a technical activity leader and a good understanding of the requirements in the functional area being developed.
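Given the emphasis on Spark DataFrame and Spark SQL proficiency, here is a minimal sketch showing the same aggregation both ways; the table and column names are illustrative assumptions.

```python
# Sketch: equivalent DataFrame and SQL aggregations in a Spark/Databricks session
# (table and column names are illustrative assumptions)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("df-vs-sql").getOrCreate()

sales = spark.table("warehouse.sales")  # hypothetical managed table

# DataFrame API
by_region_df = (sales.groupBy("region")
                .agg(F.sum("amount").alias("total_amount")))

# Same aggregation via Spark SQL
sales.createOrReplaceTempView("sales_v")
by_region_sql = spark.sql(
    "SELECT region, SUM(amount) AS total_amount FROM sales_v GROUP BY region"
)

by_region_df.show()
by_region_sql.show()
```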

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 18 Lacs

Coimbatore

Hybrid

Naukri logo

Role & responsibilities
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, programs, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.
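For the data-validation responsibilities above, a lightweight PySpark check sketch is shown below; the input path, columns, and rules are illustrative assumptions.

```python
# Sketch: lightweight data-validation checks before loading
# (input path, column names, and rules are illustrative assumptions)
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("hdfs:///data/incoming/customers/")  # hypothetical path

checks = {
    "null_ids": df.filter(F.col("customer_id").isNull()).count(),
    "dup_ids": df.count() - df.dropDuplicates(["customer_id"]).count(),
    "bad_emails": df.filter(~F.col("email").rlike("@")).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```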

Posted 2 weeks ago

Apply

4.0 - 7.0 years

10 - 17 Lacs

Kolkata, Hyderabad, Bengaluru

Hybrid

Naukri logo

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal / Lead Consultant - Big Data! In this role, as a trusted advisor to the business, you will establish partnerships, assess business needs, provide critical interpretation of data, form hypotheses, and synthesize conclusions into recommendations, turning data analysis into insights.

Responsibilities:
• Design, develop, and maintain scalable data processing solutions using Python/Scala and Spark per business needs.
• Implement and manage CI/CD pipelines to automate the deployment process.
• Hands-on experience with Cribl is an added advantage.
• Perform production bug fixes to maintain system stability and reliability.
• Develop scalable microservices to be deployed on hybrid cloud platforms like GCP Anthos.
• Strong understanding of the Hadoop ecosystem, including HDFS, Hive, HBase, and other related technologies.
• Stay updated with the latest industry trends and technologies in big data and cloud computing.

Qualifications we seek in you!
Minimum qualifications:
• Bachelor's/Graduation/Equivalent: BE/B.Tech, MCA, MBA.
• Excellent communication skills; interacts effectively with business users.
• Ability to interact with business as well as technical teams.
• Ability to learn and respond quickly to a fast-changing business environment.
• Ability to multitask, and excellent interpersonal skills.
• Analytical mindset with a problem-solving approach.

Preferred qualifications:
• Very good written and presentation/verbal communication skills, with experience in a customer-facing role.
• In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.
• Good knowledge of writing complex SQL.
• Ability to work in an agile environment with a focus on continuous improvement.
• Self-motivated and eager to learn.
• A team player, able to lead and take initiative.
• Experience in the telecommunication industry.
• Experience with cloud providers (e.g., AWS).

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at www.genpact.com and on X, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
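Since the role combines Spark development with CI/CD, here is a sketch of a pure, unit-testable transformation function of the kind a CI pipeline can exercise; the schema and business rule are illustrative assumptions.

```python
# Sketch: a pure transformation function that is easy to unit-test in CI
# (schema and business rule are illustrative assumptions)
from pyspark.sql import DataFrame, SparkSession, functions as F

def flag_large_orders(df: DataFrame, threshold: float = 1000.0) -> DataFrame:
    """Adds a boolean column marking orders at or above the threshold."""
    return df.withColumn("is_large", F.col("amount") >= threshold)

if __name__ == "__main__":
    spark = SparkSession.builder.appName("ci-demo").getOrCreate()
    sample = spark.createDataFrame(
        [("o1", 1500.0), ("o2", 250.0)], ["order_id", "amount"]
    )
    flag_large_orders(sample).show()
```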

Posted 2 weeks ago

Apply

3.0 - 6.0 years

13 - 23 Lacs

Gurugram

Work from Office

Naukri logo

Looking for an experienced Big Data Developer to develop, maintain, and optimize our big data solutions. Experience in Java, Spark, API development, Hadoop, HDFS, Hive, HBase, and Kafka required.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Pune

Hybrid

Naukri logo

Role & responsibilities
Role: Hadoop Admin + Automation
Experience: 8+ years
Grade: AVP
Location: Pune
Mandatory skills: Hadoop administration; automation (shell scripting or any programming language - Java/Python); Cloudera/AWS/Azure/GCP.
Good to have: DevOps tools.
Primary focus will be on candidates with Hadoop admin and automation experience.
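For the Hadoop-admin-plus-automation combination, a small sketch of wrapping the standard `hdfs dfsadmin -report` CLI in Python is shown below; the report-parsing detail is an assumption that may need adjusting to the cluster's Hadoop version.

```python
# Sketch: automate a basic HDFS health check by wrapping the CLI
# (`hdfs dfsadmin -report` is a standard command; parsing is an assumption)
import subprocess

def hdfs_report() -> str:
    """Returns the raw output of `hdfs dfsadmin -report`."""
    out = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    )
    return out.stdout

def dead_datanodes(report: str) -> int:
    # Recent Hadoop versions print a line such as "Dead datanodes (2):"
    for line in report.splitlines():
        if line.startswith("Dead datanodes"):
            return int(line.split("(")[1].rstrip("):"))
    return 0

if __name__ == "__main__":
    n = dead_datanodes(hdfs_report())
    print(f"dead datanodes: {n}")
```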

Posted 2 weeks ago

Apply

5.0 - 6.0 years

18 - 25 Lacs

Gurugram, Sector-20

Work from Office

Naukri logo

5-7 years of experience in the solution, design, and development of cloud-based data models, ETL pipelines, and infrastructure for reporting, analytics, and data science.
Experience working with Spark, Hive, HDFS, MapReduce, and Apache Kafka/AWS Kinesis.
Experience with version control tools (Git, Subversion).
Experience using automated build systems (CI/CD).
Experience working in different programming languages (Java, Python, Scala).
Experience working with both structured and unstructured data.
Strong proficiency with SQL and its variation among popular databases; skilled at optimizing large, complicated SQL statements; knowledge of best practices when dealing with relational databases.
Ability to create a data model from scratch; experience with modern relational databases.
Capable of configuring popular database engines and orchestrating clusters as necessary.
Ability to plan resource requirements from high-level specifications.
Capable of troubleshooting common database issues.
Experience with data structures and algorithms.
Knowledge of different database technologies (relational, NoSQL, graph, document, key-value, time series, etc.), including building and managing scalable data models.
Knowledge of ML model deployment.
Knowledge of cloud-based platforms (AWS).
Knowledge of TDD/BDD.
Strong desire to improve their skills in software development, frameworks, and technologies.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

12 - 13 Lacs

Thane, Navi Mumbai, Pune

Work from Office

Naukri logo

We at Acxiom Technologies are hiring PySpark Developers for our Mumbai location.
Relevant Experience: 1 to 4 years
Location: Mumbai
Mode of Work: Work from Office
Notice Period: up to 20 days

Job Description:
Proven experience as a PySpark Developer.
Hands-on expertise with AWS Redshift.
Strong proficiency in PySpark, Spark, Python, and Hive.
Solid experience with SQL.
Excellent communication skills.

Benefits of working at Acxiom:
- Statutory benefits
- Paid leaves
- Phenomenal career growth
- Exposure to the banking domain

About Acxiom Technologies: Acxiom Technologies is a leading software solutions services company that provides consulting services to global firms and has established itself as one of the most sought-after consulting organizations in the field of Data Management and Business Intelligence. Our website, https://www.acxtech.co.in/, gives a detailed overview of the company. Interested candidates can share their resumes on 7977418669. Thank you.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

15 - 19 Lacs

Mumbai

Work from Office

Naukri logo

Overview
The Data Technology team at MSCI is responsible for meeting the data requirements across various business areas, including Index, Analytics, and Sustainability. Our team collates data from multiple sources such as vendors (e.g., Bloomberg, Reuters), website acquisitions, and web scraping (e.g., financial news sites, company websites, exchange websites, filings). This data can be in structured or semi-structured formats. We normalize the data, perform quality checks, assign internal identifiers, and release it to downstream applications.

Responsibilities
As data engineers, we build scalable systems to process data in various formats and volumes, ranging from megabytes to terabytes. Our systems perform quality checks, match data across various sources, and release it in multiple formats. We leverage the latest technologies, sources, and tools to process the data. Some of the exciting technologies we work with include Snowflake, Databricks, and Apache Spark.

Qualifications
Core Java, Spring Boot, Apache Spark, Spring Batch, Python. Exposure to SQL databases such as Oracle, MySQL, and Microsoft SQL Server is a must. Any experience, knowledge, or certification in cloud technology, preferably Microsoft Azure or Google Cloud Platform, is good to have. Exposure to non-SQL databases such as Neo4j or a document database is also good to have.

What we offer you
Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing. Flexible working arrangements, advanced technology, and collaborative workspaces. A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results. A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients. A Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development. Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles. We actively nurture an environment that builds a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum.

At MSCI we are passionate about what we do, and we are inspired by our purpose - to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards and perform beyond expectations for yourself, our clients, and our industry. MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process. MSCI Inc. is an equal opportunity employer.
It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law. MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries. To all recruitment agencies: MSCI does not accept unsolicited CVs/Resumes. Please do not forward CVs/Resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/Resumes. Note on recruitment scams: we are aware of recruitment scams where fraudsters impersonating MSCI personnel may try and elicit personal information from job seekers. Read our full note on careers.msci.com.

Posted 2 weeks ago

Apply

10.0 - 12.0 years

7 - 11 Lacs

Noida

Work from Office

Naukri logo

Objective
CLOUDSUFI is seeking a hands-on Delivery Lead of Client Services who will be responsible for all client interfaces within the assigned scope. He/she will work together with technical leads/architects to create an execution plan in consultation with the customer stakeholders and drive the execution with the team with respect to people, process, and structure. Key KPIs for this role are Gross Margin, Customer Advocacy (NPS), ESAT (Employee Satisfaction), Penetration (Net New), and Target Revenue Realization.

Location: Noida, India.

Key Responsibilities
- Develop and own the vision, strategy, and roadmap for the account.
- Participate in business reviews with executive leadership.
- Own weekly and monthly dashboards and reporting packages for business and leadership.
- Actively mine new opportunities within the account by cross-/up-selling.
- Customer-centric approach, including an understanding of the expectations and priorities of customer needs.
- Review the capacity and timeline of the deliverables on an ongoing basis.
- Identify and address technical and operational challenges or risks during project execution, and develop action plans for mitigation and aversion of identified risks.
- Assess the team's hurdles in their development cycles and provide critical thinking.
- Assign tasks, track deliverables, and remove impediments for team members.
- Create and maintain delivery best practices, and audit the implementation of processes and best practices in the organization.
- Contribute towards the creation of a knowledge repository and reusable templates/reports/dashboards.
- Support development of others and self with effective feedback, recognition, and coaching.
- Solid track record of successfully leading sizable teams, with a preference for experience in offshore delivery setups.
- Established success in achieving business goals using analytics-driven solutions.
- Past experience collaborating with international stakeholders and directing diverse teams.

Key Qualifications
- Education background: BTech/BE/BS/MS/MBA (Tier 1).
- Professional experience: 12+ years, with a minimum of 10 years in the software/product development domain.
- 3+ years of experience as a Technical Product/Program Manager, with a track record of leading sizable teams of 30+ members.
- Product mindset, providing improvement points to enhance features and customer experience.
- Thorough understanding of the DevOps philosophy, having driven it in past projects.
- Must have performed product/program delivery for a leading cloud partner of size more than USD 5M annually and grown the account through delivery excellence.
- Hands-on experience with agile project management; conversant in performing agile ceremonies, i.e., daily scrum calls, retros, customer demos, and velocity calculation.

Technical Proficiency
- Expertise in managing software development projects around Java, open-source technologies, and data integration products.
- Experience on GCP is mandatory.
- Experience with Java, Scala, Maven, test coverage, code review, and quality deliverables with automation.
- Product development experience on HDFS, with programming experience on Spark.
- Well versed in current technology trends in IT solutions, e.g., cloud platform development, DevOps, low-code solutions, intelligent automation.
- Good to have: Git, Bitbucket, microservices.
- Desired certifications (mandatory): Certified GCP Solution Architect and/or TOGAF Enterprise Architect.

Leadership & Management
- Must have experience working with customers across North America and Europe.
- Written communication, technical articulation, listening, and presentation skills (8/10 minimum).
- Good conflict management and superior persuasion and negotiation skills.
- Demonstrated effective task prioritization, time management, and internal/external stakeholder management skills.
- A quick learner, self-starter, go-getter, and team player, comfortable and flexible operating in a matrix structure.
- Demonstrated appreciable Organizational Citizenship Behaviour (OCB) in past organizations.
- Proven experience in refining processes to enhance operational efficiency.
- Robust project management skills with an emphasis on punctual delivery and quality assurance.

Posted 2 weeks ago

Apply

12.0 - 20.0 years

30 - 35 Lacs

Navi Mumbai

Work from Office

Naukri logo

Job Title: Big Data Developer - Project Support & Mentorship
Location: Mumbai
Employment Type: Full-Time/Contract
Department: Engineering & Delivery

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions.
Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
Lead by example in object-oriented development, particularly using Scala and Java.
Translate complex requirements into clear, actionable technical tasks for the team.
Contribute to the development of ETL processes for integrating data from various sources.
Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
8+ years of professional experience in Big Data development and engineering.
Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
Solid object-oriented development experience with Scala and Java.
Strong SQL skills with experience working with large data sets.
Practical experience designing, installing, configuring, and supporting Big Data clusters.
Deep understanding of ETL processes and data integration strategies.
Proven experience mentoring or supporting junior engineers in a team setting.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and interpersonal skills.

Preferred Qualifications:
Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
Opportunity to work on challenging, high-impact Big Data projects.
A leadership role in shaping and mentoring the next generation of engineers.
Supportive and collaborative team culture.
Flexible working environment.
Competitive compensation and professional growth opportunities.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

5 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9

Key Responsibilities
Work with Product Owners and various stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, and design the scale-out architecture for the data platform to meet the requirements of the proposed solution.
Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies.
Play an active role in leading team meetings and workshops with clients.
Help the Data Engineering team produce high-quality code that allows us to put solutions into production.
Create and own the technical product backlogs for data projects, and help the team close the backlogs on time.
Help us shape the next generation of our products.
Assess the effectiveness and accuracy of new data sources and data-gathering techniques.
Lead data mining and collection procedures.
Ensure data quality and integrity.
Interpret and analyze data problems.
Develop custom data models and algorithms to apply to data sets.
Coordinate with different functional teams to implement models and monitor outcomes.
Develop processes and tools to monitor and analyze model performance and data accuracy.
Understand client requirements and architect robust data platforms on multiple cloud technologies.
Create reusable and scalable data pipelines.
Work with DE/DA/ETL/QA/Application and various other teams to remove roadblocks.
Align data projects with organizational goals.

Skills & Qualifications
We're looking for someone with 4-7 years of experience who has worked through large data engineering projects.
Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.
Strong problem-solving skills with an emphasis on product development.
Domain: Big Data, data platforms, distributed systems.
Coding: any language (Java/Scala/Python), with strong knowledge of Spark (the most important requirement).
Ingestion skills: one of Apache Storm, Flink, Spark.
Streaming skills: one of Kafka, Kinesis, oplogs, binlogs, Debezium.
Database skills: HDFS, Delta Lake/Iceberg, Lakehouse.

If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9
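For the streaming skills listed above, a minimal kafka-python consumer sketch follows; the brokers, topic, group id, and message shape are illustrative assumptions.

```python
# Sketch: a minimal Kafka consumer using kafka-python
# (brokers, topic, group id, and message shape are illustrative assumptions)
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                  # hypothetical topic
    bootstrap_servers=["broker:9092"],         # assumption
    group_id="ingest-demo",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for msg in consumer:
    order = msg.value
    print(msg.partition, msg.offset, order.get("order_id"))
```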

Posted 3 weeks ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Kochi

Work from Office

Naukri logo

Skill: Databricks
Experience: 5 to 14 years
Location: Kochi (walk-in on 14th June)

Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
Work experience with Databricks Unity Catalog.
Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
Optimize cluster performance and scalability to handle large volumes of data processing.
Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
Provide technical guidance and mentorship to junior data engineers.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Naukri logo

Core technical skills in Big Data (HDFS, Hive, Spark, HDP/CDP, ETL pipelines, SQL, Ranger, Python); Cloud services on AWS or Azure, preferably both (S3/ADLS, Delta Lake, Key Vault, HashiCorp, Splunk); DevOps; preferably data quality and governance knowledge; preferably hands-on experience with tools such as Dataiku/Dremio or any similar tools, or knowledge of any such tools. Should be able to lead the project and report status in a timely manner, and should ensure smooth release management.

Strategy: Responsibilities include the development, testing, and support required for the project.
Business: IT Projects - CPBB Data Technology.
Processes: As per SCB governance.
People & Talent: Applicable to SCB guidelines.
Risk Management: Applicable to SCB standards.

Key Responsibilities
Regulatory & Business Conduct: Display exemplary conduct and live by the Group's Values and Code of Conduct. Take personal responsibility for embedding the highest standards of ethics, including regulatory and business conduct, across Standard Chartered Bank. This includes understanding and ensuring compliance with, in letter and spirit, all applicable laws, regulations, guidelines and the Group Code of Conduct. Lead the team to achieve the outcomes set out in the Bank's Conduct Principles: Fair Outcomes for Clients; Effective Financial Markets; Financial Crime Compliance; The Right Environment. Effectively and collaboratively identify, escalate, mitigate and resolve risk, conduct and compliance matters.

Key stakeholders: Athena Program.

Other Responsibilities: Analysis, development, testing and support; leading the team; release management.

Skills and Experience: Hadoop, SQL, HDFS, Hive, Python, ETL processes, ADO, Confluence, data warehouse concepts, delivery process knowledge.

Qualifications: Hadoop, HDFS, HBase, Spark, Scala, ADO, Confluence, ETL processes, SQL (expert), Dremio (entry), Dataiku (entry).

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together we:
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do.
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well.
Are better together: we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term.

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing. Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to a minimum of 30 days. Flexible working options based around home and office locations, with flexible working patterns. Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental health first-aiders and all sorts of self-help toolkits. A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning. Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies - everyone feels respected and can realise their full potential.
www.sc.com/careers 29582

Posted 3 weeks ago

Apply

5.0 - 8.0 years

9 - 13 Lacs

Pune

Hybrid

Naukri logo

So, what's the role all about?
We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.

How will you make an impact?
Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
Enable self-service capabilities for customers to manage content sources using AppFlow or Tray.ai, configure ingestion rules, and set up search parameters independently.
Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines.
Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration.
Implement data governance, access control, and observability features to ensure enterprise readiness.

Have you got what it takes?
Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
5+ years' hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
Infrastructure as Code (IaC) knowledge with services such as AWS CloudFormation and the CDK.
Deep understanding of data ingestion pipelines, index management, and search query optimization.
Experience working with unstructured and semi-structured data in real-world enterprise settings.
Ability to design for scale, security, and multi-tenant environments.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor
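Because the posting centers on RAG search, here is a sketch of the retrieve-augment-generate flow in Python. Every name in it (search_index, build_prompt, the fake LLM) is a hypothetical stand-in, not the AWS Knowledge Hub or NICE API; only the shape of the pipeline is the point.

```python
# Sketch of a RAG query flow. All names are hypothetical stand-ins,
# not the Knowledge Hub API; the point is the retrieve -> augment -> generate shape.
from typing import Callable, List

def search_index(query: str, k: int = 5) -> List[str]:
    """Hypothetical retriever stub; a real one would call the search backend."""
    corpus = ["Resetting a password requires admin approval.",
              "Passwords expire every 90 days."]
    return corpus[:k]

def build_prompt(query: str, passages: List[str]) -> str:
    context = "\n\n".join(passages)
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

def answer(query: str, llm_complete: Callable[[str], str]) -> str:
    passages = search_index(query)            # retrieve
    prompt = build_prompt(query, passages)    # augment
    return llm_complete(prompt)               # generate

if __name__ == "__main__":
    fake_llm = lambda prompt: "(model output would appear here)"
    print(answer("How often do passwords expire?", fake_llm))
```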

Posted 3 weeks ago

Apply

2.0 - 4.0 years

6 - 10 Lacs

Pune

Hybrid

Naukri logo

So, what's the role all about?
We are looking for a highly driven and technically skilled Software Engineer to lead the integration of various Content Management Systems with AWS Knowledge Hub, enabling advanced Retrieval-Augmented Generation (RAG) search across heterogeneous customer data without requiring data duplication. This role will also be responsible for expanding the scope of Knowledge Hub to support non-traditional knowledge items and enhance customer self-service capabilities. You will work at the intersection of AI, search infrastructure, and developer experience to make enterprise knowledge instantly accessible, actionable, and AI-ready.

How will you make an impact?
Integrate CMS with AWS Knowledge Hub to allow seamless RAG-based search across diverse data types, eliminating the need to copy data into Knowledge Hub instances.
Extend Knowledge Hub capabilities to ingest and index non-knowledge assets, including structured data, documents, tickets, logs, and other enterprise sources.
Build secure, scalable connectors to read directly from customer-maintained indices and data repositories.
Enable self-service capabilities for customers to manage content sources using AppFlow or Tray.ai, configure ingestion rules, and set up search parameters independently.
Collaborate with the NLP/AI team to optimize relevance and performance for RAG search pipelines.
Work closely with product and UX teams to design intuitive, powerful experiences around self-service data onboarding and search configuration.
Implement data governance, access control, and observability features to ensure enterprise readiness.

Have you got what it takes?
Proven experience with search infrastructure, RAG pipelines, and LLM-based applications.
2+ years' hands-on experience with AWS Knowledge Hub, AppFlow, Tray.ai, or equivalent cloud-based indexing/search platforms.
Strong backend development skills (Python, TypeScript/Node.js, .NET/Java) and familiarity with building and consuming REST APIs.
Infrastructure as Code (IaC) knowledge with services such as AWS CloudFormation and the CDK.
Deep understanding of data ingestion pipelines, index management, and search query optimization.
Experience working with unstructured and semi-structured data in real-world enterprise settings.
Ability to design for scale, security, and multi-tenant environments.

What's in it for you?
Join an ever-growing, market-disrupting, global company where the teams, comprised of the best of the best, work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Reporting into: Tech Manager, Engineering, CX
Role Type: Individual Contributor

Posted 3 weeks ago

Apply

3.0 - 7.0 years

10 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Naukri logo

Responsibilities
Design and develop a new automation framework for ETL processing.
Support the existing framework and act as the technical point of contact for all related teams.
Enhance the existing ETL automation framework per user requirements.
Performance-tune Spark and Snowflake ETL jobs.
Run POCs on new technologies and analyze their suitability for cloud migration.
Optimize processes with the help of automation and new utility development.
Support any batch issues.
Support application teams with any queries.

Required Skills
Must be strong in UNIX shell and Python scripting.
Must be strong in Spark.
Must have strong knowledge of SQL.
Hands-on knowledge of how HDFS/Hive/Impala/Spark work.
Strong logical reasoning capabilities.
Working knowledge of GitHub, DevOps, CI/CD, and enterprise code-management tools.
Strong collaboration and communication skills; a team player with excellent written and verbal communication.
Ability to create and maintain a positive environment of shared success.
Ability to execute and prioritize tasks and resolve issues without aid from a direct manager or project sponsor.
Good to have: working experience with Snowflake and any data integration tool, e.g., Informatica Cloud.

Primary skills: Apache Hadoop, Apache Spark, UNIX shell scripting, Python, SQL.
Good-to-have skills: Snowflake/Azure/AWS (any cloud), IDMC or any ETL tool.
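For the ETL automation framework described above, a tiny sketch of a run-and-log wrapper around spark-submit follows; the job script, arguments, and logging format are illustrative assumptions.

```python
# Sketch: a tiny wrapper an ETL automation framework might use to run and
# log a spark-submit step (job script, args, and names are assumptions)
import subprocess
import sys
import time

def run_step(name: str, cmd: list) -> None:
    """Runs one pipeline step, logs duration, and fails fast on error."""
    start = time.time()
    print(f"[{name}] starting: {' '.join(cmd)}")
    rc = subprocess.run(cmd).returncode
    print(f"[{name}] finished rc={rc} in {time.time() - start:.0f}s")
    if rc != 0:
        sys.exit(rc)  # fail fast so the scheduler can alert or retry

if __name__ == "__main__":
    run_step("daily-orders", [
        "spark-submit", "--master", "yarn",
        "/opt/etl/jobs/daily_orders.py",   # hypothetical job script
        "--run-date", "2024-01-01",
    ])
```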

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Foundit logo

Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose - the relentless pursuit of a world that works better for people - we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

Inviting applications for the role of Principal Consultant. In this role, as a trusted advisor to the business, you will establish partnerships, assess business needs, provide critical interpretation of data, form hypotheses, and synthesize conclusions into recommendations, turning data analysis into insights.

Responsibilities:
Design, develop, and maintain scalable data processing solutions using Python/Scala and Spark per business needs.
Implement and manage CI/CD pipelines to automate the deployment process.
Hands-on experience with Cribl is an added advantage.
Perform production bug fixes to maintain system stability and reliability.
Develop scalable microservices to be deployed on hybrid cloud platforms like GCP Anthos.
Strong understanding of the Hadoop ecosystem, including HDFS, Hive, HBase, and other related technologies.
Stay updated with the latest industry trends and technologies in big data and cloud computing.

Qualifications we seek in you!
Minimum qualifications:
Bachelor's/Graduation/Equivalent: BE/B.Tech, MCA, MBA.
Excellent communication skills; interacts effectively with business users.
Ability to interact with business as well as technical teams.
Ability to learn and respond quickly to a fast-changing business environment.
Ability to multitask, and excellent interpersonal skills.
Analytical mindset with a problem-solving approach.

Preferred qualifications:
Very good written and presentation/verbal communication skills, with experience in a customer-facing role.
In-depth requirement-understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and a positive attitude.
Good knowledge of writing complex SQL.
Ability to work in an agile environment with a focus on continuous improvement.
Self-motivated and eager to learn.
A team player, able to lead and take initiative.
Experience in the telecommunication industry.
Experience with cloud providers (e.g., AWS).

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. Get to know us at www.genpact.com and on X, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 3 weeks ago

Apply