3786 Hadoop Jobs - Page 17

JobPe aggregates listings for easy access to openings; applications are submitted directly on the original job portal.

5.0 - 7.0 years

7 - 9 Lacs

Gurugram

Work from Office

Source: Naukri

Skilled in multiple GCP services - GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer, etc. Must have hands-on Python and SQL experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data against functional business requirements and to interface directly with customers. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: 5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services - GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflows, Composer, Error Reporting, Logs Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. An ambitious individual who can work under their own direction towards agreed targets/goals, with a creative approach to work. Preferred technical and professional experience: An intuitive individual with the ability to manage change and proven time management skills. Proven interpersonal skills, contributing to team effort by accomplishing related results as needed. Keeps technical knowledge up to date by attending educational workshops and reviewing publications, with a focus on the required skills.

Posted 3 days ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Bengaluru

Work from Office

Source: Naukri

Project Role: Data Engineer. Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must have skills: Databricks Unified Data Analytics Platform. Good to have skills: NA. Minimum 5 year(s) of experience is required. Educational Qualification: 15 years of full-time education. Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable to meet the demands of the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge in data engineering. - Continuously evaluate and improve data processes to enhance efficiency and effectiveness. Professional & Technical Skills: - Must-Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Experience with data integration and ETL tools. - Strong understanding of data modeling and database design principles. - Familiarity with cloud platforms and services related to data storage and processing. - Knowledge of programming languages such as Python or Scala for data manipulation. Additional Information: - The candidate should have a minimum of 5 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Bengaluru office. - 15 years of full-time education is required. Qualification: 15 years of full-time education.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Chief Technology Officer (CTO) Role Overview: We are seeking a visionary Chief Technology Officer to lead our technology function and drive the development of innovative AdTech solutions. In this leadership role, you will define and implement the company's technical strategy while overseeing engineering, data science, and product technology teams. Your focus will be on building scalable, high-performance platforms including RTB, DSP, and SSP systems. Key Responsibilities: Develop and execute a forward-looking technology roadmap aligned with business goals. Lead cross-functional teams in engineering and product development. Architect and manage real-time bidding systems, data infrastructure, and platform scalability. Drive innovation in AI/ML, big data, and real-time analytics. Ensure system reliability, security, DevOps, and data privacy best practices. Collaborate with leadership to deliver impactful tech-driven products. Represent the company in technical partnerships and industry events. Requirements: 10+ years in software engineering, with 5+ in a leadership role. Strong background in AdTech (RTB, DSP, SSP, OpenRTB). Expertise in AI/ML, cloud (AWS/GCP), and big data (Kafka, Spark, Hadoop). Proven experience in building scalable backend systems and leading high-performing teams. Bachelor’s or Master’s in Computer Science or Engineering; MBA/PhD is a plus.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Title: Principal Data Engineer (MTS4 / Principal Engineer) About the Role As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our functions' data capabilities, influencing multiple teams’ technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization. Key Responsibilities Architect & Define Scope Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity. Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones. Technology Leadership & Influence Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.). Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability. Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities. Execution & Delivery Spearhead strategic, multi-team projects that advance the organization’s data infrastructure and capabilities. Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions. Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning. Impact & Technical Complexity Shape how the organization operates by introducing innovative data solutions and strategic technical direction. Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions. Continuously balance short-term business needs with long-term architectural vision. Process Improvement & Best Practices Set and enforce engineering standards that elevate quality and productivity across multiple teams. Lead by example in code reviews, automation, CI/CD practices, and documentation. Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge. Qualifications Education & Experience : Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience). 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems. Technical Expertise : Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java). Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc. Proven track record of architecting and delivering highly scalable data infrastructure solutions. Leadership & Communication : Ability to navigate and bring clarity in ambiguous situations. 
Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders. Experience coaching and mentoring senior engineers. Problem-Solving: History of tackling complex, ambiguous data challenges and delivering tangible results. Comfort making informed trade-offs between opportunity and architectural complexity.

Posted 3 days ago

Apply

2.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Requirements Minimum of 2-3 years of full-stack software development experience in building large-scale, mission-critical applications. Strong foundation in computer science, with strong competencies in data structures, algorithms, and software design optimized for building highly distributed and parallelized systems. Proficiency in one or more programming languages such as Java and Python. Strong hands-on experience in MEAN, MERN, Core Java, J2EE technologies, Microservices, Spring, Hibernate, SQL, REST APIs. Experience in web development using technologies such as Angular or React. Experience with one or more of the following database technologies: SQL Server, Postgres, MySQL, and NoSQL such as HBase, MongoDB, and DynamoDB. Strong problem-solving skills to deep dive, brainstorm, and choose the best solution approach. Experience with AWS services like EKS, ECS, S3, EC2, RDS, Redshift, and GitHub/Stash, CI/CD pipelines, Maven, Jenkins, security tools, Kubernetes/VMs/Linux, monitoring, alerting, etc. Experience in Agile development is a big plus. Excellent presentation, collaboration, and communication skills required. Result-oriented and experienced in leading broad initiatives and teams. Knowledge of Big Data technologies like Hadoop, Hive, Spark, Kafka, etc. would be an added advantage. Bachelor's or Master's degree in Mathematics or Computer Science. 1-4 years of experience as a FullStack Engineer. Proven analytical skills and experience designing scalable applications. This job was posted by Vivek Chhikara from Protium.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Job Title: Principal Data Engineer (MTS4 / Principal Engineer) About the Role As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our functions' data capabilities, influencing multiple teams’ technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization. Key Responsibilities Architect & Define Scope Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity. Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones. Technology Leadership & Influence Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.). Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability. Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities. Execution & Delivery Spearhead strategic, multi-team projects that advance the organization’s data infrastructure and capabilities. Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions. Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning. Impact & Technical Complexity Shape how the organization operates by introducing innovative data solutions and strategic technical direction. Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions. Continuously balance short-term business needs with long-term architectural vision. Process Improvement & Best Practices Set and enforce engineering standards that elevate quality and productivity across multiple teams. Lead by example in code reviews, automation, CI/CD practices, and documentation. Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge. Qualifications Education & Experience : Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience). 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems. Technical Expertise : Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java). Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc. Proven track record of architecting and delivering highly scalable data infrastructure solutions. Leadership & Communication : Ability to navigate and bring clarity in ambiguous situations. 
Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders. Experience coaching and mentoring senior engineers. Problem-Solving: History of tackling complex, ambiguous data challenges and delivering tangible results. Comfort making informed trade-offs between opportunity and architectural complexity.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Cloud DevOps Architect Location: Pune, India Experience: 10 - 15 Years Work Mode: Full-time, Office-based Company: Smartavya Analytica Private Limited Company Overview: Smartavya Analytica is a niche Data and AI company based in Mumbai, established in 2017. We specialize in data-driven innovation, transforming enterprise data into strategic insights. With expertise spanning 25+ Data Modernization projects and handling large datasets up to 24 PB in a single implementation, we have successfully delivered data and AI projects across multiple industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are specialists in Cloud, Hadoop, Big Data, AI, and Analytics, with a strong focus on Data Modernization for On-Premises, Private, and Public Cloud Platforms. Visit us at: https://smart-analytica.com Job Summary: We are looking for an accomplished Cloud DevOps Architect to design and implement robust DevOps and Infrastructure Automation frameworks across Azure, GCP, or AWS environments. The ideal candidate will have a deep understanding of CI/CD, IaC, VPC networking, security, and automation using Terraform or Ansible. Key Responsibilities: Architect and build end-to-end DevOps pipelines using native cloud services (Azure DevOps, AWS CodePipeline, GCP Cloud Build) and third-party tools (Jenkins, GitLab, etc.). Define and implement foundation setup architecture (Azure, GCP, and AWS) as per the recommended best practices. Design and deploy secure VPC architectures; manage networking, security groups, load balancers, and VPN gateways. Implement Infrastructure as Code (IaC) using Terraform or Ansible for scalable and repeatable deployments. Establish CI/CD frameworks integrating with Git, containers, and orchestration tools (e.g., Kubernetes, ECS, AKS, GKE). Define and enforce cloud security best practices including IAM, encryption, secrets management, and compliance standards. Collaborate with application, data, and security teams to optimize infrastructure, release cycles, and system performance. Drive continuous improvement in automation, observability, and incident response practices. Must-Have Skills: 10-15 years of experience in DevOps, Infrastructure, or Cloud Architecture roles. Deep hands-on expertise in the Azure, GCP, or AWS cloud platforms (any one is mandatory, more is a bonus). Strong knowledge of VPC architecture, cloud security, IAM, and networking principles. Expertise in Terraform or Ansible for Infrastructure as Code. Experience building resilient CI/CD pipelines and automating application deployments. Strong troubleshooting skills across networking, compute, storage, and containers. Preferred Certifications: Azure DevOps Engineer Expert / AWS Certified DevOps Engineer Professional / Google Professional DevOps Engineer; HashiCorp Certified: Terraform Associate (preferred for Terraform users).

Posted 3 days ago

Apply

2.0 - 5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Source: LinkedIn

Requirements Proficient in SQL and Linux with hands-on experience. Strong understanding of the Hadoop ecosystem and job scheduling tools like Airflow and Oozie. Skilled in writing and executing SQL queries for comprehensive data validation. Familiarity with test automation frameworks (e.g., Robot Framework), with automation skills as an asset. Basic programming knowledge in Python is a plus. Experience with S3 buckets and cloud storage workflows is advantageous. Strong analytical and problem-solving skills with a high attention to detail. Excellent verbal and written communication abilities. Ability to collaborate effectively in a fast-paced Agile/Scrum environment. Adaptable and eager to learn new tools, technologies, and processes. 2-5 years of experience in Big Data testing, focusing on both automated and manual testing for data validation and UI testing. Proven experience in testing Spark job performance, security, and integration across diverse systems. Hands-on experience with defect tracking tools such as JIRA or Bugzilla. This job was posted by Sushruti Nikumbh from Hoonartek.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Job Title: Principal Data Engineer (MTS4 / Principal Engineer) About the Role As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our functions' data capabilities, influencing multiple teams’ technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization. Key Responsibilities Architect & Define Scope Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity. Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones. Technology Leadership & Influence Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.). Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability. Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities. Execution & Delivery Spearhead strategic, multi-team projects that advance the organization’s data infrastructure and capabilities. Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions. Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning. Impact & Technical Complexity Shape how the organization operates by introducing innovative data solutions and strategic technical direction. Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions. Continuously balance short-term business needs with long-term architectural vision. Process Improvement & Best Practices Set and enforce engineering standards that elevate quality and productivity across multiple teams. Lead by example in code reviews, automation, CI/CD practices, and documentation. Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge. Qualifications Education & Experience : Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience). 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems. Technical Expertise : Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java). Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc. Proven track record of architecting and delivering highly scalable data infrastructure solutions. Leadership & Communication : Ability to navigate and bring clarity in ambiguous situations. 
Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders. Experience coaching and mentoring senior engineers. Problem-Solving: History of tackling complex, ambiguous data challenges and delivering tangible results. Comfort making informed trade-offs between opportunity and architectural complexity.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Principal Data Engineer (MTS4 / Principal Engineer) About the Role As a Principal Data Engineer, you will drive the strategy, architecture, and execution of large-scale data solutions across our function. This role involves tackling highly ambiguous, complex challenges where the business problem may not be fully defined at the outset. You will partner closely with cross-functional teams (Engineering, Product, Operations) to shape and deliver our data roadmap. Your work will have a profound impact on our functions' data capabilities, influencing multiple teams’ technical and product direction. You should bring deep expertise in designing and developing robust data pipelines and platforms, leveraging technologies such as Spark, Airflow, Kafka, and other emerging tools. You will set standards and best practices that raise the bar for engineering excellence across the organization. Key Responsibilities Architect & Define Scope Own end-to-end design of critical data pipelines and platforms in an environment characterized by high ambiguity. Translate loosely defined business objectives into a clear technical plan, breaking down complex problems into achievable milestones. Technology Leadership & Influence Provide thought leadership in data engineering, driving the adoption of Spark, Airflow, Kafka, and other relevant technologies (e.g., Hadoop, Flink, Kubernetes, Snowflake, etc.). Lead design reviews and champion best practices for coding, system architecture, data quality, and reliability. Influence senior stakeholders (Engineers, EMs, Product Managers) on technology decisions and roadmap priorities. Execution & Delivery Spearhead strategic, multi-team projects that advance the organization’s data infrastructure and capabilities. Deconstruct complex architectures into simpler components that can be executed by various teams in parallel. Drive operational excellence, owning escalations and ensuring high availability, scalability, and cost-effectiveness of our data solutions. Mentor and develop engineering talent, fostering a culture of collaboration and continuous learning. Impact & Technical Complexity Shape how the organization operates by introducing innovative data solutions and strategic technical direction. Solve endemic, highly complex data engineering problems with robust, scalable, and cost-optimized solutions. Continuously balance short-term business needs with long-term architectural vision. Process Improvement & Best Practices Set and enforce engineering standards that elevate quality and productivity across multiple teams. Lead by example in code reviews, automation, CI/CD practices, and documentation. Champion a culture of continuous improvement, driving adoption of new tools and methodologies to keep our data ecosystem cutting-edge. Qualifications Education & Experience : Bachelor’s or Master’s degree in Computer Science, Engineering, or related field (or equivalent experience). 5+ years of software/data engineering experience, with significant exposure to large-scale distributed systems. Technical Expertise : Demonstrated proficiency with Spark, Airflow, Kafka, and at least one major programming language (e.g., Python, Scala, Java). Experience with data ecosystem technologies such as Hadoop, Flink, Snowflake, Kubernetes, etc. Proven track record of architecting and delivering highly scalable data infrastructure solutions. Leadership & Communication : Ability to navigate and bring clarity in ambiguous situations. 
Strong cross-functional collaboration skills, influencing both technical and non-technical stakeholders. Experience coaching and mentoring senior engineers. Problem-Solving: History of tackling complex, ambiguous data challenges and delivering tangible results. Comfort making informed trade-offs between opportunity and architectural complexity.

Posted 3 days ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: AI/ML Engineer Location: Pune, India About the Role: We’re looking for highly analytical, technically strong Artificial Intelligence/Machine Learning Engineers to help build scalable, data-driven systems in the digital marketing space. You'll work alongside a top-tier team on impactful solutions affecting billions of users globally. Experience Required: 3 – 7 years Key Responsibilities: Collaborate across Data Science, Ops, and Engineering to tackle large-scale ML challenges. Build and manage robust ML pipelines (ETL, training, deployment) in real-time environments. Optimize models and infrastructure for performance and scalability. Research and implement best practices in ML systems and lifecycle management. Deploy deep learning models using high-performance computing environments. Integrate ML frameworks into cloud/distributed systems. Required Skills: 2+ years of Python development in a programming-intensive role. 1+ year of hands-on ML experience (e.g., Classification, Clustering, Optimization, Deep Learning). 2+ years working with distributed frameworks (Spark, Hadoop, Kubernetes). 2+ years with ML tools such as TensorFlow, PyTorch, Keras, MLlib. 2+ years of experience with cloud platforms (AWS, Azure, GCP). Excellent communication skills. Preferred: Prior experience in AdTech or digital advertising platforms (DSP, Ad Exchange, SSP). Education: M.Tech or Ph.D. in Computer Science, Software Engineering, Mathematics, or a related discipline. Why Apply? Join a fast-moving team working at the forefront of AI in advertising. Build technologies that impact billions of users worldwide. Shape the future of programmatic and performance advertising.

Posted 3 days ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Source: Naukri

RandomTrees is a leading Data & AI company offering a diverse range of products and services within the data and AI space. We are seeking a skilled Big Data Engineer. As a strategic partner of IBM, we support multiple industries, including Pharma, Banking, Semiconductor, Oil & Gas, and more. Additionally, we are actively engaged in research and innovation in Generative AI (GenAI) and Conversational AI. Headquartered in the United States, we also have offices in Hyderabad and Chennai, India. Job Title: Big Data Engineer Experience: 5-9 Years Location: Hyderabad-Hybrid Employment Type: Full-Time Job Summary: We are seeking a skilled Big Data Engineer with 5-9 years of experience in building and managing scalable data pipelines and analytics solutions. The ideal candidate will have strong expertise in Big Data, Hadoop, Apache Spark, SQL, and Data Lake/Data Warehouse architectures. Experience working with any cloud platform (AWS, Azure, or GCP) is preferred. Required Skills: 5-9 years of hands-on experience as a Big Data Engineer. Strong proficiency in Apache Spark (PySpark or Scala). Solid understanding and experience with SQL and database optimization. Experience with data lake or data warehouse environments and architecture patterns. Good understanding of data modeling, performance tuning, and partitioning strategies. Experience in working with large-scale distributed systems and batch/stream data processing. Preferred Qualifications: Experience with cloud platforms, preferably GCP, AWS, or Azure. Education: Bachelor's degree in Computer Science, Engineering, or a related field.

Posted 3 days ago

Apply

6.0 - 8.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Source: Naukri

What you will do In this vital role, we are seeking a highly skilled and hands-on Senior Software Engineer (Search) to drive the development of intelligent, scalable search systems across our pharmaceutical organization. You'll work at the intersection of software engineering, AI, and life sciences to enable seamless access to structured and unstructured content spanning research papers, clinical trial data, regulatory documents, and internal scientific knowledge. This is a high-impact role where your code directly accelerates innovation and decision-making in drug development and healthcare delivery. Design, implement, and optimize search services using technologies such as Elasticsearch, OpenSearch, Solr, or vector search frameworks. Collaborate with data scientists and analysts to deliver data models and insights. Develop custom ranking algorithms, relevancy tuning, and semantic search capabilities tailored to scientific and medical content. Support the development of intelligent search features like query understanding, question answering, summarization, and entity recognition. Build and maintain robust, cloud-native APIs and backend services to support high-availability search infrastructure (e.g., AWS, GCP, Azure). Implement CI/CD pipelines, observability, and monitoring for production-grade search systems. Work closely with Product Owners and Tech Architects. Enable indexing of both structured (e.g., clinical trial metadata) and unstructured (e.g., PDFs, research papers) content. Design & develop modern data management tools to curate our most important data sets, models and processes, while identifying areas for process automation and further efficiencies. Expertise in programming languages such as Python, Java, React, TypeScript, or similar. Strong experience with data storage and processing technologies (e.g., Hadoop, Spark, Kafka, Airflow, SQL/NoSQL databases). Demonstrate strong initiative and ability to work with minimal supervision or direction. Strong experience with cloud infrastructure (AWS, Azure, or GCP) and infrastructure as code such as Terraform. In-depth knowledge of relational and columnar SQL databases, including database design. Expertise in data warehousing concepts (e.g. star schema, entitlement implementations, SQL vs. NoSQL modeling, milestoning, indexing, partitioning). Experience in REST and/or GraphQL. Experience in creating Spark jobs for data transformation and aggregation. Experience with distributed, multi-tiered systems, algorithms, and relational databases. Possesses strong rapid prototyping skills and can quickly translate concepts into working code. Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software. Analyze and understand the functional and technical requirements of applications. Identify and resolve software bugs and performance issues. Work closely with multi-functional teams, including product management, design, and QA, to deliver high-quality software on time. Maintain detailed documentation of software designs, code, and development processes. Basic Qualifications: Degree in Computer Science & Engineering preferred, with 6-8 years of software development experience. Proficient in Databricks, data engineering, Python, search algorithms using NLP/AI models, GCP Cloud services, GraphQL. Hands-on experience with search technologies (Elasticsearch, Solr, OpenSearch, or Lucene). Hands-on experience with full-stack software development.
Proficient in programming languages: Java, Python, Fast Python, Databricks/RDS, data engineering, S3 buckets, ETL, Hadoop, Spark, Airflow, AWS Lambda. Experience with data streaming frameworks (Apache Kafka, Flink). Experience with cloud platforms (AWS, Azure, Google Cloud) and related services (e.g., S3, Redshift, BigQuery, Databricks). Hands-on experience with various cloud services and an understanding of the pros and cons of various cloud services within well-architected cloud design principles. Working knowledge of open-source tools such as AWS Lambda. Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills. Preferred Qualifications: Experience in Python, Java, React, Fast Python, TypeScript, JavaScript, CSS, and HTML is desirable. Experienced with API integration, serverless, and microservices architecture. Experience in Databricks, PySpark, Spark, SQL, ETL, Kafka. Solid understanding of data governance, data security, and data quality best practices. Experience with unit testing, building, and debugging code. Experienced with the AWS/Azure platforms, building and deploying code. Experience with vector databases for large language models, Databricks, or RDS. Experience with DevOps CI/CD build and deployment pipelines. Experience in Agile software development methodologies. Experience in end-to-end testing. Experience with additional modern database technologies. Good-to-Have Skills: Willingness to work on AI applications. Experience in MLOps, React, JavaScript, Java, GCP search engines. Experience with popular large language models. Experience with the LangChain or LlamaIndex frameworks for language models. Experience with prompt engineering and model fine-tuning. Knowledge of NLP techniques for text analysis and sentiment analysis. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global teams. High degree of initiative and self-motivation. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Greetings from TCS!!! TCS has been a great pioneer in feeding the fire of young Techies like you. We are a global leader in the technology arena and there’s nothing that can stop us from growing together. Experience - 5+ years Location - Pune • Expertise with Big Data, the Hadoop ecosystem, Spark (Scala/Java). • Experience in working with large cloud data lakes. • Experience with large-scale data processing, complex event processing, stream processing. • Experience in working with CI/CD pipelines, source code repositories, and operating environments. • Experience in working with both structured and unstructured data, with a high degree of SQL knowledge. • Experience designing and implementing scalable ETL/ELT processes and modeling data for low-latency reporting. • Experience in performance tuning, troubleshooting and diagnostics, process monitoring, and profiling. • Understanding of containerization, virtualization, and cloud computing. • Object-oriented programming and component-based development with Java. • Experience working in the Scrum Agile software development framework. • Ability to work in a fast-paced environment with evolving requirements and capability goals.

Posted 3 days ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Position: Solution Architect Location: Chennai/ Bangalore/ Kuala Lumpur Experience: 8+ years Employment Type: Full-time Job Overview Join Moving Walls, a trailblazer in the Out-of-Home (OOH) advertising and AdTech ecosystem, as a Solution Architect. This pivotal role places you at the heart of our innovative journey, designing and implementing scalable, efficient, and transformative solutions for our award-winning platforms like LMX and MAX . With a focus on automating and enhancing media transactions, you’ll enable a seamless connection between media buyers and sellers in a rapidly evolving digital-first landscape. As a Solution Architect, you will bridge the gap between business objectives and technical execution, working in an Agile environment with POD-based execution models to ensure ownership and accountability. You will drive initiatives that revolutionize the way data and technology shape OOH advertising. Why Join Us? ● Innovative Vision: Be part of a team committed to "Creating the Future of Outernet Media", where every solution impacts global markets across Asia, ANZ, Africa, and more. ● Cutting-edge Projects: Work on features like programmatic deal automation, data-driven audience insights, and dynamic campaign management for platforms connecting billions of ad impressions. ● Collaborative Culture: Collaborate with multidisciplinary teams, including Sales, Product Management, and Engineering, to craft solutions that are customized and impactful. What You’ll Do: ● Architect scalable and innovative solutions for AdTech products, ensuring alignment with organizational goals and market needs. ● Collaborate with cross-functional teams to gather, analyze, and translate business requirements into technical designs. ● Lead the development of programmatic solutions, dynamic audience segmentation tools, and integrations for global markets. ● Enhance existing products by integrating advanced features like dynamic rate cards, bid management, and inventory mapping. ● Advocate for best practices in system design, ensuring the highest standards of security, reliability, and performance. What You Bring: ● A strong technical background with hands-on experience in cloud-based architectures, API integrations, and data analytics. ● Proven expertise in working within an Agile environment and leading POD-based teams to deliver high-impact results. ● Passion for AdTech innovation and the ability to navigate complex, fast-paced environments. ● Excellent problem-solving skills, creativity, and a customer-centric mindset. Key Responsibilities 1. Solution Design: ○ Develop end-to-end solution architectures for web, mobile, and cloud-based platforms using the specified tech stack. ○ Translate business requirements into scalable and reliable technical solutions. 2. Agile POD-Based Execution: ○ Collaborate with cross-functional POD teams (Product, Engineering, QA, and Operations) to deliver iterative and focused solutions. ○ Ensure clear ownership of deliverables within the POD, fostering accountability and streamlined execution. ○ Contribute to defining and refining the POD stages to ensure alignment with organizational goals. 3. Collaboration and Stakeholder Management: ○ Work closely with product, engineering, and business teams to define technical requirements. ○ Lead technical discussions with internal and external stakeholders. 4. Technical Expertise: ○ Provide architectural guidance and best practices for system integrations, APIs, and microservices. 
○ Ensure solutions meet non-functional requirements like scalability, reliability, and security. 5. Documentation: ○ Prepare and maintain architectural documentation, including solution blueprints and workflows. ○ Create technical roadmaps and detailed design documentation. 6. Mentorship: ○ Guide and mentor engineering teams during development and deployment phases. ○ Review code and provide technical insights to improve quality and performance. 7. Innovation and Optimization: ○ Identify areas for technical improvement and drive innovation in solutions. ○ Evaluate emerging technologies to recommend the best tools and frameworks. Required Skills and Qualifications ● Bachelor’s/Master’s degree in Computer Science, Information Technology, or a related field. ● Proven experience as a Solution Architect or a similar role. ● Expertise in programming languages and frameworks: Java, Angular, Python, C++. ● Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras. ● Experience in deploying AI models in production, including optimizing for performance and scalability. ● Understanding of deep learning, NLP, computer vision, or generative AI techniques. ● Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization. ● Strong knowledge of enterprise architecture frameworks (TOGAF, Zachman, etc.). ● Expertise in distributed systems, microservices, and cloud-native architectures. ● Experience in API design, data pipelines, and integration of AI services within existing systems. ● Strong knowledge of databases: MongoDB, SQL, NoSQL. ● Proficiency in working with large-scale datasets, data wrangling, and ETL pipelines. ● Hands-on experience with CI/CD pipelines for AI development. ● Version control systems like Git and experience with ML lifecycle tools such as MLflow or DVC. ● Proven track record of leading AI-driven projects from ideation to deployment. ● Hands-on experience with cloud platforms (AWS, Azure, GCP) for deploying AI solutions. ● Familiarity with Agile methodologies, especially POD-based execution models. ● Strong problem-solving skills and ability to design scalable solutions. ● Excellent communication skills to articulate technical solutions to stakeholders. Preferred Qualifications ● Experience in e-commerce, AdTech, or OOH (Out-of-Home) advertising technology. ● Knowledge of tools like Jira, Confluence, and Agile frameworks like Scrum or Kanban. ● Certification in cloud technologies (e.g., AWS Solutions Architect). Tech Stack ● Programming Languages: Java, Python or C++ ● Frontend Framework: Angular ● Database Technologies: MongoDB, SQL, NoSQL ● Cloud Platform: AWS ● Familiarity with data processing tools like Pandas, NumPy, and big data frameworks (e.g., Hadoop, Spark). ● Experience with cloud platforms for AI (AWS SageMaker, Azure ML, Google Vertex AI). ● Understanding of APIs, microservices, and containerization tools like Docker and Kubernetes. Share your profile to kushpu@movingwalls.com

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Job Title: Machine Learning Engineer Experience: 5-7 years Location: Bangalore Key Responsibilities: - Design, develop, and deploy machine learning models and algorithms using Python - Collaborate with cross-functional teams to define project requirements and deliverables - Analyze large datasets to extract insights and inform decision-making - Implement and optimize machine learning algorithms using frameworks like TensorFlow, PyTorch, and scikit-learn - Evaluate model performance, conduct A/B testing, and iteratively improve model accuracy and efficiency Required Skills: - Technical Skills: - Proficiency in Python programming - Experience with machine learning frameworks (TensorFlow, PyTorch, Keras) - Strong understanding of data analysis techniques and statistical methods - Familiarity with data engineering tools (SQL, Hadoop, Spark) - Soft Skills: - Excellent problem-solving skills and attention to detail - Strong communication skills to convey complex technical concepts effectively Preferred Skills: - Experience with cloud platforms (AWS, Azure, GCP) and MLOps practices - Familiarity with Agile methodologies and DevOps practices - Knowledge of specific domains like natural language processing (NLP) or computer vision Education: - Bachelor's or Master's degree in Computer Science, Mathematics, or a related field

Posted 3 days ago

Apply

5.0 - 8.0 years

1 Lacs

Hyderābād

On-site

Source: Glassdoor

Assistant / Deputy Manager Hyderabad B.E./MCA/B.Tech/M.Sc. (I.T.) 25-35 Experience & Role: Experience - 5-8+ years of relevant experience. Role - We are looking for a technically strong and detail-oriented professional to manage and support our Cloudera Data Platform (CDP) ecosystem. The ideal candidate should possess in-depth expertise in distributed data processing frameworks and hands-on experience with core Hadoop components. This role requires both operational excellence and technical depth, with an emphasis on optimizing data processing pipelines and maintaining high system availability. Job Description: - Administer and maintain the Cloudera Data Platform (CDP) across all environments (dev/test/prod). - Strong expertise in the Big Data ecosystem: Spark, Hive, Sqoop, HDFS, MapReduce, Oozie, YARN, HBase, NiFi. - Develop and optimize complex Hive queries, including the use of analytical functions for reporting and data transformation. - Create custom UDFs in Hive to handle specific business logic and integration needs. - Ensure efficient data ingestion and movement using Sqoop, NiFi, and Oozie workflows. - Work with various data formats (CSV, TSV, Parquet, ORC, JSON, AVRO) and compression techniques (Gzip, Snappy) to maximize performance and storage. - Monitor and tune performance of YARN and Spark applications for optimal resource utilization. - In-depth knowledge of the architecture of distributed systems and parallel computing. - Good knowledge of Oracle PL/SQL and shell scripting. - Strong problem-solving and analytical thinking. - Effective communication and documentation skills. - Ability to collaborate across multi-disciplinary teams. - Self-driven with the ability to manage multiple priorities under tight timelines. Job Types: Full-time, Permanent Pay: Up to ₹100,000.00 per year Schedule: Day shift Monday to Friday Work Location: In person

Posted 3 days ago

Apply

3.0 - 10.0 years

5 - 18 Lacs

India

On-site

Source: Glassdoor

Overview: We are looking for a skilled GCP Data Engineer with 3 to 10 years of real hands-on experience in data ingestion, data engineering, data quality, data governance, and cloud data warehouse implementations using GCP data services. The ideal candidate will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment. Key Responsibilities: • Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs. • Develop and maintain data ingestion frameworks and pipelines from various data sources using GCP services. • Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements. • Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows. • Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services. • Develop and implement data and semantic interoperability specifications. • Work closely with business teams to define and scope requirements. • Analyze existing systems to identify appropriate data sources and drive continuous improvement. • Implement and continuously enhance automation processes for data ingestion and data transformation. • Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines. • Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management. Skills and Qualifications: • Overall 3-10 years of hands-on experience as a Data Engineer, with at least 2-3 years of direct GCP Data Engineering experience. • Strong SQL and Python development skills are mandatory. • Solid experience in data engineering, working with distributed architectures, ETL/ELT, and big data technologies. • Demonstrated knowledge and experience with Google Cloud BigQuery is a must. • Experience with Dataproc and Dataflow is highly preferred. • Strong understanding of serverless data warehousing on GCP and familiarity with DWBI modeling frameworks. • Extensive experience in SQL across various database platforms. • Experience with any BI tool is also preferred. • Experience in data mapping and data modeling. • Familiarity with data analytics tools and best practices. • Hands-on experience with one or more programming/scripting languages such as Python, JavaScript, Java, R, or UNIX shell. • Practical experience with Google Cloud services including but not limited to BigQuery, BigTable, Cloud Dataflow, Cloud Dataproc, Cloud Storage, Pub/Sub, Cloud Functions, Cloud Composer, Cloud Spanner, and Cloud SQL. • Knowledge of modern data mining, cloud computing, and data management tools (such as Hadoop, HDFS, and Spark). • Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc. • Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP. • GCP Data Engineer Certification is highly preferred. Job Type: Full-time Pay: ₹500,298.14 - ₹1,850,039.92 per year Benefits: Health insurance Schedule: Rotational shift Work Location: In person

Posted 3 days ago

Apply

8.0 years

28 - 30 Lacs

Hyderābād

On-site

Source: Glassdoor

Experience - 8+ Years Budget - 30 LPA (Including Variable Pay) Location - Bangalore, Hyderabad, Chennai (Hybrid) Shift Timing - 2 PM - 11 PM ETL Development Lead (8+ years) Experience leading and mentoring a team of Talend ETL developers. Providing technical direction and guidance on ETL/Data Integration development to the team. Designing complex data integration solutions using Talend & AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies. Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up-to-date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards. Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment. Basic understanding of AWS services, i.e. EC2, S3, EFS, EBS, IAM, AWS Roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB. Understanding of AWS data integration services, i.e. Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions. Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US Healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner/Data Engineer Associate. Experience with AWS Data & Infrastructure Services. A basic working knowledge of Terraform and GitLab is required. Experience with scripting languages such as Python or shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person

Posted 3 days ago

Apply

0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Global Technology Solutions (GTS) at ResMed is a division dedicated to creating innovative, scalable, and secure platforms and services for patients, providers, and people across ResMed. The primary goal of GTS is to accelerate well-being and growth by transforming the core, enabling patient, people, and partner outcomes, and building future-ready operations. The strategy of GTS focuses on aligning goals and promoting collaboration across all organizational areas. This includes fostering shared ownership, developing flexible platforms that can easily scale to meet global demands, and implementing global standards for key processes to ensure efficiency and consistency.

Role Overview: As a Data Engineering Lead, you will be responsible for overseeing and guiding the data engineering team in developing, optimizing, and maintaining our data infrastructure. You will play a critical role in ensuring the seamless integration and flow of data across the organization, enabling data-driven decision-making and analytics.

Key Responsibilities: Data Integration: Coordinate with various teams to ensure seamless data integration across the organization's systems. ETL Processes: Develop and implement efficient data transformation and ETL (Extract, Transform, Load) processes. Performance Optimization: Optimize data flow and system performance for enhanced functionality and efficiency. Data Security: Ensure adherence to data security protocols and compliance standards to protect sensitive information. Infrastructure Management: Oversee the development and maintenance of the data infrastructure, ensuring scalability and reliability. Collaboration: Work closely with data scientists, analysts, and other stakeholders to support data-driven initiatives. Innovation: Stay updated with the latest trends and technologies in data engineering and implement best practices.

Qualifications: Experience: Proven experience in data engineering, with a strong background in leading and managing teams. Technical Skills: Proficiency in programming languages such as Python, Java, and SQL, along with experience in big data technologies like Hadoop, Spark, and Kafka. Data Management: In-depth understanding of data warehousing, data modeling, and database management systems. Analytical Skills: Strong analytical and problem-solving skills with the ability to handle complex data challenges. Communication: Excellent communication and interpersonal skills, capable of working effectively with cross-functional teams. Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Why Join Us? Work on cutting-edge data projects and contribute to the organization's data strategy. Collaborative and innovative work environment that values creativity and continuous learning. If you are a strategic thinker with a passion for data engineering and leadership, we would love to hear from you. Apply now to join our team and make a significant impact on our data-driven journey. #LI-India

Joining us is more than saying “yes” to making the world a healthier place. It’s discovering a career that’s challenging, supportive and inspiring. Where a culture driven by excellence helps you not only meet your goals, but also create new ones. We focus on creating a diverse and inclusive culture, encouraging individual expression in the workplace and thrive on the innovative ideas this generates. If this sounds like the workplace for you, apply now! We commit to respond to every applicant.
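The ETL responsibilities above sit on a Python/Spark/Kafka stack. A minimal PySpark batch-transform sketch of the kind of pipeline implied is shown below; the source path, column names, and output location are assumptions for illustration, not details from the posting:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Minimal batch ETL sketch: read raw JSON events, standardize fields, write partitioned Parquet.
spark = SparkSession.builder.appName("device-events-etl").getOrCreate()

raw = spark.read.json("s3a://example-raw/device_events/")            # hypothetical source path
clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["device_id", "event_ts"])                    # basic de-duplication
       .filter(F.col("device_id").isNotNull())
)

clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-curated/device_events/"                           # hypothetical target path
)
```

A streaming variant of the same transform would typically read from Kafka with Spark Structured Streaming instead of a static JSON path.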

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

GlassDoor logo

Digital Solutions Consultant I - HYD015Q Company : Worley Primary Location : IND-AP-Hyderabad Job : Digital Solutions Schedule : Full-time Employment Type : Agency Contractor Job Level : Experienced Job Posting : Jun 16, 2025 Unposting Date : Jul 16, 2025 Reporting Manager Title : Senior General Manager

We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future. Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals, and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role: As a Digital Solutions Consultant with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are looking for a skilled Data Engineer to join our Digital Customer Solutions team. The ideal candidate should have experience in cloud computing and big data technologies. As a Data Engineer, you will be responsible for designing, building, and maintaining scalable data solutions that can handle large volumes of data. You will work closely with stakeholders to ensure that the data is accurate, reliable, and easily accessible.

Responsibilities: Design, build, and maintain scalable data pipelines that can handle large volumes of data. Document the design of proposed solutions, including structuring data (data modelling applying different techniques, including 3NF and dimensional modelling) and optimising data for further consumption (working closely with Data Visualization Engineers, Front-end Developers, Data Scientists and ML Engineers). Develop and maintain ETL processes to extract data from various sources (including sensor, semi-structured and unstructured data, as well as structured data stored in traditional databases, file stores, or SOAP and REST data interfaces). Develop data integration patterns for batch and streaming processes, including implementation of incremental loads. Build quick prototypes and proofs of concept to validate assumptions and prove the value of proposed solutions or new cloud-based services. Define data engineering standards and develop data ingestion/integration frameworks. Participate in code reviews and ensure all solutions are aligned to architectural and requirement specifications. Develop and maintain cloud-based infrastructure to support data processing using Azure Data Services (ADF, ADLS, Synapse, Azure SQL DB, Cosmos DB). Develop and maintain automated data quality pipelines. Collaborate with cross-functional teams to identify opportunities for process improvement. Manage a team of Data Engineers.

About You: To be considered for this role it is envisaged you will possess the following attributes: Bachelor’s degree in Computer Science or a related field. 7+ years of experience in big data technologies such as Hadoop, Spark, Hive & Delta Lake.
7+ years of experience in cloud computing platforms such as Azure, AWS or GCP. Experience working in cloud data platforms, including a deep understanding of scaled data solutions. Experience working with different data integration patterns (batch and streaming) and implementing incremental data loads. Proficient in scripting in Java, Windows and PowerShell. Proficient in at least one programming language like Python or Scala. Expert in SQL. Proficient in working with data services like ADLS, Azure SQL DB, Azure Synapse, Snowflake, NoSQL stores (e.g. Cosmos DB, MongoDB), Azure Data Factory, Databricks or similar on AWS/GCP. Experience in using ETL tools (like Informatica IICS Data Integration) is an advantage. Strong understanding of data quality principles and experience in implementing them.

Moving forward together: We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice Here. Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.
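One of the integration patterns the posting calls out explicitly is the incremental load. A simplified, watermark-based PySpark sketch is below; it assumes a Delta Lake environment (e.g. Databricks or Synapse), and the paths, table, and column names are illustrative only. A production version would usually MERGE on a business key and persist the watermark in a control table rather than a variable:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load").getOrCreate()

# High-water mark from the last successful run; normally read from a control table.
last_watermark = "2025-06-01 00:00:00"

# Pull only the rows changed since the last run.
source = spark.read.format("delta").load("/mnt/raw/orders")          # hypothetical Delta source
changed_rows = source.filter(F.col("modified_at") > F.lit(last_watermark))

# Append the delta to the curated layer (a real pipeline would usually MERGE instead).
changed_rows.write.format("delta").mode("append").save("/mnt/curated/orders")

# New watermark to persist for the next run.
new_watermark = changed_rows.agg(F.max("modified_at")).first()[0]
```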

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

GlassDoor logo

Job requisition ID :: 84234 Date: Jun 15, 2025 Location: Delhi Designation: Senior Consultant Entity:

What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team: Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice.

Work you’ll do: As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and big data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Exp - 2 to 7 years. Location - Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar.

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills: AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions. Big data: Hadoop, Spark, Delta Lake. Programming: Python, PySpark. Databases: SQL, PostgreSQL, NoSQL. Data warehousing and analytics. ETL/ELT processes. Data lake architectures. Version control: GitHub.

Your role as a leader: At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization: develop high-performing people and teams through challenging and meaningful opportunities; deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders; influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people; understand key objectives for clients and Deloitte, aligning people to objectives and setting priorities and direction; and act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make.

How you will grow: At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose: Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips: We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
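Much of the day-to-day work listed above centres on Glue ETL jobs written in PySpark. A hedged skeleton of such a job is below; the catalog database, table, and bucket names are made up for illustration and are not part of the posting:

```python
import sys

from pyspark.context import SparkContext
from pyspark.sql import functions as F
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions

# Standard Glue job bootstrap: resolve arguments and initialize the job context.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read from a (hypothetical) Glue Data Catalog table, apply a simple transform,
# and land the result as partitioned Parquet on S3.
dyf = glue_context.create_dynamic_frame.from_catalog(database="raw_db", table_name="transactions")
df = (
    dyf.toDF()
       .filter(F.col("amount") > 0)
       .withColumn("load_date", F.current_date())
)
df.write.mode("append").partitionBy("load_date").parquet("s3://example-curated-bucket/transactions/")

job.commit()
```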

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

GlassDoor logo

Job requisition ID :: 84245 Date: Jun 15, 2025 Location: Delhi Designation: Consultant Entity:

What impact will you make? Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team: Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about Analytics and Information Management Practice.

Work you’ll do: As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and big data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages. Exp - 2 to 7 years. Location - Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar.

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills: AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions. Big data: Hadoop, Spark, Delta Lake. Programming: Python, PySpark. Databases: SQL, PostgreSQL, NoSQL. Data warehousing and analytics. ETL/ELT processes. Data lake architectures. Version control: GitHub.

Your role as a leader: At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization: develop high-performing people and teams through challenging and meaningful opportunities; deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders; influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people; understand key objectives for clients and Deloitte, aligning people to objectives and setting priorities and direction; and act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make.

How you will grow: At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose: Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips: We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
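The analytics side of the stack described above (S3 data lakes queried through Amazon Athena) is usually driven programmatically. A small boto3 sketch of running an Athena query and polling for completion follows; the region, database, query, and result bucket are placeholder values, not details from the posting:

```python
import time

import boto3

athena = boto3.client("athena", region_name="ap-south-1")

# Submit a query against a (hypothetical) curated database backed by S3.
execution = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "curated_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = execution["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows) - 1} result rows")  # the first row holds the column headers
```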

Posted 3 days ago

Apply

175.0 years

4 - 6 Lacs

Gurgaon

On-site

GlassDoor logo

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you’ll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

Smart Monitoring is an industry-leading, award-winning Risk Monitoring/Control Testing platform owned and managed by Global Risk Compliance; it leverages high technology, automation, and data science to detect, predict and prevent risks. Its patent-pending approach uniquely combines advances in data science and technology (AI, machine learning, cloud computing) to transform risk management. The Smart Monitoring Center of Excellence comprises a group of experts that leverage the Smart Monitoring platform to build and manage Key Risk Indicators (KRIs) and Automated Control Tests (ACTs) that monitor risks and detect control failures across AXP, supporting Business Units and Staff Groups, Product Lines and Processes. The Smart Monitoring Center of Excellence team supports the businesses with a mission to enable business growth and objectives while maintaining a strong control environment. We are seeking a Data Scientist to join this exciting opportunity to grow the Smart Monitoring COE multi-fold. As a member of SM COE, the incumbent will be responsible for identifying opportunities to apply new and innovative ways to monitor risks through KRIs/ACTs and execute appropriate strategies in partnership with Business, OE, Compliance, and other stakeholder teams.

Key activities for the role will include: Lead the design and implementation of NLP- and GenAI-based solutions for real-time identification of Key Risk Indicators. Own the architecture and roadmap of the models and tools from ideation to productionizing. Lead a team of data scientists, providing mentorship, performance coaching and technical guidance to build domain depth and deliver excellence. Champion governance and interpretability of models from a validation point of view. Lead R&D efforts to leverage external data (social forums, etc.) to generate insights for operational/compliance risks. Provide rigorous analytics solutions to support critical business functions and support machine learning solution prototyping. Collaborate with model consumers, data engineers, and all related stakeholders to ensure precise implementation of solutions.

Qualifications: Master's/PhD in a quantitative field (Computer Science, Statistics, Mathematics, Operations Research, etc.) with hands-on experience leveraging sophisticated analytical and machine learning techniques.
Strong preference for candidates with 5-6+ years of working experience driving business results. Demonstrated ability to frame business problems as machine learning problems and to leverage external thinking and tools (from academia and/or other industries) to engineer a solvable solution that delivers business insights and an optimal control policy. Creativity to go beyond the status quo to construct and deliver the best solution to the problem, with the ability and comfort to work independently and make key decisions on projects. Deep understanding of machine learning/statistical algorithms such as time series analysis and outlier detection, neural networks/deep learning, boosting and reinforcement learning. Experience with data visualization is a plus. Expertise in an analytical language (Python, R, or the equivalent), and experience with databases (GCP, SQL, or the equivalent). Prior experience working with big data tools and platforms (Hadoop, Spark, or the equivalent). Experience in building NLP and/or GenAI solutions is strongly preferred. Self-motivated with the ability to operate independently and handle multiple workstreams and ad-hoc tasks simultaneously. Team player with strong relationship building, management and influencing skills. Strong verbal and written communication skills.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
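The role leans on time series analysis and outlier detection for KRI monitoring. A toy Python sketch of a rolling z-score breach flag on a daily KRI metric follows; the synthetic data and the 3-sigma threshold are purely illustrative, not the team's actual methodology:

```python
import numpy as np
import pandas as pd

# Synthetic daily KRI: a stable exception rate with a spike in the final week.
rng = np.random.default_rng(0)
values = np.concatenate([rng.normal(0.02, 0.002, 85), rng.normal(0.08, 0.002, 5)])
kri = pd.DataFrame(
    {"exception_rate": values},
    index=pd.date_range("2025-01-01", periods=90, freq="D"),
)

# Rolling z-score against the trailing 30-day window; flag observations beyond 3 sigma.
window = 30
rolling_mean = kri["exception_rate"].rolling(window).mean()
rolling_std = kri["exception_rate"].rolling(window).std()
kri["zscore"] = (kri["exception_rate"] - rolling_mean) / rolling_std
kri["breach"] = kri["zscore"].abs() > 3

print(kri[kri["breach"]])
```

In practice a KRI like this would be one signal among many, with model governance and validation wrapped around it as the posting describes.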

Posted 3 days ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industry and have a high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies based on experience levels. Entry-level Hadoop developers can expect to earn between INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
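Several of the questions above (MapReduce, the shuffle phase, data locality) are easier to discuss with a concrete example in hand. A classic Hadoop Streaming word count, written as two small Python scripts, is a handy reference point; file names and the exact submit command vary by cluster, so treat this as a sketch:

```python
#!/usr/bin/env python3
# mapper.py - emits (word, 1) pairs; the framework's shuffle phase groups
# and sorts these pairs by key before they reach the reducer.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word.lower()}\t1")
```

And the matching reducer:

```python
#!/usr/bin/env python3
# reducer.py - receives key-sorted pairs from the shuffle; sums counts per word.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t", 1)
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)

if current_word is not None:
    print(f"{current_word}\t{count}")
```

Submitted through the Hadoop Streaming JAR, the framework schedules mappers close to the HDFS blocks they read (data locality) and handles partitioning and sorting between the two scripts, which is exactly the shuffle phase several of the questions probe.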

Closing Remark

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies