
73 Apache NiFi Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

9 - 13 Lacs

Pune

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: An Engineering graduate, preferably in Computer Science, with 15 years of full-time education.
Summary: Overall 7+ years of industry experience, including 4 years as a developer using Big Data technologies such as Databricks/Spark and the Hadoop ecosystem. Hands-on experience with Unified Data Analytics on Databricks, the Databricks Workspace user interface, managing Databricks notebooks, Delta Lake with Python, and Delta Lake with Spark SQL. Good understanding of Spark architecture with Databricks and Structured Streaming, and of setting up a cloud platform with Databricks and the Databricks Workspace. Working knowledge of distributed processing, data warehouse concepts, NoSQL, large-scale data processing, RDBMS, testing, data management principles, data mining, and data modelling.
As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using the Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Roles & Responsibilities: Assist with the blueprint and design of the data platform components using the Databricks Unified Data Analytics Platform. Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Develop and maintain data pipelines using the Databricks Unified Data Analytics Platform. Troubleshoot and resolve issues related to data pipelines and data platform components. Ensure data quality and integrity by implementing data validation and testing procedures.
Professional & Technical Skills: Must-have: Experience with the Databricks Unified Data Analytics Platform. Must-have: Strong understanding of data modeling and database design principles. Good-to-have: Experience with Apache Spark and Hadoop. Good-to-have: Experience with cloud-based data platforms such as AWS or Azure. Proficiency in programming languages such as Python or Java. Experience with data integration and ETL tools such as Apache NiFi or Talend.
Additional Information: The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform. The ideal candidate will possess a strong educational background in computer science, software engineering, or a related field, along with a proven track record of delivering impactful data-driven solutions. This position is based at our Chennai, Bengaluru, Hyderabad, and Pune offices.
Qualification: An Engineering graduate, preferably in Computer Science, with 15 years of full-time education.
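
For readers unfamiliar with the Databricks stack this role centres on, here is a minimal, hedged PySpark sketch of working with Delta Lake and Spark SQL; the database, table, and sample data are illustrative and not part of the posting.

```python
# Minimal Delta Lake sketch: write a Delta table, then query it with Spark SQL.
# Assumes a Databricks cluster (or local Spark with the delta-spark package configured);
# database, table, and sample data are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS demo")

# Sample data standing in for an ingested batch.
orders = spark.createDataFrame(
    [(1, "2024-01-01", 120.0), (2, "2024-01-02", 75.5)],
    ["order_id", "order_date", "amount"],
)

# Persist as a managed Delta table ("delta" is what Delta Lake adds over plain Parquet).
orders.write.format("delta").mode("overwrite").saveAsTable("demo.orders")

# Query the same table with Spark SQL.
spark.sql("""
    SELECT order_date, SUM(amount) AS daily_total
    FROM demo.orders
    GROUP BY order_date
""").show()
```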

Posted 4 days ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Pune

Work from Office

Req ID: 324609
We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).
Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.
Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.
Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

Posted 6 days ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Pune

Work from Office

Req ID: 324653
We are currently seeking a Data Engineer to join our team in Pune, Maharashtra (IN-MH), India (IN).
Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.
Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.
Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

Posted 6 days ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Chennai

Work from Office

Req ID: 324631
We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).
Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.
Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.
Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

Posted 6 days ago

Apply

4.0 - 9.0 years

3 - 7 Lacs

Chennai

Work from Office

Req ID: 324632
We are currently seeking a Data Engineer to join our team in Chennai, Tamil Nadu (IN-TN), India (IN).
Key Responsibilities: Design and implement tailored data solutions to meet customer needs and use cases, spanning from streaming to data lakes, analytics, and beyond within a dynamically evolving technical stack. Provide thought leadership by recommending the most appropriate technologies and solutions for a given use case, covering the entire spectrum from the application layer to infrastructure. Demonstrate proficiency in coding skills, utilizing languages such as Python, Java, and Scala to efficiently move solutions into production while prioritizing performance, security, scalability, and robust data integrations. Collaborate seamlessly across diverse technical stacks, including Cloudera, Databricks, Snowflake, and AWS. Develop and deliver detailed presentations to effectively communicate complex technical concepts. Generate comprehensive solution documentation, including sequence diagrams, class hierarchies, logical system views, etc. Adhere to Agile practices throughout the solution development process. Design, build, and deploy databases and data stores to support organizational requirements.
Basic Qualifications: 4+ years of experience supporting Software Engineering, Data Engineering, or Data Analytics projects. 2+ years of experience leading a team supporting data-related projects to develop end-to-end technical solutions. Experience with Informatica, Python, Databricks, and Azure data engineering. Ability to travel at least 25%.
Preferred Skills: Demonstrated production experience in core data platforms such as Snowflake, Databricks, AWS, Azure, GCP, Hadoop, and more. Hands-on knowledge of cloud and distributed data storage, including expertise in HDFS, S3, ADLS, GCS, Kudu, ElasticSearch/Solr, Cassandra, or other NoSQL storage systems. Strong understanding of data integration technologies, encompassing Informatica, Spark, Kafka, eventing/streaming, StreamSets, NiFi, AWS Database Migration Service, Azure Data Factory, and Google Dataproc. Professional written and verbal communication skills to effectively convey complex technical concepts. Undergraduate or graduate degree preferred.

Posted 6 days ago

Apply

15.0 - 20.0 years

6 - 10 Lacs

Mumbai

Work from Office

Location: Mumbai. Experience: 15+ years in data engineering/architecture.
Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based Data Lakehouse for one of India's top public sector banks.
Key Responsibilities: Design end-to-end Lakehouse architecture on Cloudera. Define data ingestion, processing, storage, and consumption layers. Guide data modeling, governance, lineage, and security best practices. Define the migration roadmap from the existing DWH to CDP. Lead reviews with client stakeholders and engineering teams.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Proven experience with Cloudera CDP, Spark, Hive, HDFS, and Iceberg. Deep understanding of Lakehouse patterns and data mesh principles. Familiarity with data governance tools (e.g., Apache Atlas, Collibra). Banking/FSI domain knowledge highly desirable.
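
As a rough illustration of the Iceberg layer such a Cloudera Lakehouse is typically built on, here is a hedged Spark SQL sketch; the catalog name, configuration values, and table schema are assumptions that depend on the actual CDP setup and require the Iceberg Spark runtime on the classpath.

```python
# Sketch: creating and querying an Apache Iceberg table from Spark SQL.
# Assumes Spark is configured with an Iceberg catalog (here called "lake") backed by the
# Hive metastore; catalog, database, and table names are illustrative.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("iceberg-demo")
         # Typical Iceberg-on-Spark settings; actual values depend on the Cloudera environment.
         .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
         .config("spark.sql.catalog.lake.type", "hive")
         .getOrCreate())

spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.bank.transactions (
        txn_id BIGINT,
        account_id BIGINT,
        txn_ts TIMESTAMP,
        amount DECIMAL(18, 2)
    ) USING iceberg
    PARTITIONED BY (days(txn_ts))
""")

spark.sql("SELECT COUNT(*) FROM lake.bank.transactions").show()
```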

Posted 1 week ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Mumbai

Work from Office

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.
Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow).
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Experience: 3-15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, and NiFi is a must. Knowledge of banking or financial datasets is a plus.
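
To make the partitioning and Hive-integration skills above concrete, here is a hedged PySpark sketch of a partitioned batch ETL step; the source path, database, and column names are illustrative assumptions, not details from the posting.

```python
# Sketch of a partitioned batch ETL step with PySpark and Hive.
# Source path, database, and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("batch-etl")
         .enableHiveSupport()
         .getOrCreate())
spark.sql("CREATE DATABASE IF NOT EXISTS curated")

raw = spark.read.parquet("/data/raw/transactions")   # illustrative source path

cleaned = (raw
           .filter(F.col("amount").isNotNull())
           .withColumn("txn_date", F.to_date("txn_ts")))

# Partition by date so downstream Hive/Impala queries can prune partitions.
(cleaned.write
 .mode("overwrite")
 .partitionBy("txn_date")
 .saveAsTable("curated.transactions"))
```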

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Mumbai

Work from Office

Location: Mumbai.
Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.
Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow).
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: Experience: 3–15 years. Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, and NiFi is a must. Knowledge of banking or financial datasets is a plus.

Posted 1 week ago

Apply

3.0 - 5.0 years

3 - 7 Lacs

Gurugram

Work from Office

About the Opportunity
Job Type: Application. Date: 23 June 2025. Title: Expert Engineer. Department: GPS Technology. Location: Gurugram, India. Reports To: Project Manager. Level: Grade 4.
We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together, and supporting each other, all over the world. So, join our [insert name of team/ business area] team and feel like you're part of something bigger.
About your team: The Technology function provides IT services to the Fidelity International business, globally. These include the development and support of business applications that underpin our revenue, operational, compliance, finance, legal, customer service and marketing functions. The broader technology organisation incorporates infrastructure services that the firm relies on to operate on a day-to-day basis, including data centre, networks, proximity services, security, voice, incident management and remediation.
About your role: An Expert Engineer is a seasoned technology expert who is highly skilled in programming, engineering and problem-solving. They can deliver value to the business faster and with superlative quality. Their code and designs meet business, technical, non-functional and operational requirements most of the time without defects and incidents. So, if a relentless focus and drive towards technical and engineering excellence, along with adding value to the business, excites you, this is absolutely the role for you. If technical discussions and whiteboarding with peers excite you, and pair programming and code reviews add fuel to your tank, come, we are looking for you. Understand system requirements; analyse, design, develop and test application systems following the defined standards. The candidate is expected to display professional ethics in his/her approach to work and exhibit a high level of ownership within a demanding working environment.
About you
Essential Skills: You have excellent software designing, programming, engineering, and problem-solving skills. Strong experience working on data ingestion, transformation and distribution using AWS or Snowflake. Exposure to SnowSQL, Snowpipe, role-based access controls, and ETL/ELT tools like NiFi, Matillion, and dbt. Hands-on working knowledge of EC2, Lambda, ECS/EKS, DynamoDB, and VPCs. Familiar with building data pipelines that leverage the full power and best practices of Snowflake, as well as how to integrate common technologies that work with Snowflake (code CI/CD, monitoring, orchestration, data quality). Experience with designing, implementing, and overseeing the integration of data systems and ETL processes through SnapLogic. Designing data ingestion and orchestration pipelines using AWS and Control-M. Establish strategies for data extraction, ingestion, transformation, automation, and consumption. Experience with data lake concepts covering structured, semi-structured and unstructured data. Experience in creating CI/CD processes for Snowflake. Experience with strategies for data testing, data quality, code quality, and code coverage. Ability, willingness and openness to experiment with, evaluate and adopt new technologies. Passion for technology, problem solving and teamwork. Go-getter, with the ability to navigate across roles, functions and business units to collaborate and drive agreements and changes from drawing board to live systems. Lifelong learner who can bring contemporary practices, technologies and ways of working to the organization.
Effective collaborator, adept at using all effective modes of communication and collaboration tools. Experience delivering on data-related non-functional requirements, such as: hands-on experience dealing with large volumes of historical data across markets/geographies; manipulating, processing, and extracting value from large, disconnected datasets; building water-tight data quality gates on investment management data; generic handling of standard business scenarios in case of missing data, holidays, out-of-tolerance errors, etc.
Experience and Qualification: B.E./B.Tech. or M.C.A. in Computer Science from a reputed university. Total 7 to 10 years of relevant experience.
Personal Characteristics: Good interpersonal and communication skills. Strong team player. Ability to work at a strategic and tactical level. Ability to convey strong messages in a polite but firm manner. Self-motivation is essential; should demonstrate commitment to high-quality design and development. Ability to develop and maintain working relationships with several stakeholders. Flexibility and an open attitude to change. Problem-solving skills with the ability to think laterally, and to think with a medium-term and long-term perspective. Ability to learn and quickly get familiar with a complex business and technology environment.
Feel rewarded: For starters, we'll offer you a comprehensive benefits package. We'll value your wellbeing and support your development. And we'll be as flexible as we can about where and when you work, finding a balance that works for all of us. It's all part of our commitment to making you feel motivated by the work you do and happy to be part of our team.
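
For context on the Snowflake side of this role, here is a minimal, hedged sketch of querying Snowflake from Python using the official snowflake-connector-python package; all connection values are placeholders, not details from the posting.

```python
# Minimal sketch of querying Snowflake from Python.
# Uses the snowflake-connector-python package; all connection values are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="my_user",            # placeholder
    password="***",            # in practice, use a secrets manager or key-pair auth
    warehouse="ANALYTICS_WH",  # placeholder
    database="DEMO_DB",        # placeholder
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION()")
    print(cur.fetchone()[0])
finally:
    conn.close()
```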

Posted 1 week ago

Apply

10.0 - 15.0 years

25 - 40 Lacs

Mumbai

Work from Office

Overview of the Company: Jio Platforms Ltd. is a revolutionary Indian multinational tech company, often referred to as India's biggest startup, headquartered in Mumbai. Launched in 2019, it's the powerhouse behind Jio, India's largest mobile network with over 400 million users. But Jio Platforms is more than just telecom. It's a comprehensive digital ecosystem, developing cutting-edge solutions across media, entertainment, and enterprise services through popular brands like JioMart, JioFiber, and JioSaavn. Join us at Jio Platforms and be part of a fast-paced, dynamic environment at the forefront of India's digital transformation. Collaborate with brilliant minds to develop next-gen solutions that empower millions and revolutionize industries.
Team Overview: The Data Platforms Team is the launchpad for a data-driven future, empowering the Reliance Group of Companies. We're a passionate group of experts architecting an enterprise-scale data mesh to unlock the power of big data, generative AI, and ML modelling across various domains. We don't just manage data; we transform it into intelligent actions that fuel strategic decision-making. Imagine crafting a platform that automates data flow, fuels intelligent insights, and empowers the organization: that's what we do. Join our collaborative and innovative team, and be a part of shaping the future of data for India's biggest digital revolution!
About the role
Title: Lead Data Engineer. Location: Mumbai.
Responsibilities:
End-to-End Data Pipeline Development: Design, build, optimize, and maintain robust data pipelines across cloud, on-premises, or hybrid environments, ensuring performance, scalability, and seamless data flow.
Reusable Components & Frameworks: Develop reusable data pipeline components and contribute to the team's data pipeline framework evolution.
Data Architecture & Solutions: Contribute to data architecture design, applying data modelling, storage, and retrieval expertise.
Data Governance & Automation: Champion data integrity, security, and efficiency through metadata management, automation, and data governance best practices.
Collaborative Problem Solving: Partner with stakeholders, data teams, and engineers to define requirements, troubleshoot, optimize, and deliver data-driven insights.
Mentorship & Knowledge Transfer: Guide and mentor junior data engineers, fostering knowledge sharing and professional growth.
Qualification Details:
Education: Bachelor's degree or higher in Computer Science, Data Science, Engineering, or a related technical field.
Core Programming: Excellent command of a primary data engineering language (Scala, Python, or Java) with a strong foundation in OOP and functional programming concepts.
Big Data Technologies: Hands-on experience with data processing frameworks (e.g., Hadoop, Spark, Apache Hive, NiFi, Ozone, Kudu), ideally including streaming technologies (Kafka, Spark Streaming, Flink, etc.).
Database Expertise: Excellent querying skills (SQL) and strong understanding of relational databases (e.g., MySQL, PostgreSQL). Experience with NoSQL databases (e.g., MongoDB, Cassandra) is a plus.
End-to-End Pipelines: Demonstrated experience in implementing, optimizing, and maintaining complete data pipelines, integrating varied sources and sinks, including streaming real-time data.
Cloud Expertise: Knowledge of cloud technologies such as Azure HDInsight, Synapse, and Event Hubs, and GCP Dataproc, Dataflow, and BigQuery.
CI/CD Expertise: Experience with CI/CD methodologies and tools, including strong Linux and shell scripting skills for automation.
Desired Skills & Attributes:
Problem-Solving & Troubleshooting: Proven ability to analyze and solve complex data problems and troubleshoot data pipeline issues effectively.
Communication & Collaboration: Excellent communication skills, both written and verbal, with the ability to collaborate across teams (data scientists, engineers, stakeholders).
Continuous Learning & Adaptability: A demonstrated passion for staying up to date with emerging data technologies and a willingness to adapt to new tools.
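
As an illustration of the streaming pipelines (Kafka plus Spark Streaming) this role describes, here is a hedged Structured Streaming sketch; the broker address, topic name, and windowing choice are assumptions, and the spark-sql-kafka package must be on the classpath.

```python
# Sketch: a streaming pipeline reading from Kafka with Spark Structured Streaming.
# Requires the spark-sql-kafka package on the classpath; brokers and topic are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")  # placeholder brokers
          .option("subscribe", "clickstream")                 # placeholder topic
          .load())

# Kafka values arrive as bytes; cast to string and count events per one-minute window.
counts = (events
          .selectExpr("CAST(value AS STRING) AS value", "timestamp")
          .groupBy(F.window("timestamp", "1 minute"))
          .count())

query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```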

Posted 1 week ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Site Reliability Engineer
Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.
Key Responsibilities: Ensure platform uptime and application health as per SLOs/KPIs. Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc. Debug and resolve complex production issues, performing root cause analysis. Automate routine tasks and implement self-healing systems. Design and maintain dashboards, alerts, and operational playbooks. Participate in incident management, problem resolution, and RCA documentation. Own and update SOPs for repeatable processes. Collaborate with L3 and product teams for deeper issue resolution. Support and guide the L1 operations team. Conduct periodic system maintenance and performance tuning. Respond to user data requests and ensure timely resolution. Address and mitigate security vulnerabilities and compliance issues.
Technical Skillset: Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, and Ranger. Strong Linux fundamentals and scripting (Python, Shell). Experience with Apache NiFi, Airflow, YARN, and ZooKeeper. Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki. Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines. Strong SQL skills (Oracle/Exadata preferred). Familiarity with DataHub, Data Mesh, and security best practices is a plus. Strong problem-solving and debugging mindset. Ability to work under pressure in a fast-paced environment. Excellent communication and collaboration skills. Ownership, customer orientation, and a bias for action.
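
To illustrate the monitoring automation this SRE role calls for, here is a hedged Python sketch that polls Prometheus for scrape targets that are down; the endpoint, query, and alerting behaviour are illustrative assumptions rather than details from the posting.

```python
# Sketch of lightweight SRE automation: poll a Prometheus instance for unhealthy
# scrape targets and print an alert line. Endpoint and query are illustrative.
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # placeholder


def unhealthy_targets():
    # 'up == 0' is the standard Prometheus expression for scrape targets that are down.
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": "up == 0"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["data"]["result"]


if __name__ == "__main__":
    for target in unhealthy_targets():
        labels = target["metric"]
        print(f"ALERT: target down: job={labels.get('job')} instance={labels.get('instance')}")
```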

Posted 2 weeks ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Chennai

Work from Office

Job Summary: We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks. Experience: 6-12 years.
Key Responsibilities: Develop and test Big Data pipelines using Spark, Hive, Hadoop, and Kafka. Write and optimize PySpark/Scala code for data processing. Design test cases for data validation, quality, and integrity. Automate testing using Python/Java and tools like Apache NiFi, Airflow, or dbt. Collaborate with data engineers, analysts, and QA teams.
Key Skills: Strong hands-on experience in Big Data tools: Spark, Hive, HDFS, Kafka. Proficient in PySpark, Scala, or Java. Experience in data testing, ETL validation, and data quality checks. Familiarity with SQL, NoSQL, and data lakes. Knowledge of CI/CD, Git, and automation frameworks.
We are also looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity. Experience: 6 to 12 years.
Key Responsibilities: Develop complex SQL queries, stored procedures, and functions. Optimize query performance and database indexing. Manage backups, replication, and security. Monitor and tune database performance. Support schema design and data migrations.
Key Skills: Strong hands-on experience with PostgreSQL. Proficient in SQL and PL/pgSQL scripting. Experience in performance tuning, query optimization, and indexing. Familiarity with logical replication, partitioning, and extensions. Exposure to tools like pgAdmin, psql, or PgBouncer.
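
As a concrete example of the data-validation work described above, here is a hedged PySpark sketch of simple row-count and null checks between a staging and a curated table; the table and column names are assumptions.

```python
# Sketch of simple data-validation checks between a source and a target Hive table.
# Table and column names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()

source = spark.table("staging.orders")   # placeholder table
target = spark.table("curated.orders")   # placeholder table

# 1. Row counts should match after the load.
assert source.count() == target.count(), "row count mismatch between staging and curated"

# 2. Key columns in the target must not contain nulls.
null_keys = target.filter(F.col("order_id").isNull()).count()
assert null_keys == 0, f"{null_keys} rows in curated.orders have a NULL order_id"

print("data quality checks passed")
```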

Posted 2 weeks ago

Apply

2.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

We need a proficient Data Engineer with experience monitoring and fixing jobs for data pipelines written in Azure Data Factory and Python. Design and implement data models for Snowflake to support analytical solutions. Develop ETL processes to integrate data from various sources into Snowflake. Optimize data storage and query performance in Snowflake. Collaborate with cross-functional teams to gather requirements and deliver scalable data solutions. Monitor and maintain Snowflake environments, ensuring optimal performance and data security. Create documentation for data architecture, processes, and best practices. Provide support and training for teams utilizing Snowflake services.
Roles and Responsibilities: Strong experience with Snowflake architecture and data warehousing concepts. Proficiency in SQL for data querying and manipulation. Familiarity with ETL tools such as Talend, Informatica, or Apache NiFi. Experience with data modeling techniques and tools. Knowledge of cloud platforms, specifically AWS, Azure, or Google Cloud. Understanding of data governance and compliance requirements. Excellent analytical and problem-solving skills. Strong communication and collaboration skills to work effectively within a team. Experience with Python or Java for data pipeline development is a plus.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

20 - 27 Lacs

Gurugram

Work from Office

The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision. The person should be self-motivated, with a passion for problem solving and continuous learning.
Role and responsibilities: Strong technical, analytical, and problem-solving skills. Strong organizational skills, with the ability to work autonomously as well as in a team-based environment. Data pipeline framework development.
Technical skills requirements: The candidate must demonstrate proficiency in CDH (on-premise) for data processing and extraction. Ability to own and deliver on large, multi-faceted projects. Fluency in complex SQL and experience with RDBMSs. Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive, and NoSQL DBs. Experience designing and building big data pipelines. Experience working on large-scale, distributed systems. Experience working on Databricks would be an added advantage. Strong hands-on experience with programming languages such as PySpark, Scala with Spark, and Python. Exposure to various ETL and Business Intelligence tools. Experience in shell scripting to automate pipeline execution. Solid grounding in Agile methodologies. Experience with git and other source control systems. Strong communication and presentation skills.
Nice-to-have skills: Certification in Hadoop/Big Data (Hortonworks/Cloudera). Databricks Spark certification. Unix or shell scripting. Strong delivery background across high-value, business-facing technical projects in major organizations. Experience managing client delivery teams, ideally coming from a Data Engineering/Data Science environment.
Qualifications: B.Tech./M.Tech./MS or BCA/MCA degree from a reputed university.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: O9 Solutions
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency.
Roles & Responsibilities: Play the integration consultant role on o9 implementation projects. Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. Participate in technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. End-to-end integration implementation from partner systems to the o9 platform.
Technical Skills: Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL and ETL tools. Proficiency in databases (SQL Server, Oracle, etc.). Knowledge of DDL, DML, and stored procedures. Good to have experience in Airflow, Delta Lake, NiFi, and Kafka. At least one end-to-end integration implementation experience is preferred. Any API-based integration experience will be an added advantage.
Professional Skills: Proven ability to work creatively and analytically in a problem-solving environment. Proven ability to build, manage and foster a team-oriented environment. Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. Strong collaborator, team player, and individual contributor.
Educational Qualification: BE/BTech/MCA/Bachelor's degree/master's degree in computer science and related fields of work are preferred.
Additional Information: The candidate should have a minimum of 7.5 years of experience in O9 Solutions. This position is based in Pune. 15 years of full-time education is required. Open to travel, short/long term.
Qualification: 15 years full-time education.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Pune

Work from Office

Job Title: Big Data Tester
About Us: Capco, a Wipro company, is a global technology and management consulting firm. Awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO: You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry; projects that will transform the financial services industry.
MAKE AN IMPACT: Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK: Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT: With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION: We believe that diversity of people and perspective gives us a competitive advantage.
Job Title: Big Data Engineer
Role: Support, develop, and maintain automated test frameworks, tools, and test cases for Data Engineering and Data Warehouse applications. Collaborate with cross-functional teams, including software developers, data engineers, and data analysts, to ensure comprehensive testing coverage and adherence to quality standards. Conduct thorough testing of data pipelines, ETL processes, and data transformations using Big Data technologies. Apply your knowledge of Data Warehouse/Data Lake methodologies and best practices to validate the accuracy, completeness, and performance of our data storage and retrieval systems. Identify, document, and track software defects, working closely with the development team to ensure timely resolution. Participate in code reviews, design discussions, and quality assurance meetings to provide valuable insights and contribute to the overall improvement of our software products.
Base Skill Requirements (must-have, technical): Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3-5 years of experience in software testing and development, with a focus on data-intensive applications. Proven experience in testing data pipelines and ETL processes: test planning, test environment planning, end-to-end testing, performance testing. Solid programming skills in Python, with proven automation effort to bring efficiency to the test cycles. Solid understanding of data models and SQL. Must have experience with ETL (Extract, Transform, Load) processes and tools (scheduling and orchestration tools, ETL design understanding). Good understanding of Big Data technologies like Spark, Hive, and Impala. Understanding of Data Warehouse methodologies, applications, and processes. Experience working in an Agile/Scrum environment, with a solid understanding of user stories, acceptance criteria, and sprint cycles.
Optional (technical): Experience with scripting languages like Bash or Shell. Experience working with large-scale datasets and distributed data processing frameworks (e.g., Hadoop, Spark). Familiarity with data integration tools like Apache NiFi is a plus. Excellent problem-solving and debugging skills, with a keen eye for detail. Strong communication and collaboration skills to work effectively in a team-oriented environment. Eagerness to learn and contribute to a growing team.
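
To show what automated testing of a transformation step can look like in practice, here is a hedged pytest-style sketch; the function under test and its deduplication rule are illustrative, not taken from the posting.

```python
# Hedged sketch: a pytest-style unit test for a small PySpark transformation used in an ETL step.
# The function under test and its rule are illustrative assumptions.
import pytest
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window


@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("etl-tests").getOrCreate()


def dedupe_latest(df):
    """Keep only the most recent record per account_id (example transformation)."""
    w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")


def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [(1, "2024-01-01", 10.0), (1, "2024-02-01", 20.0), (2, "2024-01-15", 5.0)],
        ["account_id", "updated_at", "balance"],
    )
    result = {r["account_id"]: r["balance"] for r in dedupe_latest(df).collect()}
    assert result == {1: 20.0, 2: 5.0}
```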

Posted 2 weeks ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Pune

Work from Office

Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: O9 Solutions
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education.
Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by delivering high-quality applications that enhance operational efficiency.
Roles & Responsibilities: Play the integration consultant role on o9 implementation projects. Understand the o9 platform's data model (table structures, linkages, pipelines, optimal designs) for designing various planning use cases. Review and analyze the data provided by the customer along with its technical/functional intent and inter-dependencies. Participate in technical design and data requirements gathering, making recommendations in case of inaccurate or missing data. Work on designing and creating batch schedules based on frequency and configuration settings for daily/weekly/quarterly/yearly batches. End-to-end integration implementation from partner systems to the o9 platform.
Technical Skills: Must have a minimum of 3 to 7 years of experience with SQL, PySpark, Python, Spark SQL and ETL tools. Proficiency in databases (SQL Server, Oracle, etc.). Knowledge of DDL, DML, and stored procedures. Good to have experience in Airflow, Delta Lake, NiFi, and Kafka. At least one end-to-end integration implementation experience is preferred. Any API-based integration experience will be an added advantage.
Professional Skills: Proven ability to work creatively and analytically in a problem-solving environment. Proven ability to build, manage and foster a team-oriented environment. Excellent problem-solving skills with excellent written/oral communication and interpersonal skills. Strong collaborator, team player, and individual contributor.
Educational Qualification: BE/BTech/MCA/Bachelor's degree/master's degree in computer science and related fields of work are preferred.
Additional Information: The candidate should have a minimum of 7.5 years of experience in O9 Solutions. This position is based in Pune. 15 years of full-time education is required. Open to travel, short/long term.
Qualification: 15 years full-time education.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Pune

Hybrid

1. Experienced with asynchronous programming, multithreading, implementing APIs, and microservices, including Spring Boot.
2. Proficiency with SQL.
Required Candidate Profile: 5+ years of professional experience in Java 8 or higher. Strong expertise in Spring Boot. Solid understanding of microservices architecture, Kafka, messaging/streaming stack, JUnit, and code optimization.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

We are seeking a highly skilled ETL Architect Powered by AI (Apache NiFi/Kafka) to join our team. The ideal candidate will have expertise in managing, automating, and orchestrating data flows using Apache NiFi. In this role, you will design, implement, and maintain scalable data pipelines that handle real-time and batch data processing. The role also involves integrating NiFi with various data sources, performing data transformation tasks, and ensuring data quality and governance.
Key Responsibilities:
Real-Time Data Integration (Apache NiFi & Kafka): Design, develop, and implement real-time data pipelines leveraging Apache NiFi for seamless data flow. Build and maintain Kafka producers and consumers for effective streaming data management across systems. Ensure the scalability, reliability, and performance of data streaming platforms using NiFi and Kafka. Monitor, troubleshoot, and optimize data flow within Apache NiFi and Kafka clusters. Manage schema evolution and support data serialization formats such as Avro, JSON, and Protobuf. Set up, configure, and optimize Kafka topics, partitions, and brokers for high availability and fault tolerance. Implement backpressure handling, prioritization, and flow control strategies in NiFi data flows. Integrate NiFi flows with external services (e.g., REST APIs, HDFS, RDBMS) for efficient data movement. Establish and maintain secure data transmission, access controls, and encryption mechanisms in NiFi and Kafka environments. Develop and maintain batch ETL pipelines using tools like Informatica, Talend, and custom Python/SQL scripts. Continuously optimize and refactor existing ETL workflows to improve performance, scalability, and fault tolerance. Implement job scheduling, error handling, and detailed logging mechanisms for data pipelines. Conduct data quality assessments and design frameworks to ensure high-quality data integration. Design and document both high-level and low-level data architectures for real-time and batch processing. Lead technical evaluations of emerging tools and platforms for potential adoption into existing systems.
Qualifications we seek in you:
Minimum Qualifications/Skills: Bachelor's degree in Computer Science, Information Technology, or a related field. Significant experience in IT with a focus on data architecture and engineering. Proven experience in technical leadership, driving data integration projects and initiatives. Certifications in relevant technologies (e.g., AWS Certified Solutions Architect, Microsoft Certified: Azure Data Engineer) are a plus. Strong analytical skills and the ability to translate business requirements into effective technical solutions. Proficiency in communicating complex technical concepts to non-technical stakeholders.
Preferred Qualifications/Skills: Extensive hands-on experience as a Data Architect. In-depth experience with Apache NiFi, Apache Kafka, and related ecosystem components (e.g., Kafka Streams, Schema Registry). Ability to develop and optimize NiFi processors to handle various data sources and formats. Proficient in creating reusable NiFi templates for common data flows and transformations. Familiarity with integrating NiFi and Kafka with big data technologies like Hadoop, Spark, and Databricks. At least two end-to-end implementations of data integration solutions in a real-world environment. Experience in metadata management frameworks and scalable data ingestion processes. Solid understanding of data platform design patterns and best practices for integrating real-time data systems. Knowledge of ETL processes, data integration tools, and data modeling techniques. Demonstrated experience in Master Data Management (MDM) and data privacy standards. Experience with modern data platforms such as Snowflake and Databricks and with big data tools. Proven ability to troubleshoot complex data issues and implement effective solutions. Strong project management skills with the ability to lead data initiatives from concept to delivery. Familiarity with AI/ML frameworks and their integration with data platforms is a plus. Excellent communication and interpersonal skills, with the ability to collaborate effectively across cross-functional teams.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values diversity and inclusion, respect and integrity, customer focus, and innovation. For more information, visit www.genpact.com. Follow us on Twitter, Facebook, LinkedIn, and YouTube. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
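
As a small illustration of the Kafka producer/consumer work described above, here is a hedged Python sketch using the kafka-python package with JSON serialization; the broker address and topic name are placeholders, not details from the posting.

```python
# Sketch: a JSON-serializing Kafka producer and consumer.
# Uses the kafka-python package; broker address and topic name are placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="broker1:9092",                      # placeholder
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("customer-events", {"customer_id": 42, "event": "signup"})
producer.flush()

consumer = KafkaConsumer(
    "customer-events",
    bootstrap_servers="broker1:9092",                      # placeholder
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,   # stop iterating if no new messages arrive
)
for message in consumer:
    print(message.value)
```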

Posted 3 weeks ago

Apply

3.0 - 5.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Location: India, Bangalore. Time type: Full time. Posted: 12 days ago. Job requisition ID: JR0273871.
Job Details
About the Role: Join our innovative and inclusive Logic Technology Development team as a TD AI and Analytics Engineer, where diverse talents come together to push the boundaries of semiconductor technology. You will have the opportunity to work in one of the world's most advanced cleanroom facilities, designing, executing, and analyzing experiments to meet engineering specifications for our cutting-edge processes. This role offers a unique chance to learn and operate a manufacturing line, integrating the many individual steps necessary for the production of complex microprocessors.
What We Offer: We are dedicated to creating a collaborative, supportive, and exciting environment where diverse perspectives drive exceptional results. At Intel, you will have the opportunity to transform technology and contribute to a better future by delivering innovative products. Learn more about Intel Corporation's Core Values here.
Benefits: We offer a comprehensive benefits package designed to support a healthy and fulfilling life. This includes excellent medical plans, wellness programs, recreational activities, generous time off, discounts on various products and services, and many more creative rewards that make Intel a great place to work. Discover more about our amazing benefits here.
About the Logic Technology Development (LTD) TD Intel Foundry AI and Analytics Innovation Organization: Intel Foundry TD's AI and Analytics Innovation office is committed to providing a competitive advantage through end-to-end AI and analytics solutions, driving Intel's ambitious IDM 2.0 goals. Our team is seeking an engineer with a background in Data Engineering, Software Engineering, or Data Science to support and develop modern AI/ML solutions. Explore what life is like inside Intel here.
Key Responsibilities: As an Engineer in the TD AI office, you will collaborate with Intel's factory automation organization and Foundry TD's functional areas to support and develop modern AI/ML solutions. Your primary responsibilities will include: developing software and data engineering solutions for in-house AI/ML products; enhancing existing ML platforms and devising MLOps capabilities; understanding existing data structures in factory automation systems and building data pipelines connecting different systems; testing and supporting full-stack big data engineering systems; developing data ingestion pipelines, data access APIs, and services; monitoring and maintaining deployment environments and platforms; creating technical documentation; and collaborating with peers and engineering teams to streamline solution development, validation, and deployment. Managing factory big data interaction with cloud environments, Oracle, SQL, Python, software architecture, and MLOps. Interfacing with process and integration functional-area analytics teams and customers using advanced automated process control systems.
Qualifications
Minimum Qualifications: Master's or PhD degree in Computer Science, Computer Engineering, or a related Science/Engineering discipline. 3+ years of experience in data engineering/software development and knowledge of Spark, NiFi, Hadoop, HBase, S3 object storage, Kubernetes, REST APIs, and services. Intermediate to advanced English proficiency (both verbal and written).
Preferred Qualifications: 2+ years in data analytics and machine learning (Python, R, JMP, etc.) and relational databases (SQL). 2+ years in a technical leadership role. 3+ months of working knowledge with CI/CD (Continuous Integration/Continuous Deployment) and proficiency with GitHub and GitHub Actions. Prior interaction with factory automation systems.
Application Process: By applying to this posting, your resume and profile will become visible to Intel recruiters, allowing them to consider you for current and future job openings aligned with the skills and positions mentioned above. We are constantly working towards a more connected and intelligent future, and we need your help. Change tomorrow. Start today.
Job Type: Experienced Hire. Shift: Shift 1 (India). Primary Location: India, Bangalore. Additional Locations: none listed.
Business group: As the world's largest chip manufacturer, Intel strives to make every facet of semiconductor manufacturing state-of-the-art, from semiconductor process development and manufacturing, through yield improvement to packaging, final test and optimization, and world-class supply chain and facilities support. Employees in the Technology Development and Manufacturing Group are part of a worldwide network of design, development, manufacturing, and assembly/test facilities, all focused on utilizing the power of Moore's Law to bring smart, connected devices to every person on Earth.
Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.
Position of Trust: N/A
Work Model for this Role: This role will be eligible for our hybrid work model, which allows employees to split their time between working on-site at their assigned Intel site and off-site.

Posted 3 weeks ago

Apply

1.0 - 4.0 years

1 - 5 Lacs

Mumbai

Work from Office

Location: Mumbai.
Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.
Key Responsibilities: Build scalable batch and real-time ETL pipelines using Spark and Hive. Integrate structured and unstructured data sources. Perform performance tuning and code optimization. Support orchestration and job scheduling (NiFi, Airflow).
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise, Skills Required: Proficiency in PySpark/Scala with Hive/Impala. Experience with data partitioning, bucketing, and optimization. Familiarity with Kafka, Iceberg, and NiFi is a must. Knowledge of banking or financial datasets is a plus.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

9 - 14 Lacs

Mumbai

Work from Office

Role Overview: We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.
Key Responsibilities:
o Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources.
o Develop and maintain business glossaries, data classifications, and metadata models.
o Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack.
o Map technical metadata from Hive/Impala to business metadata defined in Talend.
o Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas.
o Provide impact analysis for schema changes, data transformations, and governance rule enforcement.
o Support definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control).
o Enable role-based metadata access, tagging, and data sensitivity classification.
o Work with data owners, stewards, and architects to ensure data assets are well documented, governed, and discoverable.
o Provide training to users on leveraging the catalog for search, understanding, and reuse.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise: 6–12 years in data governance or metadata management, with at least 2–3 years in Talend Data Catalog. Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL. Business glossary, metadata enrichment, lineage tracking, stewardship workflows. Hands-on experience in Talend-Atlas integration, either through REST APIs, Kafka hooks, or metadata bridges.
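
To ground the Talend-Atlas integration work in something concrete, here is a hedged sketch of pulling Hive table metadata from Apache Atlas over its REST API, the kind of building block a metadata synchronization relies on; the host, credentials, and query parameters are placeholders and should be verified against the Atlas version in use.

```python
# Hedged sketch: querying Hive table metadata from Apache Atlas over REST.
# Host, credentials, and parameters are placeholders; verify against your Atlas version.
import requests

ATLAS_URL = "http://atlas.internal:21000"   # placeholder
AUTH = ("admin", "***")                     # placeholder; use proper auth in practice

resp = requests.get(
    f"{ATLAS_URL}/api/atlas/v2/search/basic",
    params={"typeName": "hive_table", "query": "transactions", "limit": 10},
    auth=AUTH,
    timeout=30,
)
resp.raise_for_status()

for entity in resp.json().get("entities", []):
    print(entity["typeName"], entity["attributes"].get("qualifiedName"))
```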

Posted 3 weeks ago

Apply

3.0 - 7.0 years

6 - 10 Lacs

Mumbai

Work from Office

Role Overview: Looking for a Kafka SME to design and support real-time data ingestion pipelines using Kafka within a Cloudera-based Lakehouse architecture.
Key Responsibilities: Design Kafka topics, partitions, and schema registry. Implement producer-consumer apps using Spark Structured Streaming. Set up Kafka Connect, monitoring, and alerts. Ensure secure, scalable message delivery.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise, Skills Required: Deep understanding of Kafka internals and ecosystem. Integration with Cloudera and NiFi. Schema evolution and serialization (Avro, Parquet). Performance tuning and fault tolerance.
Preferred technical and professional experience: Good communication skills. India market experience is preferred.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

8 - 12 Lacs

Gurugram, Delhi

Work from Office

Role Description: This is a full-time hybrid role for an Apache NiFi Developer based in Gurugram, with some work-from-home options. The Apache NiFi Developer will be responsible for designing, developing, and maintaining data workflows and pipelines. The role includes programming, implementing backend web development solutions, using object-oriented programming (OOP) principles, and collaborating with team members to enhance software solutions.
Qualifications: Knowledge of Apache NiFi and experience in programming. Skills in back-end web development and software development. Data pipeline design and development. Strong understanding of Apache NiFi. Background in Computer Science. Excellent problem-solving and analytical skills. Ability to work in a hybrid environment. Experience in AI and Blockchain is a plus. Bachelor's degree in Computer Science or a related field.
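
For a flavour of programmatic NiFi work, here is a hedged sketch that starts all components in a NiFi process group through the NiFi REST API; the host, process-group ID, and TLS handling are illustrative placeholders, and secured clusters additionally require an access token.

```python
# Hedged sketch: starting all components in a NiFi process group via the NiFi REST API.
# Host and process-group ID are placeholders; secured clusters also need auth tokens.
import requests

NIFI_API = "https://nifi.internal:8443/nifi-api"            # placeholder
PROCESS_GROUP_ID = "0a1b2c3d-0123-4567-89ab-cdef01234567"   # placeholder

resp = requests.put(
    f"{NIFI_API}/flow/process-groups/{PROCESS_GROUP_ID}",
    json={"id": PROCESS_GROUP_ID, "state": "RUNNING"},
    verify=False,   # only for a throwaway sketch; use proper TLS verification in practice
    timeout=30,
)
resp.raise_for_status()
print("process group scheduled to RUNNING")
```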

Posted 3 weeks ago

Apply
Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies