
6963 Kafka Jobs - Page 22

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

15.0 - 20.0 years

10 - 15 Lacs

Ahmedabad

Work from Office

Source: Naukri

(GenAI, Java, AI/ML, AWS, and SaaS experience is a must.)

15+ years of experience in software engineering, with at least 5 years in a leadership role. Strong technology expertise in Java, microservices architecture, the AWS cloud platform, AI, and the Angular framework. Solid background in building scalable, distributed systems, with expertise in technologies such as Spring Boot (Spring Core, AOP, Transactions, Data, Security), Cassandra, Kubernetes (K8s), Kafka, and Docker. Experience with security best practices and protocols (e.g., SSL/TLS, OAuth). Hands-on experience with architecture and design patterns. Applies the industry's leading guidelines and processes in building enterprise products and components. Proven track record of successfully leading and managing high-performing engineering teams. Excellent communication, interpersonal, and leadership skills; able to mentor and coach others, helping them develop their technical and leadership skills. Strong problem-solving and analytical skills. Experience with Agile development methodologies; able to prioritize effectively and manage multiple tasks simultaneously. Experience in building and scaling software applications, and in recruiting and hiring top-tier engineering talent. Works effectively in a cross-functional team environment.

Skills: Angular, SSL/TLS, Agile development methodologies, Spring Boot, Kafka, AWS cloud platform, Docker, leadership, Java, Kubernetes (K8s), AI, microservices architecture, Cassandra, security best practices, OAuth

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Appnext offers end-to-end discovery solutions covering all the touchpoints users have with their devices. Thanks to Appnext's direct partnerships with top OEM brands and carriers, user engagement begins the moment users personalize their device for the first time and continues throughout their daily mobile journey. Appnext 'Timeline', a patented behavioral analytics technology, is uniquely capable of predicting the apps users are likely to need next. This innovative solution means app developers and marketers can seamlessly engage with users directly on their smartphones through personalized, contextual recommendations. Established in 2012 and now with 12 offices globally, Appnext is the fastest-growing and largest independent mobile discovery platform in emerging markets.

As a Machine Learning Engineer, you will be in charge of building end-to-end machine learning pipelines that operate at huge scale, from data investigation, ingestion, and model training to deployment, monitoring, and continuous optimization. You will ensure that each pipeline delivers measurable impact through experimentation, high-throughput inference, and seamless integration with business-critical systems. The job combines 70% machine learning engineering with 30% algorithm engineering and data science. We're seeking an AdTech pro who thrives in a team environment, possesses exceptional communication and analytical skills, and can navigate the high-pressure demands of delivering results, taking ownership, and leveraging sales opportunities.

Responsibilities:
- Build ML pipelines that train on real big data and perform at massive scale.
- Handle massive responsibility: advertising on lucrative placements (Samsung app store, Xiaomi phones, TrueCaller).
- Train models that make billions of daily predictions and affect hundreds of millions of users.
- Optimize and discover the best algorithmic solution to data problems, from implementing exotic losses to efficient grid search.
- Validate and test everything: every step should be measured and chosen via A/B testing, with observability tools. Own your experiments and your pipelines.
- Be frugal: optimize the business solution at minimal cost.
- Advocate for AI: be the voice of data science and machine learning in answering business needs. Build future products involving agentic AI and data science. Affect millions of users every instant and handle massive scale.

Requirements:
- MSc in CS/EE/STEM with at least 5 years of proven experience (or BSc with equivalent experience) as a Machine Learning Engineer, with a strong focus on MLOps, data analytics, software engineering, and applied data science (must).
- Hyper-communicator: able to work with minimal supervision and maximal transparency, understand requirements rigorously, and frequently give an efficient, honest picture of their work progress and results. Flawless verbal English (must).
- Strong problem-solving skills; drives projects from concept to production, working incrementally and smart. Owns features end to end: theory, implementation, and measurement. Articulate, data-driven communication is also a must.
- Deep understanding of machine learning, including the internals of all important ML models and methodologies.
- Strong real-world experience in Python and at least one other programming language (C#, C++, Java, Go, ...). Writes efficient, clear, and resilient production-grade code. Flawless in SQL.
- Strong background in probability and statistics.
- Experience with ML tools and models, and with conducting A/B tests.
- Experience with cloud providers and services (AWS) and Python frameworks: TensorFlow/PyTorch, NumPy, Pandas, scikit-learn (Airflow, MLflow, Transformers, ONNX, Kafka are a plus).
- AI/LLM assistance: candidates must hold all skills independently, without AI assistance. That said, candidates are expected to use AI effectively, safely, and transparently.

Preferred:
- Deep knowledge of ML aspects including ML theory, optimization, deep-learning tinkering, RL, uncertainty quantification, NLP, classical machine learning, and performance measurement.
- Prompt engineering and agentic workflow experience.
- Web development skills.
- Publications in leading machine learning conferences and/or Medium blogs.
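Since the posting stresses that every pipeline change be measured and chosen via A/B testing, here is a minimal sketch of the kind of significance check involved, a two-proportion z-test on conversion counts (illustrative numbers only, stdlib Python; a production system would use a stats library):

```python
import math

def ab_z_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0, 1.0
    z = (p_b - p_a) / se
    # two-sided p-value from the normal CDF (erf-based, stdlib only)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: B lifts CTR from 2.0% to 2.6% over 10k impressions each.
z, p = ab_z_test(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(f"z={z:.2f}, p={p:.4f}")
```

With these made-up counts the lift is significant at the 1% level, so B would "win" the test; with identical counts the p-value is 1.0 and the change would be rejected.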

Posted 2 days ago

Apply

18.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Enterprise Architect
Grade: VP
Location: Pune / Mumbai / Chennai
Experience: 18+ years
Organization: Intellect Design Arena Ltd. (www.intellectdesign.com)

About the Role: We are looking for a senior Enterprise Architect with strong leadership and deep technical expertise to define and evolve the architecture strategy for iGTB, our award-winning transaction banking platform. The ideal candidate will have extensive experience architecting large-scale, cloud-native enterprise applications within the BFSI domain, and will be responsible for driving innovation, ensuring engineering excellence, and aligning architecture with evolving business needs.

Mandatory Skills:
- Cloud-native architecture
- Microservices-based systems
- PostgreSQL, Apache Kafka, ActiveMQ
- Spring Boot / Spring Cloud, Angular
- Strong exposure to the BFSI domain

Key Responsibilities:
- Architectural Strategy & Governance: Define and maintain enterprise architecture standards and principles across iGTB product suites. Set up governance structures to ensure compliance across product lines.
- Technology Leadership: Stay updated on emerging technologies; assess and recommend adoption to improve scalability, security, and performance.
- Tooling & Automation: Evaluate and implement tools to improve developer productivity, code quality, and application reliability, including automation across testing, deployment, and monitoring.
- Architecture Evangelism: Drive adoption of architecture guidelines and tools across engineering teams through mentorship, training, and collaboration.
- Solution Oversight: Participate in the design of individual modules to ensure technical robustness and adherence to enterprise standards.
- Performance & Security: Oversee performance benchmarking and security assessments. Engage with third-party labs for certification as needed.
- Customer Engagement: Represent architecture in pre-sales, CXO-level interactions, and post-production engagements to demonstrate the product's technical superiority.
- Troubleshooting & Continuous Improvement: Support teams in resolving complex technical issues. Capture learnings and feed them back into architectural best practices.
- Automation Vision: Lead the end-to-end automation charter for iGTB across code quality, CI/CD, testing, monitoring, and release management.

Profile Requirements:
- 18+ years of experience in enterprise and solution architecture roles, preferably within BFSI or fintech
- Proven experience with mission-critical, scalable, and secure systems
- Strong communication and stakeholder management skills, including CXO interactions
- Demonstrated leadership in architecting complex enterprise products and managing teams of architects
- Ability to blend technical depth with business context to drive decisions
- Passion for innovation, engineering excellence, and architectural rigor

Posted 2 days ago

Apply

3.0 - 5.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Job description

Key skills: Java, Spring, Spring Boot, SQL, Microservices, REST API, DOM
Experience: 3 to 5 years
Requirement: Immediate joiner to 15 days (acceptable only if currently serving a notice period)

Roles & Responsibilities:
- Strong, proven hands-on experience in Java development.
- Hands-on experience designing and developing applications on Java EE platforms.
- Skilled in object-oriented analysis and design using common design patterns.
- Profound insight into Java and JEE internals, including class loading, memory management, and transaction management.
- Experienced in Spring Boot, microservices, REST APIs, Java collections, multithreading, algorithms, and design patterns.
- Strong troubleshooting and problem-solving abilities.
- Excellent knowledge of relational databases, SQL, and ORM technologies such as JPA2 and Hibernate.
- Extensive experience with the Spring Framework.

Posted 2 days ago

Apply

3.0 - 5.0 years

14 - 22 Lacs

Gurugram, Bengaluru, Delhi / NCR

Work from Office

Source: Naukri

Position: Software Developer II
Experience: 3-5 years
Interested candidates can apply through: https://forms.gle/XcEzscPLhxQHczpv8

Role & responsibilities:
- Design, develop, and deploy highly scalable, resilient, distributed microservices using Java, Spring Boot, and associated technologies.
- Build and maintain RESTful APIs and integrations between services in a microservices architecture.
- Implement and optimize application performance, ensuring minimal latency and maximum reliability.
- Drive architectural decisions and ensure adherence to coding standards, design principles, and performance guidelines.
- Write clean, testable, maintainable code with comprehensive unit and integration test coverage.
- Implement effective logging, monitoring, and alerting for applications using tools like Prometheus and Grafana.
- Participate in code reviews, architecture reviews, and technical discussions to ensure high-quality outcomes.
- Mentor and guide junior developers, promoting a culture of continuous learning and improvement.
- Work in an agile, collaborative environment and deal well with ambiguity.

Basic qualifications:
- Proficient in Java (3+ years) with hands-on Spring Boot experience.
- Strong expertise in Hibernate or JPA for object-relational mapping (ORM).
- Solid understanding of microservices architecture and design patterns.
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Proficient in database systems such as PostgreSQL, MySQL, or MongoDB, and search engines like Elasticsearch.
- Familiarity with caching solutions like Redis and Memcached.
- Experience with CI/CD pipelines and tools like Jenkins, GitHub Actions, or GitLab CI/CD.
- Working knowledge of cloud platforms like AWS, Azure, or Google Cloud.
- Familiarity with messaging systems like Kafka or ActiveMQ.
- Excellent problem-solving skills.

Posted 2 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

💼 Position: Chief Technology Officer (CTO) 📌 About the Role We are seeking a visionary Chief Technology Officer (CTO) to drive our technology strategy and execution across the organization. You will be at the forefront of innovation, overseeing the development of high-performance AdTech platforms and leading multidisciplinary teams in engineering, data science, and product development. Your leadership will directly influence our growth, scalability, and competitive edge. 🛠️ Key Responsibilities Develop and implement a forward-looking technology roadmap aligned with the company’s strategic objectives. Lead, mentor, and scale cross-functional teams across engineering, product, and data. Oversee the architecture and delivery of AdTech platforms, including RTB, DSP, and SSP systems. Champion innovation in AI/ML, big data processing, and real-time analytics to optimize ad performance. Evaluate and integrate emerging technologies to improve system performance, scalability, and resilience. Uphold best practices in DevOps, data security, and privacy compliance. Collaborate closely with leadership, product, and commercial teams to deliver impactful, tech-driven solutions. Represent the company in technical partnerships, industry forums, and strategic alliances. ✅ What We’re Looking For 10+ years in software engineering, including 5+ years in a senior leadership or CTO capacity. Demonstrated expertise in AdTech ecosystems (e.g., RTB, DSP, SSP, header bidding, OpenRTB). Deep technical knowledge of AI/ML systems, cloud platforms (AWS/GCP), and big data stacks (Kafka, Spark, Hadoop). Proven experience building scalable backend infrastructures and real-time data systems. Strong background in agile development, CI/CD, API design, and system architecture. Excellent leadership, communication, and cross-functional collaboration skills. 🎓 Preferred Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field. MBA or Ph.D. 
is a plus but not required. 💡 Why Join Us? Be part of a company building next-generation AdTech solutions with a global footprint. Lead and inspire a talented, passionate, and collaborative tech team. Enjoy a competitive compensation package including salary and equity options. Influence industry-shaping products and work in a dynamic, innovation-driven culture.

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hār, Himachal Pradesh, India

On-site

Source: LinkedIn

Role: Senior Software Engineer - Full Stack
Location: Gurgaon / Hybrid
Skills: React, Angular, JavaScript, TypeScript, .NET, C#, Kotlin

Senior Software Engineer - Full Stack (Gurugram-based, backend-heavy)
Shift timings: General
Years of experience: 7+
Joining: Immediate joiners

The Opportunity
We are looking for key contributors to our industry-leading front-end websites. You'll be working on products that have evolved tremendously over the past several years to become the global market leader, using the most current technologies and best practices to accomplish our goals. A typical day involves:
- Creating new end-to-end systems
- Building advanced architectures
- Adding new features to high-uptime, frequently published websites and apps
- Developing fast and reliable automated testing systems
- Working in a culture that continually seeks to improve quality, tools, and efficiency

What You'll Need to Succeed (Must):
- 7+ years of experience developing web applications in client-side frameworks like React or Angular
- Strong understanding of object-oriented JavaScript and TypeScript
- Hands-on backend experience in .NET, C#, Kotlin, or Java
- B.S. in Computer Science or a quantitative field; M.S. preferred
- Familiarity with agile methodologies, analytics, A/B testing, feature flags, Continuous Delivery, and trunk-based development
- Excellent HTML/CSS skills: you know how to make data both functional and visually appealing
- Hands-on experience with CI/CD solutions like GitLab
- Passion for new technologies and the best tools available
- Strong communication and coordination skills
- Excellent analytical thinking and problem-solving ability
- Proficiency in English

It's Great If You Have:
- Experience designing physical architecture at scale, including resilient and highly available systems
- Knowledge of NoSQL technologies (Cassandra, ScyllaDB, Elasticsearch, Redis, DynamoDB, etc.) and queueing systems (Kafka, RabbitMQ, SQS, Azure Service Bus, etc.)
- Experience with containers, Docker, and ideally Kubernetes (K8s)
- CI/CD expertise (additional tools beyond GitLab are a plus)
- Proficiency in modern coding and design practices (Clean Code, SOLID principles, TDD)
- Experience working on high-traffic applications with large user bases
- Background in data-driven environments with big-data analysis
- Led teams or greenfield projects solving complex system challenges
- Experience with global projects serving international markets and distributed data centres with localized UIs and data

Posted 2 days ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Source: Naukri

Positions:
1. Golang Developer / Software Engineer
2. Team Lead (Golang)

Role & responsibilities
As a Go/Golang Engineer or Team Lead, you will focus on building and maintaining backend systems, APIs, and microservices using the Go programming language. Key responsibilities include designing and implementing scalable, performant solutions, collaborating with cross-functional teams, and ensuring code quality through testing and reviews.

Skills required:
- Golang
- Kafka
- REST APIs
- Agile environment
- Relational databases (PostgreSQL)
- NoSQL databases (Couchbase, Cassandra)
- Continuous integration tools (Jenkins, GitLab CI)
- Automated build and test frameworks
- Containerization (Docker)
- Container orchestration (Kubernetes)
- Atlassian tools (JIRA, Confluence)

Posted 2 days ago

Apply

6.0 - 10.0 years

16 - 22 Lacs

Hyderabad, Pune, Chennai

Work from Office

Source: Naukri

Full Stack Developer - Java / Angular / Spring Boot / Kotlin / Kafka

We are seeking a talented Full Stack Developer experienced in Java, Kotlin, Spring Boot, Angular, and Apache Kafka to join our dynamic engineering team. The ideal candidate will design, develop, and maintain end-to-end web applications and real-time data processing solutions, leveraging modern frameworks and event-driven architectures.

Key Responsibilities
- Design, develop, and maintain scalable web applications using Java, Kotlin, Spring Boot, and Angular.
- Build and integrate RESTful APIs and microservices to connect frontend and backend components.
- Develop and maintain real-time data pipelines and event-driven features using Apache Kafka.
- Collaborate with cross-functional teams (UI/UX, QA, DevOps, Product) to define, design, and deliver new features.
- Write clean, efficient, well-documented code following industry best practices and coding standards.
- Participate in code reviews, provide constructive feedback, and ensure code quality and consistency.
- Troubleshoot and resolve application issues, bugs, and performance bottlenecks in a timely manner.
- Optimize applications for maximum speed, scalability, and security.
- Stay updated on the latest industry trends, tools, and technologies, and proactively suggest improvements.
- Participate in Agile/Scrum ceremonies and contribute to continuous integration and delivery pipelines.

Required Qualifications
- Experience with cloud-based technologies and deployment (Azure, GCP).
- Familiarity with containerization (Docker, Kubernetes) and microservices architecture.
- Proven experience as a Full Stack Developer with hands-on expertise in Java, Kotlin, Spring Boot, and Angular (Angular 2+).
- Strong understanding of object-oriented and functional programming principles.
- Experience designing and implementing RESTful APIs and integrating them with frontend applications.
- Proficiency in building event-driven and streaming applications using Apache Kafka.
- Experience with SQL/NoSQL database systems and ORM frameworks (e.g., Hibernate, JPA).
- Familiarity with version control systems (Git) and CI/CD pipelines.
- Good understanding of HTML5, CSS3, JavaScript, and TypeScript.
- Experience with Agile development methodologies and working collaboratively in a team environment.
- Excellent problem-solving, analytical, and communication skills.

Posted 2 days ago

Apply

2.0 - 5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Must-Have Skills & Traits

Core Engineering
- Advanced Python skills with a strong grasp of clean, modular, and maintainable code practices
- Experience building production-ready backend services using frameworks like FastAPI, Flask, or Django
- Strong understanding of software architecture, including RESTful API design, modularity, testing, and versioning
- Experience working with databases (SQL/NoSQL), caching layers, and background job queues

AI/ML & GenAI Expertise
- Hands-on experience with machine learning workflows: data preprocessing, model training, evaluation, and deployment
- Practical experience with LLMs and GenAI tools such as OpenAI APIs, Hugging Face, LangChain, or Transformers
- Understanding of how to integrate LLMs into applications through prompt engineering, retrieval-augmented generation (RAG), and vector search
- Comfortable working with unstructured data (text, images) in real-world product environments
- Bonus: experience with model fine-tuning, evaluation metrics, or vector databases like FAISS, Pinecone, or Weaviate

Ownership & Execution
- Demonstrated ability to take full ownership of features or modules, from architecture to delivery
- Able to work independently in ambiguous situations and drive solutions with minimal guidance
- Experience collaborating cross-functionally with designers, PMs, and other engineers to deliver user-focused solutions
- Strong debugging, systems-thinking, and decision-making skills with an eye toward scalability and performance

Nice-to-Have Skills
- Experience in startup or fast-paced product environments
- 2-5 years of relevant experience
- Familiarity with asynchronous programming patterns in Python
- Exposure to event-driven architecture and tools such as Kafka, RabbitMQ, or AWS EventBridge
- Data science exposure: exploratory data analysis (EDA), statistical modeling, or experimentation
- Built or contributed to agentic systems, ML/AI pipelines, or intelligent automation tools
- Understanding of MLOps: model deployment, monitoring, drift detection, and retraining pipelines
- Frontend familiarity (React, Tailwind) for prototyping or contributing to full-stack features
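As an illustration of the retrieval step behind the RAG and vector-search skills listed above, here is a toy sketch: documents and the query are "embedded" (here simply as bag-of-words counts standing in for a real embedding model) and ranked by cosine similarity; the top-k chunks would then be injected into the LLM prompt. The corpus and query are made up for the example.

```python
import math
from collections import Counter

def embed(text):
    """Toy 'embedding': a bag-of-words Counter. A real system would use a
    sentence-embedding model; this stands in for the vector-search step."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    """Rank documents by similarity to the query (the 'R' in RAG);
    the top-k chunks would then be stuffed into the LLM prompt."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

docs = [
    "kafka topics are partitioned logs",
    "angular components render templates",
    "consumer groups balance kafka partitions",
]
print(retrieve("how do kafka consumer groups work", docs, k=1))
```

Swapping the `embed` stub for a real model and the list scan for an index such as FAISS gives the production shape of the same idea.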

Posted 2 days ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Source: Naukri

Role & responsibilities
We are primarily looking for (Java + Kafka) or (Java + Kafka + NoSQL database).

Technical skills required:
- Java: Strong understanding of OOP, multithreading, data structures, and design patterns.
- Kafka: Proficient in designing and building Kafka producers, consumers, and stream processing.
- Couchbase: Experience with data modeling, indexing, N1QL queries, and XDCR.
- Spring Boot, REST APIs: Strong experience in developing microservices and APIs.
- Familiarity with CI/CD tools and containerization (e.g., Docker, Kubernetes).
- Experience in performance tuning and monitoring of distributed systems.
- Version control using Git.
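For context on the Kafka producer skills listed above, one core concept is keyed partitioning: records with the same key always land on the same partition, which is what preserves per-key ordering for consumers. A rough sketch of that mapping (Kafka's default partitioner actually uses a murmur2 hash; CRC32 stands in here, and the keys are made up):

```python
import zlib

def partition_for(key: bytes, num_partitions: int) -> int:
    """Sticky mapping of a record key to a partition, mimicking how a Kafka
    producer routes keyed records (Kafka itself hashes with murmur2;
    CRC32 is a stand-in for illustration)."""
    return zlib.crc32(key) % num_partitions

# All events for one user land on one partition, preserving per-key ordering.
events = [(b"user-42", "login"), (b"user-7", "click"), (b"user-42", "purchase")]
placed = [(partition_for(key, 6), value) for key, value in events]
print(placed)
```

Because the mapping is a pure function of the key, `user-42`'s "login" and "purchase" events are guaranteed to share a partition, while unkeyed records would instead be spread round-robin (or sticky-batched) across partitions.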

Posted 2 days ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Chennai

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- 15 years of full-time education is required.
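The ETL duties described above (pipelines, data-quality checks, migrating data between systems) share a common shape regardless of engine. Sketched here in plain Python with made-up records rather than PySpark, purely to illustrate the extract → validate → transform → load flow:

```python
# Extract: raw rows as they might arrive from a source system (made-up data).
raw = [
    {"id": 1, "city": "chennai", "amount": "120.50"},
    {"id": 2, "city": "",        "amount": "80.00"},   # fails the quality gate
    {"id": 3, "city": "pune",    "amount": "240.10"},
]

def validate(row):
    """Data-quality gate: drop rows with missing fields or unparsable amounts."""
    try:
        return bool(row["city"]) and float(row["amount"]) >= 0
    except (KeyError, ValueError):
        return False

def transform(row):
    """Normalize types and casing before the load step."""
    return {"id": row["id"], "city": row["city"].title(),
            "amount": round(float(row["amount"]), 2)}

clean = [transform(r) for r in raw if validate(r)]
target = {r["id"]: r for r in clean}  # 'load' into a keyed target store
print(target)
```

In PySpark the same pipeline would be a `filter` plus `withColumn` chain over a DataFrame, with the dropped-row count tracked as a data-quality metric.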

Posted 2 days ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Chennai

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide effective solutions, ensuring that the data infrastructure is robust and scalable enough to meet the demands of the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to enhance efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Kafka.
- Strong understanding of data warehousing concepts and architecture.
- Familiarity with cloud platforms such as AWS or Azure.
- Experience with SQL and NoSQL databases for data storage and retrieval.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Chennai.
- 15 years of full-time education is required.

Posted 2 days ago

Apply

15.0 - 20.0 years

17 - 22 Lacs

Pune

Work from Office

Source: Naukri

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand their data needs and provide innovative solutions to enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to ensure efficiency and effectiveness.

Professional & Technical Skills:
- Must-have skills: Proficiency in PySpark.
- Good-to-have skills: Experience with Apache Kafka, Apache Airflow, and cloud platforms such as AWS or Azure.
- Strong understanding of data modeling and database design principles.
- Experience with SQL and NoSQL databases for data storage and retrieval.
- Familiarity with data warehousing concepts and tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 2 days ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

Create solution outlines and macro designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation. Contribute to reusable component/asset/accelerator development to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems
- 10-15 years of experience in data engineering and architecting data platforms
- 5-8 years of experience architecting and implementing data platforms on the Azure cloud platform
- 5-8 years of experience architecting and implementing data platforms on-prem (Hadoop or DW appliance)
- Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks

Preferred technical and professional experience:
- Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
- Experience delivering both business decision-support systems (reporting, analytics) and data science domains/use cases

Posted 2 days ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:

  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive
  • Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
  • Experience developing Python and PySpark programs for data analysis
  • Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine)
  • Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark
  • Experience applying business transformations with Apache Spark DataFrames/RDDs and using HiveContext objects for read/write operations

Preferred technical and professional experience:

  • Understanding of DevOps
  • Experience building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

Posted 2 days ago

Apply


5.0 - 10.0 years

7 - 12 Lacs

Navi Mumbai

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:

  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive
  • Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
  • Experience developing Python and PySpark programs for data analysis
  • Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine)
  • Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark
  • Experience applying business transformations with Apache Spark DataFrames/RDDs and using HiveContext objects for read/write operations

Preferred technical and professional experience:

  • Understanding of DevOps
  • Experience building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

Posted 2 days ago

Apply


4.0 - 5.0 years

6 - 7 Lacs

Gurugram

Work from Office


Responsible for developing processes to proactively monitor and alert on critical metrics, including writing DSL queries and developing trend-analysis graphs (Kibana dashboards) for critical events based on event correlation. Responsible for implementing and managing Logstash pipelines and for index management for optimal efficiency.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • Education qualification: BE/BTech/MCA/MTech; 4-5 years of experience providing solutions using the Elastic Stack
  • Experience administering production systems running the Elastic Stack
  • Experience in end-to-end low-level design, development, and delivery of ELK-based reporting solutions

Preferred technical and professional experience:

  • Ability to understand business requirements and create appropriate indexes and documents
  • Index management for optimal efficiency
  • Proficiency in Elasticsearch queries for data analysis
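The DSL query writing this role calls for typically means composing Elasticsearch Query DSL bodies like the one below: a filtered search plus a per-minute aggregation of the kind that backs a Kibana trend graph. This is a representative sketch only; the index fields (`status`, `@timestamp`) and the aggregation name are hypothetical, not taken from any specific posting.

```python
import json

# Representative Elasticsearch Query DSL body: match ERROR documents from the
# last 15 minutes and bucket them per minute for a trend-analysis dashboard.
# Field names ("status", "@timestamp") are illustrative assumptions.
query_body = {
    "query": {
        "bool": {
            "filter": [
                {"term": {"status": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    "aggs": {
        "errors_per_minute": {
            "date_histogram": {"field": "@timestamp", "fixed_interval": "1m"}
        }
    },
}

# The same dict serializes to the JSON you would paste into Kibana Dev Tools.
print(json.dumps(query_body, indent=2))
```

A body of this shape would be sent against a search endpoint (for example via an HTTP client or an Elasticsearch client library); the `bool.filter` clauses skip scoring, which is the usual choice for dashboard queries.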

Posted 2 days ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Bengaluru

Work from Office


The shift toward the consumption of IT as a service, i.e., the cloud, is one of the most important changes to happen to our industry in decades. At IBM, we are driven to shift our technology to an as-a-service model and to help our clients transform themselves to take full advantage of the cloud. With industry leadership in analytics, security, commerce, and cognitive computing and with unmatched hardware and software design and industrial research capabilities, no other company is as well positioned to address the full opportunity of cloud computing. We're looking for experienced cloud software engineers to join our App Dev services development team in India, Bangalore. We seek individuals who innovate & share our passion for winning in the cloud marketplace. You will be part of a strong, agile, and culture-driven engineering team responsible for enabling IBM Cloud to move quickly. We are running IBM's next generation cloud platform to deliver performance and predictability for our customers' most demanding workloads, at global scale and with leadership efficiency, resiliency and security. It is an exciting time, and as a team we are driven by this incredible opportunity to thrill our clients. 
Responsibilities:

  • Design and develop innovative, company- and industry-impacting services using open source and commercial technologies at scale
  • Design and architect enterprise solutions to complex problems
  • Present technical solutions and designs to the engineering team
  • Adhere to compliance requirements and secure-engineering best practices
  • Collaborate on and review technical designs with architecture and offering management
  • Take ownership of and stay closely involved in projects that vary in size and scope depending on requirements
  • Write and execute unit, functional, and integration test cases

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • Demonstrated analytical skills and data structures/algorithms fundamentals
  • Demonstrated verbal and written communication skills
  • Demonstrated skills in troubleshooting, debugging, maintaining, and improving existing software
  • 4+ years of overall development or engineering experience
  • 2+ years of experience with cloud architecture and developing cloud-native applications
  • 3+ years of experience with Golang or a related programming language
  • 3+ years of experience with React and Node or related technologies
  • 3+ years of experience developing REST APIs using Golang and/or Python
  • Experience with RESTful API design, microservices, and ORM concepts
  • 2+ years of experience with Docker and Kubernetes
  • 2+ years of experience with UI end-to-end tools and experience with accessibility
  • Experience working with a version control system (Git preferred)

Preferred technical and professional experience:

  • Experience with message queues (Kafka and RabbitMQ preferred)
  • Experience with relational databases (Postgres preferred)
  • Experience with Redis caching
  • Experience with HTML, JavaScript, React, and Node
  • Experience developing test automation
  • Experience with CI/CD pipelines

Posted 2 days ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Bengaluru

Work from Office


Analyzes and designs software modules, features, or components of software programs and develops related specifications. Develops, tests, documents, and maintains complex software programs for assigned systems, applications, and/or products. Gathers and evaluates software project requirements and apprises the appropriate individuals. Codes, tests, and debugs new software or enhances existing software. Troubleshoots and resolves, or recommends solutions to, complex software problems. Provides senior-level support and mentoring by evaluating product enhancements for feasibility studies and providing completion-time estimates. Assists management with planning, scheduling, and assigning projects to software development personnel. Ensures product quality by participating in design reviews, code reviews, and other mechanisms. Participates in developing test procedures for system quality and performance. Writes and maintains technical documentation for assigned software projects. Provides initial input on new or modified product/application system features or enhancements for user documentation. Reviews user documentation for technical accuracy and completeness.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • 6+ years of experience with software development tools and methods, related software languages, test design and configuration, and related systems, applications, products, and services
  • 5+ years of experience in enterprise applications using Java, J2EE, and related technologies: Spring, Hibernate, Kafka, SQL, REST APIs, microservices, JSP, etc.
  • Familiarity with cloud computing services such as AWS, Azure, and GCP
  • Hands-on experience with Oracle databases or similar
  • Knowledge of scripting languages like Python and Perl
  • Experience in UI design and development; knowledge of different flavors of JS (React, Angular, Node, etc.)
  • Ability to test and analyze data and provide recommendations, organize tasks and determine priorities, and provide guidance to less experienced personnel

Preferred technical and professional experience:

  • Passion for mobile device technologies
  • Proven debugging and troubleshooting skills (memory, performance, battery usage, network usage optimization, etc.)

Posted 2 days ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Gurugram

Work from Office


A Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments, with expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks. Proficient in Apache Spark, PySpark, Kafka, and Iceberg tables, and able to design and implement scalable, high-performance data processing solutions.

What you'll do as a Data Engineer – Data Platform Services:

Data Ingestion & Processing
  • Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
  • Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
  • Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
  • Implementing Apache Iceberg tables for efficient data storage and retrieval.
  • Managing distributed data processing with Cloudera Data Platform (CDP).
  • Ensuring data lineage, cataloging, and governance for compliance with bank/regulatory policies.

Optimization & Performance Tuning
  • Optimizing Spark and PySpark jobs for performance and scalability.
  • Implementing data partitioning, indexing, and caching to enhance query performance.
  • Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
  • Ensuring secure data access, encryption, and masking using Thales CipherTrust.
  • Implementing role-based access control (RBAC) and data governance policies.
  • Supporting metadata management and data quality initiatives.

Collaboration & Automation
  • Working closely with data scientists, analysts, and DevOps teams to integrate data solutions.
  • Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
  • Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • 4-7 years of experience in big data engineering, data integration, and distributed computing
  • Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP)
  • Proficiency in Python or Scala for data processing
  • Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM)
  • Understanding of data security, encryption, and compliance frameworks

Preferred technical and professional experience:

  • Experience with banking or financial services data platforms
  • Exposure to Denodo for data virtualization and DGraph for graph-based insights
  • Familiarity with cloud data platforms (AWS, Azure, GCP)
  • Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics
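The "data partitioning to enhance query performance" responsibility above refers to the same idea Hive and Iceberg tables use: group records by a partition key so a filtered query scans only the matching bucket. A minimal pure-Python sketch of that pruning effect, with hypothetical function and field names (real pipelines would do this in Spark or Iceberg, not in-memory dicts):

```python
from collections import defaultdict

def partition_by_key(records, key):
    """Group records into partitions by a column value (e.g., event date),
    mimicking Hive/Iceberg-style layout: each distinct key value gets its
    own bucket, analogous to a partition directory."""
    partitions = defaultdict(list)
    for rec in records:
        partitions[rec[key]].append(rec)
    return dict(partitions)

def query(partitions, key_value):
    """Partition pruning: a filter on the partition key only touches the
    matching bucket instead of scanning every record."""
    return partitions.get(key_value, [])

events = [
    {"dt": "2024-01-01", "user": "a"},
    {"dt": "2024-01-01", "user": "b"},
    {"dt": "2024-01-02", "user": "c"},
]
parts = partition_by_key(events, "dt")
print(len(query(parts, "2024-01-01")))  # → 2
```

The design choice this illustrates: picking a partition key that matches common query filters (here, a date column) is what makes partitioning pay off; partitioning on a rarely-filtered column buys nothing.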

Posted 2 days ago

Apply


5.0 - 10.0 years

7 - 12 Lacs

Mumbai

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:

  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:

  • 5+ years of experience in Big Data: Hadoop, Spark (Scala, Python), HBase, Hive
  • Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git
  • Experience developing Python and PySpark programs for data analysis
  • Good working experience using Python to develop custom frameworks for rule generation (similar to a rules engine)
  • Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark
  • Experience applying business transformations with Apache Spark DataFrames/RDDs and using HiveContext objects for read/write operations

Preferred technical and professional experience:

  • Understanding of DevOps
  • Experience building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

Posted 2 days ago

Apply

Exploring Kafka Jobs in India

Kafka, a popular distributed streaming platform, has gained significant traction in the tech industry in recent years. Job opportunities for Kafka professionals in India have been on the rise, with many companies looking to leverage Kafka for real-time data processing and analytics. If you are a job seeker interested in Kafka roles, here is a comprehensive guide to help you navigate the job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Gurgaon

These cities are known for their thriving tech industries and have a high demand for Kafka professionals.

Average Salary Range

The average salary range for Kafka professionals in India varies by experience level. Entry-level positions typically start at around INR 6-8 lakhs per annum, while experienced professionals can earn INR 12-20 lakhs per annum.

Career Path

Career progression in Kafka typically follows a path from Junior Developer to Senior Developer, and then to a Tech Lead role. As you gain more experience and expertise in Kafka, you may also explore roles such as Kafka Architect or Kafka Consultant.

Related Skills

In addition to Kafka expertise, employers often look for professionals with skills in:

  • Apache Spark
  • Apache Flink
  • Hadoop
  • Java/Scala programming
  • Data engineering and data architecture

Interview Questions

  • What is Apache Kafka and how does it differ from other messaging systems? (basic)
  • Explain the role of Zookeeper in Apache Kafka. (medium)
  • How does Kafka guarantee fault tolerance? (medium)
  • What are the key components of a Kafka cluster? (basic)
  • Describe the process of message publishing and consuming in Kafka. (medium)
  • How can you achieve exactly-once message processing in Kafka? (advanced)
  • What is the role of Kafka Connect in Kafka ecosystem? (medium)
  • Explain the concept of partitions in Kafka. (basic)
  • How does Kafka handle consumer offsets? (medium)
  • What is the role of a Kafka Producer API? (basic)
  • How does Kafka ensure high availability and durability of data? (medium)
  • Explain the concept of consumer groups in Kafka. (basic)
  • How can you monitor Kafka performance and throughput? (medium)
  • What is the purpose of Kafka Streams API? (medium)
  • Describe the use cases where Kafka is not a suitable solution. (advanced)
  • How does Kafka handle data retention and cleanup policies? (medium)
  • Explain the Kafka message delivery semantics. (medium)
  • What are the different security features available in Kafka? (medium)
  • How can you optimize Kafka for high throughput and low latency? (advanced)
  • Describe the role of a Kafka Broker in a Kafka cluster. (basic)
  • How does Kafka handle data replication across brokers? (medium)
  • Explain the significance of serialization and deserialization in Kafka. (basic)
  • What are the common challenges faced while working with Kafka? (medium)
  • How can you scale Kafka to handle increased data loads? (advanced)
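Several of the questions above (partitions, key-based routing, per-key ordering) come down to how a producer maps a record key to a partition. The sketch below is a simplified illustration of key-hash partitioning, not Kafka's actual implementation: Kafka's default partitioner uses a murmur2 hash of the serialized key, while this uses Python's `hashlib.md5` purely for demonstration.

```python
import hashlib

def choose_partition(key: bytes, num_partitions: int) -> int:
    """Simplified key-hash partitioner: records with the same key always land
    on the same partition, which is what preserves per-key ordering in Kafka.
    (Kafka's real default uses murmur2; md5 here is only for illustration.)"""
    digest = hashlib.md5(key).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# Same key -> same partition on every call, so a consumer reading that
# partition sees all of this key's records in order.
p1 = choose_partition(b"user-42", 6)
p2 = choose_partition(b"user-42", 6)
print(p1 == p2)  # → True
```

This also shows why increasing the partition count of an existing topic breaks key-to-partition stability: the modulus changes, so existing keys can map to different partitions afterwards.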

Closing Remark

As you explore Kafka job opportunities in India, remember to showcase your expertise in Kafka and related skills during interviews. Prepare thoroughly, demonstrate your knowledge confidently, and stay updated with the latest trends in Kafka to excel in your career as a Kafka professional. Good luck with your job search!
