4.0 - 6.0 years
3 - 7 Lacs
Kolkata, Mumbai, Bengaluru
Work from Office
Key Responsibilities:
- Design, develop, and maintain robust applications using Java and Spring Boot.
- Implement microservices architecture to enhance scalability and performance.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Ensure code quality through unit testing and code reviews.
- Participate in agile development processes and contribute to sprint planning and retrospectives.
- Troubleshoot and debug applications to optimize performance.
- Document technical specifications and user guides.
Mandatory Skills:
- Java: Strong proficiency in Java programming.
- Spring Boot: Hands-on experience (2+ years) with the Spring Boot framework.
- Microservices: Practical experience in designing and implementing microservices.
- GitHub: Proficient in using GitHub for version control and collaboration.
- REST Principles: Good understanding of RESTful services and API design.
- Jira: Familiarity with Jira for project management and issue tracking.
- Git: Strong understanding of Git for version control.
- IntelliJ: Experience using IntelliJ IDEA as a development environment.
- Kafka: Experience with Apache Kafka for real-time data streaming.
Locations: Mumbai, Delhi NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Keywords: Maven, Spring Boot, GitHub, REST principles, Jira, Git, IntelliJ, Apache Kafka, Java*, SpringBoot*, Microservices*, Kafka*
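For illustration, here is a minimal sketch of the kind of stack this role describes: a Spring Boot REST endpoint publishing to Kafka via spring-kafka, with a listener consuming the same topic. The topic name, class names, and endpoint are assumptions for illustration, not part of the posting; a spring-kafka dependency and a configured bootstrap server are assumed.

```java
// Minimal Spring Boot service wiring a REST endpoint to a Kafka topic via spring-kafka.
// Topic name ("orders") and class names are illustrative only.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
public class OrderServiceApplication {
    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}

@RestController
class OrderController {
    private final KafkaTemplate<String, String> kafkaTemplate;

    OrderController(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Accept an order payload over REST and publish it to the "orders" topic.
    @PostMapping("/orders")
    public String createOrder(@RequestBody String orderJson) {
        kafkaTemplate.send("orders", orderJson);
        return "accepted";
    }
}

@org.springframework.stereotype.Component
class OrderEventListener {
    // Consume the same topic asynchronously, e.g. in a downstream microservice.
    @KafkaListener(topics = "orders", groupId = "order-processors")
    public void onOrder(String orderJson) {
        System.out.println("Processing order event: " + orderJson);
    }
}
```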
Posted 3 months ago
6.0 - 8.0 years
15 - 18 Lacs
Bhubaneswar, Hyderabad, Bengaluru
Work from Office
Client is looking for a strong Java candidate with the following skills. Spring WebFlux and streaming knowledge is a must and is the key thing they are looking for. Here is the overall JD for the position.
- RDBMS - At least 1 year - Required
- CI/CD - 2-5 years - Required
- Cloud Computing - 2-5 years - Required
- Core Java - 5-10 years - Required
- Kubernetes - 2-5 years - Required
- Microservices - 2-5 years - Required
- MongoDB - At least 1 year - Nice to have
- NoSQL - At least 1 year - Nice to have
- Python - At least 1 year - Required
- Spring Boot - 5-10 years - Required
- Spring Data - 2-5 years - Required
- Spring Security - 2-5 years - Required
- Spring WebFlux - At least 1 year - Required
- Stream processing - At least 1 year - Required
- Java 17 - 2-5 years - Required
- Apache Kafka - At least 1 year - Required
- Apache Solr - At least 1 year - Required
Expertise with solution design and large-scale enterprise application development. In-depth knowledge of integration patterns, integration technologies, and integration platforms. Experience with queuing-related technologies like Kafka. Good hands-on experience designing and building cloud-ready applications. Good programming skills in Java, Python, etc. Proficiency with dev/build tools: Git, Maven, Gradle. Experience with modern NoSQL/Graph DB/data streaming technologies is a plus. Good understanding of Agile software development methodology.
Location: Hyderabad, Mangalore, Bhubaneswar, Trivandrum
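Since Spring WebFlux with streaming is called out as the key skill, a minimal sketch of a reactive streaming endpoint may help set expectations. The application class, endpoint path, and payload are assumptions for illustration only; a real service would typically back the Flux with a reactive repository or a Kafka-to-Reactor bridge rather than a timer.

```java
// Minimal Spring WebFlux application exposing a server-sent-event streaming endpoint.
import java.time.Duration;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import reactor.core.publisher.Flux;

@SpringBootApplication
public class StreamingApplication {
    public static void main(String[] args) {
        SpringApplication.run(StreamingApplication.class, args);
    }
}

@RestController
class PriceStreamController {
    // Emits one event per second as a continuous SSE stream; the payload is a placeholder.
    @GetMapping(value = "/prices/stream", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> streamPrices() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(tick -> "{\"tick\":" + tick + ",\"price\":" + (100 + tick % 5) + "}");
    }
}
```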
Posted 3 months ago
10 - 20 years
20 - 35 Lacs
Gurugram, Bengaluru
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract To Hire (C2H) role. Location: Gurgaon/Bengaluru. Payroll: BCforward. Work Mode: Hybrid. JD Skills: Java; Apache Kafka; AWS; microservices; event-driven architecture. Experienced Java engineer with over 10 years of experience and expertise in microservices and event-driven architecture. Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com. Note: Looking for immediate to 15-day joiners at most. All the best.
Posted 4 months ago
4 - 9 years
3 - 8 Lacs
Thane, Navi Mumbai, Mumbai (All Areas)
Work from Office
J2EE using MVC frameworks (JSP/Servlet/Web Services/JSF/Struts/Spring); knowledge of Java, JSP, Servlet, Oracle, MySQL, Apache Solr, Apache Kafka, Bootstrap, Hibernate/JDBC, JBoss, Apache, JUnit, jQuery, JavaScript, Java Web Services, SOAP, REST API, JSON. Required Candidate Profile: Multithreading, Linux, JBoss, microservices, Agile methodology, GitHub & SVN, MySQL, Oracle, MVC frameworks, application & web servers, data structures, basic networking, performing troubleshooting.
Posted 4 months ago
7 - 12 years
11 - 14 Lacs
Chennai
Remote
Job Summary: We are seeking a highly skilled and experienced Microservices Developer with strong expertise in Java Spring Boot, AWS ROSA (Red Hat OpenShift Service on AWS), and event-driven architecture. The ideal candidate will be responsible for developing scalable and secure microservices, integrating enterprise systems including Microsoft Dynamics CRM, and ensuring high code quality and security standards.
Key Responsibilities:
- Design, develop, and deploy Java Spring Boot-based microservices on the AWS ROSA platform
- Leverage Kong API Gateway and the KPI Management Platform for API governance, observability, and traffic control
- Create and manage API proxies, plugins, and integrations in Kong for secure and monitored API access
- Implement event-driven architectures using Apache Kafka
- Write unit tests using JUnit and ensure code quality with SonarQube
- Perform secure coding and address vulnerabilities using Veracode (SAST & DAST)
- Integrate with Microsoft Dynamics CRM and other internal enterprise systems
- Work with AWS CI/CD pipelines and automate build and deployment workflows
- Apply integration design patterns for robust and maintainable system interfaces
- Participate in Agile/Scrum ceremonies and collaborate in a cross-functional team
- Maintain technical documentation including API specs, architecture diagrams, and integration flows
- Troubleshoot and resolve technical issues across environments
- Participate in code reviews and support mentoring of junior developers
Required Skills and Qualifications:
- 7+ years of experience in Java Spring Boot development
- Proven experience deploying microservices on AWS ROSA (Red Hat OpenShift on AWS)
- Hands-on experience with Kong API Gateway, including API proxy setup, plugins, routing, and KPI monitoring
- Strong understanding of Kafka, event-driven design, and asynchronous communication patterns
- Hands-on experience with JUnit, SonarQube, and Veracode (static & dynamic analysis)
- Experience working with AWS services: EC2, EKS, S3, Lambda, RDS, CloudWatch, etc.
- Proficient in creating and maintaining CI/CD pipelines using AWS CodePipeline or similar tools
- Demonstrated ability to integrate with Microsoft Dynamics CRM and enterprise systems
- Knowledge of integration patterns, API design (REST/JSON), and OAuth2/SAML authentication
- Strong background in Agile/Scrum delivery models
- Excellent verbal and written communication skills
- Ability to work independently in remote environments while aligning with the Singapore time zone
Preferred Qualifications:
- Experience with Kong Enterprise features such as Dev Portal, Analytics, or Service Hub
- Experience with the Red Hat OpenShift CLI and deployment automation
- Familiarity with Docker, Helm, and Terraform
- Exposure to DevSecOps pipelines
- Certification in AWS (Developer Associate or Architect) or Red Hat OpenShift is a plus
Work Conditions:
- Remote work setup
- Must be available during Singapore working hours (GMT+8)
- Laptop and VPN access will be provided
- Collaborative team culture with frequent standups and technical reviews
Posted 4 months ago
8 - 10 years
27 - 32 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
The Team: As a Senior Lead Machine Learning Engineer on the Document Platforms and AI Team, you will play a critical role in building the next generation of data extraction tools, working on cutting-edge ML-powered products and capabilities that power natural language understanding, information retrieval, and data sourcing solutions for the Enterprise Data Organization and our clients. This is an exciting opportunity to shape the future of data transformation and see your work make a real difference, all while having fun in a collaborative and engaging environment. You'll spearhead the development and deployment of production-ready AI products and pipelines, leading by example and mentoring a talented team. This role demands a deep understanding of machine learning principles, hands-on experience with relevant technologies, and the ability to inspire and guide others. You'll be at the forefront of a rapidly evolving field, learning and growing alongside some of the brightest minds in the industry. If you're passionate about AI, driven to make an impact, and thrive in a dynamic and supportive workplace, we encourage you to join us!
The Impact: The Document Platforms and AI team has already delivered breakthrough products and significant business value over the last 3 years. In this role you will be developing our next generation of new products while enhancing existing ones, aiming at solving high-impact business problems.
What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Collaborate with a highly skilled and technically strong team
- Contribute to solving high-complexity, high-impact problems
Responsibilities:
- Build production-ready data acquisition and transformation pipelines from ideation to deployment.
- Be a hands-on problem solver and developer, helping to extend and manage the data platforms.
- Apply best practices in data modeling and building ETL pipelines (streaming and batch) using cloud-native solutions.
- Technical leadership: Drive the technical vision and architecture for the extraction project, making key decisions about model selection, infrastructure, and deployment strategies.
- Model development: Design, develop, and evaluate state-of-the-art machine learning models for information extraction, leveraging techniques from NLP, computer vision (if applicable), and other relevant domains.
- Data preprocessing and feature engineering: Develop robust pipelines for data cleaning, preprocessing, and feature engineering to prepare data for model training.
- Model training and evaluation: Train, tune, and evaluate machine learning models, ensuring high accuracy, efficiency, and scalability.
- Deployment and monitoring: Deploy and maintain machine learning models in a production environment, monitoring their performance and ensuring their reliability.
- Research and innovation: Stay up-to-date with the latest advancements in machine learning and NLP, and explore new techniques and technologies to improve the extraction process.
- Collaboration: Work closely with product managers, data scientists, and other engineers to understand project requirements and deliver effective solutions.
- Code quality and best practices: Ensure high code quality and adherence to best practices for software development.
- Communication: Effectively communicate technical concepts and project updates to both technical and non-technical audiences.
What We're Looking For:
- 8-10 years of professional software work experience, with a strong focus on Machine Learning, Natural Language Processing (NLP) for information extraction, and MLOps
- Expertise in Python and related NLP libraries (e.g., spaCy, NLTK, Transformers, Hugging Face)
- Experience with Apache Spark or other distributed computing frameworks for large-scale data processing
- AWS/GCP cloud expertise, particularly in deploying and scaling ML pipelines for NLP tasks
- Solid understanding of the machine learning model lifecycle, including data preprocessing, feature engineering, model training, evaluation, deployment, and monitoring, specifically for information extraction models
- Experience with CI/CD pipelines for ML models, including automated testing and deployment
- Docker & Kubernetes experience for containerization and orchestration
- OOP design patterns, test-driven development, and enterprise system design
- SQL (any variant; bonus if this is a big data variant)
- Linux OS (e.g., bash toolset and other utilities)
- Version control system experience with Git, GitHub, or Azure DevOps
- Excellent problem-solving, code review, and debugging skills
- Software craftsmanship, adherence to Agile principles, and taking pride in writing good code
- Techniques to communicate change to non-technical people
Nice to have:
- Core Java 17+, preferably Java 21+, and the associated toolchain
- Apache Avro
- Apache Kafka
- Other JVM-based languages, e.g. Kotlin, Scala
- C#, in particular .NET Core
Posted 4 months ago
2 - 3 years
4 - 5 Lacs
Bengaluru
Work from Office
We're looking for engineers who love to create elegant, easy-to-use interfaces and enjoy new JavaScript technologies as they show up every day, particularly ReactJS. You will help drive our technology selection and will coach your team on how to use these new technologies effectively in a production platform development environment. We need our engineers to be versatile, display leadership qualities, and be enthusiastic to tackle new problems across the full stack as we continue to push our technology forward.
Responsibilities:
- Design, develop, test, deploy, maintain, and improve software
- Manage individual project priorities, deadlines, and deliverables
- Keep software components loosely coupled as we grow
- Contribute improvements to our continuous delivery infrastructure
- Participate in recruiting and mentoring of top engineering talent
- Drive roadmap execution and incorporate customer feedback into the product
- Develop, collaborate on, and execute Agile development and product scenarios in order to release high-quality software on a regular cadence
- Proactively assist your team to find and solve development and production software issues through effective collaboration
- Work with company stakeholders including PM, PO, customer-facing teams, DevOps, and Support to communicate and collaborate on execution
Desirable:
- Contribute to framework selection, microservice extraction, and deployment in on-premise and SaaS scenarios
- Experience with troubleshooting, profiling, and debugging applications
- Familiarity with web debugging tools (Chrome developer tools, Fiddler, etc.) is a plus
- Experience with different databases (Elasticsearch, Impala, HDFS, Mongo, etc.) is a plus
- Basic Git command knowledge is a plus
- Messaging systems (e.g. RabbitMQ, Apache Kafka, ActiveMQ, AWS SQS, Azure Service Bus, Google Pub/Sub)
- Cloud solutions (e.g. AWS, Google Cloud Platform, Microsoft Azure)
Personal Skills:
- Strong written and verbal communication skills to collaborate with developers, testers, product owners, scrum masters, directors, and executives
- Experience taking part in the decision-making process in application code design, solution development, and code review
- Strong work ethic and emotional intelligence, including being on time for meetings
- Ability to work in a fast-changing environment and embrace change while still following a greater plan
Qualifications / Requirements:
- BS or MS degree in Computer Science or a related field, or equivalent job experience
- 2-3 years of experience in web applications; any experience building web IDEs and ETL-driven web apps is a plus
- Strong knowledge of and experience in C# (2+ years)
- Experience with ReactJS and microservices (2+ years)
- Experience with CI/CD pipelines
- Experience with relational databases, hands-on experience with SQL queries
- Strong experience with several JavaScript frameworks and tools, such as React and Node
- Strong knowledge of REST APIs
- Experience with Atlassian suite products such as JIRA, Bitbucket, and Confluence
- Strong knowledge of Computer Science and computing theory: paradigms & principles (OOP, SOLID), database theory (RDBMS), code testing practices, algorithms, data structures, design patterns
- Understanding of network interactions: protocol conventions (e.g. REST, RPC), authentication and authorization flows, standards and practices (e.g. OAuth, JWT)
Posted 4 months ago
6.0 - 10.0 years
8 - 18 Lacs
Kolkata
Work from Office
Lead a team of Blockchain Developers; write and review high-quality code. Design and build Blockchain frameworks, accelerators, and assets. Design and deploy smart contracts on Ethereum and Layer 2 sidechains. Collaborate on decentralized finance projects and TDD. Required Candidate Profile: B.Tech/MCA, 6+ years' experience in Solidity smart contract programming. Hands-on with blockchain APIs, Ethereum standards (ERC-20, ERC-721) and DeFi projects, Docker, Kubernetes, Node.js, open-source tools, React/Angular.
Posted Date not available
2.0 - 5.0 years
4 - 8 Lacs
Surat
Work from Office
Responsibilities:
- Hands-on development in Golang to deliver trustworthy and smooth functionality to our users
- Monitor, debug, and fix issues in production at high velocity based on user impact
- Maintain good code coverage for all new development, with well-written and testable code
- Write and maintain clean documentation for software services
- Integrate software components into a fully functional software system
- Comply with project plans with a sharp focus on delivery timelines
Requirements:
- Bachelor's degree in computer science, information technology, or a similar field
- Must have 3+ years of experience in developing highly scalable, performant web applications
- Strong problem-solving skills and experience in application debugging
- Hands-on experience with RESTful services development using Golang
- Hands-on working experience with databases: SQL (PostgreSQL/MySQL)
- Working experience with message streaming/queuing systems like Apache Kafka and RabbitMQ
- Cloud experience with Amazon Web Services (AWS)
- Experience with serverless architectures (AWS) would be a plus
- Hands-on experience with an API framework such as Echo
Posted Date not available
5.0 - 8.0 years
30 - 45 Lacs
Bengaluru
Work from Office
We are seeking a Senior Software Engineer with strong expertise in Java, Apache Kafka, and Angular to design, develop, and maintain high-performance, scalable enterprise applications. The ideal candidate will have hands-on experience in distributed systems, event-driven architectures, and building rich front-end applications.
Key Responsibilities:
- Design, develop, and maintain backend services using Java (Java 8+ / Spring Boot).
- Develop and maintain real-time data streaming solutions using Apache Kafka.
- Create intuitive, responsive, and dynamic UI components using Angular (Angular 10+).
- Collaborate with architects, business analysts, and other engineers to define technical solutions.
- Implement RESTful APIs and integrate them with front-end and external systems.
- Optimize application performance, scalability, and reliability.
- Write clean, maintainable, and well-tested code.
- Participate in code reviews and mentor junior developers.
- Work in an Agile/Scrum environment, participating in sprint planning, stand-ups, and retrospectives.
- Troubleshoot production issues and ensure timely resolution.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-8+ years of professional software development experience.
- Strong proficiency in Java (Java 8 or above) and Spring Boot.
- Hands-on experience with Apache Kafka for real-time event streaming.
- Proficiency in Angular (preferably Angular 10+) with TypeScript, HTML5, and CSS3.
- Experience building RESTful APIs and integrating microservices.
- Solid understanding of data structures, algorithms, and design patterns.
- Experience with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, or similar).
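As an illustration of the "real-time data streaming solutions using Apache Kafka" responsibility, here is a minimal Kafka Streams topology in Java. The topic names, broker address, and the enrichment step are assumptions for illustration, not taken from the posting.

```java
// Minimal Kafka Streams topology: read events from an input topic, filter/transform them,
// and write the result to an output topic. Topic names and the transformation are illustrative.
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class EventEnricher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-enricher");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> events = builder.stream("raw-events");
        events.filter((key, value) -> value != null && !value.isEmpty())
              .mapValues(value -> value.toUpperCase())   // placeholder enrichment step
              .to("enriched-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        // Close the topology cleanly when the JVM shuts down.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```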
Posted Date not available
4.0 - 8.0 years
4 - 9 Lacs
Hyderabad
Remote
Job Title: Data Engineer - GenAI Applications
Company: Amzur Technologies
Location: Hyderabad / Visakhapatnam / Remote (India)
Experience: 4-8 Years
Notice Period: Immediate to 15 Days (Preferred)
Employment Type: Full-Time
Position Overview: We are looking for a skilled and passionate Data Engineer to join our GenAI Applications team. This role offers the opportunity to work at the intersection of traditional data engineering and cutting-edge AI/ML systems, helping us build scalable, cloud-native data infrastructure to support innovative Generative AI solutions.
What We're Looking For - Required Skills & Experience:
- 4-8 years of experience in data engineering or related fields.
- Strong programming skills in Python and SQL, with experience in large-scale data processing.
- Proficient with cloud platforms (AWS, Azure, GCP) and native data services.
- Experience with open-source tools such as Apache NiFi, MLflow, or similar platforms.
- Hands-on experience with Apache Spark, Kafka, and Airflow.
- Skilled in working with both SQL and NoSQL databases, including performance tuning.
- Familiarity with modern data warehouses: Snowflake, Redshift, or BigQuery.
- Proven experience building scalable pipelines for batch and real-time processing.
- Experience implementing CI/CD pipelines and performance optimization in data workflows.
Key Responsibilities:
Data Pipeline Development
- Design and optimize robust, scalable data pipelines for AI/ML model training and inference
- Enable batch and real-time data processing using big data technologies
- Collaborate with GenAI engineers to understand and meet data requirements
Cloud Infrastructure & Tools
- Build and manage cloud-native data infrastructure using AWS, Azure, or Google Cloud
- Implement Infrastructure as Code (IaC) using tools like Terraform or CloudFormation
- Ensure data reliability through monitoring and alerting systems
Preferred Skills:
- Understanding of machine learning workflows and MLOps practices
- Familiarity with Generative AI concepts such as LLMs, RAG systems, and vector databases
- Experience implementing data quality frameworks and performance optimization
- Knowledge of model deployment pipelines and monitoring best practices
Posted Date not available
10.0 - 20.0 years
25 - 40 Lacs
Gurugram, Bengaluru
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED. Contract To Hire (C2H) role. Location: Bengaluru/Gurgaon. Payroll: BCforward. Work Mode: Hybrid. JD Skills: Java; Apache Kafka; AWS. Experienced Java engineer with over 10 years of experience and expertise in microservices and event-driven architecture; Kafka expertise is a must. Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share an updated resume to g.sreekanth@bcforward.com. Note: Looking for immediate to 30-day joiners at most. All the best.
Posted Date not available
10.0 - 20.0 years
15 - 30 Lacs
Pune, Mumbai (All Areas)
Hybrid
Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience in configuring and deploying nProbe Cento in high-throughput environments.
Overview: We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensure the scalability, observability, and compliance of the network traffic record infrastructure.
Key Responsibilities:
- Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
- Deploy and configure nProbe Cento on telecom-grade network interfaces.
- Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
- Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
- Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
- Align the flow record schema with the Detail Record specification.
- Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
- Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
- Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
- Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
- Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
- Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
- Provide documentation, training, and handover materials for long-term operational support.
Required Skills & Qualifications:
- Proven hands-on experience with nProbe Cento in production environments.
- Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
- Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
- Familiarity with HashiCorp Vault for secrets management.
- Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
- Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
- Experience with Kafka integration, including topic configuration and message formatting.
- Familiarity with DPI technologies and application traffic classification.
- Proficiency in Linux system administration, shell scripting, and network interface tuning.
- Knowledge of telecom network interfaces and traffic tapping strategies.
- Experience with PF_RING, ntopng, and related ntop tools (preferred).
- Ability to work independently and collaboratively with cross-functional technical teams.
- Excellent documentation and communication skills.
- Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.
Posted Date not available
10.0 - 15.0 years
35 - 40 Lacs
Pune
Work from Office
Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.
Overview: We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.
Role & Responsibilities:
- Architect and deploy Apache Kafka clusters with high availability.
- Implement Kafka MirrorMaker for cross-site replication and disaster recovery readiness.
- Integrate Kafka with upstream flow record sources using IPFIX-compatible plugins.
- Configure Kafka topics, partitions, replication, and retention policies based on data flow requirements.
- Set up TLS/SSL encryption, Kerberos authentication, and access control using Apache Ranger.
- Monitor Kafka performance using Prometheus, Grafana, or Cloudera Manager and ensure proactive alerting.
- Perform capacity planning, cluster upgrades, patching, and performance tuning.
- Ensure audit logging, compliance with enterprise security standards, and integration with SIEM tools.
- Collaborate with solution architects and Kafka developers to align infrastructure with data pipeline needs.
- Maintain operational documentation and SOPs, and support SIT/UAT and production rollout activities.
Preferred Candidate Profile:
- Proven experience with Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Strong understanding of IPFIX, nProbe Cento, and network flow data ingestion.
- Hands-on experience with Apache Spark (Structured Streaming) and modern data lake or DWH platforms.
- Familiarity with Cloudera Data Platform, HDFS, YARN, Ranger, and Knox.
- Deep knowledge of data security protocols, encryption, and governance frameworks.
- Excellent communication, documentation, and stakeholder management skills.
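As a rough illustration of the "configure Kafka topics, partitions, replication, and retention policies" responsibility, the sketch below creates a topic programmatically with the Kafka AdminClient. The broker addresses, topic name, and sizing values are assumptions for illustration; in practice such settings are driven by the actual data flow requirements.

```java
// Creating a Kafka topic with explicit partition count, replication factor, and retention,
// using the Kafka AdminClient. Broker addresses, topic name, and values are illustrative.
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateFlowTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 12 partitions, replication factor 3, 7-day retention, 2 in-sync replicas required.
            NewTopic topic = new NewTopic("flow-records", 12, (short) 3)
                    .configs(Map.of(
                            "retention.ms", String.valueOf(7L * 24 * 60 * 60 * 1000),
                            "min.insync.replicas", "2"));
            admin.createTopics(Collections.singleton(topic)).all().get();
            System.out.println("Topic created: flow-records");
        }
    }
}
```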
Posted Date not available
3.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Type: Contract
Experience Level: 3+ Years
Job Overview: We are seeking an experienced Data Engineer to join our dynamic team. As a Data Engineer, you will be responsible for designing, building, and maintaining data pipelines, processing large-scale datasets, and ensuring data availability for analytics. The ideal candidate will have a strong background in distributed systems, database design, and data engineering practices, with hands-on experience working with modern data technologies.
Key Responsibilities:
- Design, implement, and optimize data pipelines using tools like Spark, Kafka, and Airflow to handle large-scale data processing and ETL tasks.
- Work with various data storage systems (e.g., PostgreSQL, MySQL, NoSQL databases) to ensure efficient and reliable data storage and retrieval.
- Collaborate with data scientists, analysts, and other stakeholders to design solutions that meet business needs and data requirements.
- Develop and maintain robust, scalable, and efficient data architectures and data warehousing solutions.
- Process structured and unstructured data from diverse sources, ensuring data is cleansed, transformed, and loaded effectively.
- Optimize query performance and troubleshoot database issues to ensure high data availability and minimal downtime.
- Implement data governance practices to ensure data integrity, security, and compliance.
- Participate in code reviews, knowledge sharing, and continuous improvement of team processes.
Required Skills & Experience:
- Minimum of 3+ years of relevant hands-on experience in data engineering.
- Extensive experience with distributed systems (e.g., Apache Spark, Apache Kafka) for large-scale data processing.
- Proficiency in SQL and experience working with relational databases like PostgreSQL and MySQL, as well as NoSQL technologies.
- Strong understanding of data warehousing concepts, ETL processes, and data pipeline design.
- Experience building and managing data pipelines using Apache Airflow or similar orchestration tools.
- Hands-on experience in data modeling, schema design, and optimizing database performance.
- Solid understanding of cloud-based data solutions (e.g., AWS, GCP, Azure); familiarity with cloud-native data tools is a plus.
- Ability to work collaboratively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.
Preferred Skills:
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data lakes, data mesh, or data fabric architectures.
- Knowledge of machine learning pipelines or frameworks is a plus.
- Experience with CI/CD pipelines for data engineering workflows.
Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
Mode of Work: 3 days work from office / 2 days work from home.
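To illustrate the Spark-plus-Kafka pipeline work this role describes, here is a minimal Spark Structured Streaming job, shown in Java here (such pipelines are often written in Scala or Python instead). The bootstrap servers and topic name are assumptions, and the job assumes the spark-sql-kafka connector is on the classpath.

```java
// Spark Structured Streaming job that reads messages from a Kafka topic and writes them
// to the console. A real pipeline would parse the payload and write to a data lake table
// instead of the console sink.
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class KafkaToConsole {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
                .appName("kafka-ingest-demo")
                .master("local[*]")
                .getOrCreate();

        Dataset<Row> raw = spark.readStream()
                .format("kafka")
                .option("kafka.bootstrap.servers", "localhost:9092")
                .option("subscribe", "events")
                .load();

        // Kafka rows carry binary key/value columns; cast them to strings for inspection.
        Dataset<Row> decoded = raw.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)");

        StreamingQuery query = decoded.writeStream()
                .format("console")
                .outputMode("append")
                .start();
        query.awaitTermination();
    }
}
```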
Posted Date not available
10.0 - 13.0 years
30 - 40 Lacs
Pune
Work from Office
Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.
Overview: We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.
Role & Responsibilities:
- Architect and deploy Apache Kafka clusters with high availability.
- Implement Kafka MirrorMaker for cross-site replication and disaster recovery readiness.
- Integrate Kafka with upstream flow record sources using IPFIX-compatible plugins.
- Configure Kafka topics, partitions, replication, and retention policies based on data flow requirements.
- Set up TLS/SSL encryption, Kerberos authentication, and access control using Apache Ranger.
- Monitor Kafka performance using Prometheus, Grafana, or Cloudera Manager and ensure proactive alerting.
- Perform capacity planning, cluster upgrades, patching, and performance tuning.
- Ensure audit logging, compliance with enterprise security standards, and integration with SIEM tools.
- Collaborate with solution architects and Kafka developers to align infrastructure with data pipeline needs.
- Maintain operational documentation and SOPs, and support SIT/UAT and production rollout activities.
Preferred Candidate Profile:
- Proven experience with Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Strong understanding of IPFIX, nProbe Cento, and network flow data ingestion.
- Hands-on experience with Apache Spark (Structured Streaming) and modern data lake or DWH platforms.
- Familiarity with Cloudera Data Platform, HDFS, YARN, Ranger, and Knox.
- Deep knowledge of data security protocols, encryption, and governance frameworks.
- Excellent communication, documentation, and stakeholder management skills.
Posted Date not available
6.0 - 10.0 years
16 - 30 Lacs
Pune, Chennai
Hybrid
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines using Scala and Apache Spark for large-scale batch and real-time data processing.
- Real-time Streaming: Develop and manage high-throughput, low-latency data ingestion and streaming applications using Apache Kafka (producers, consumers, Kafka Streams, or ksqlDB where applicable).
- Spark Expertise: Apply in-depth knowledge of Spark internals, Spark SQL, the DataFrames API, and RDDs. Optimize Spark jobs for performance, efficiency, and resource utilization through meticulous tuning (e.g., partitioning, caching, shuffle optimizations).
- Data Modeling & SQL: Design and implement efficient data models for various analytical workloads (e.g., dimensional modeling, star/snowflake schemas, data lakehouse architectures). Write complex SQL queries for data extraction, transformation, and validation.
- Data Quality & Governance: Implement and enforce data quality checks, validation rules, and data governance standards within pipelines to ensure accuracy, completeness, and consistency of data.
- Performance Monitoring & Troubleshooting: Monitor data pipeline performance, identify bottlenecks, and troubleshoot complex issues in production environments.
- Collaboration: Work closely with data architects, data scientists, data analysts, and cross-functional engineering teams to understand data requirements, define solutions, and deliver high-quality data products.
- Code Quality & Best Practices: Write clean, maintainable, and well-tested code. Participate in code reviews, contribute to architectural discussions, and champion data engineering best practices.
- Documentation: Create and maintain comprehensive technical documentation, including design specifications, data flow diagrams, and operational procedures.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 3+ years of hands-on experience as a Data Engineer or in a similar role focused on Big Data.
- Expert-level proficiency in Scala for developing robust and scalable data applications.
- Strong hands-on experience with Apache Spark, including Spark Core, Spark SQL, and the DataFrames API. Proven ability to optimize Spark jobs.
- Solid experience with Apache Kafka for building real-time data streaming solutions (producer/consumer APIs, stream processing concepts).
- Advanced SQL skills for data manipulation, analysis, and validation.
- Experience with distributed file systems (e.g., HDFS) and object storage (e.g., Amazon S3, Azure Data Lake Storage, Google Cloud Storage).
- Familiarity with data warehousing concepts and methodologies.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and debugging skills.
- Strong communication and collaboration abilities, with a passion for building data solutions.
Posted Date not available
10.0 - 16.0 years
3 - 8 Lacs
Pune
Work from Office
Overview: We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.
Required Skills & Qualifications:
- Proven experience in developing Kafka producers and consumers for real-time data ingestion pipelines.
- Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
- Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
- Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
- Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
- Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
- Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
- Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
- Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).
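As a minimal sketch of the "Kafka producers and consumers for real-time data ingestion" skill this posting asks for, the example below publishes and reads one flow-record-style JSON message in Java. The broker address, the "flow-records" topic name, and the payload fields are assumptions, not taken from the posting.

```java
// Minimal Kafka producer/consumer pair for a flow-record ingestion topic.
// Broker address, topic name, and the JSON payload are illustrative only.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

public class FlowRecordPipeline {
    public static void main(String[] args) {
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", StringSerializer.class.getName());
        producerProps.put("value.serializer", StringSerializer.class.getName());

        // Publish one flow record as JSON, keyed by source IP.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            String record = "{\"srcIp\":\"10.0.0.1\",\"dstIp\":\"10.0.0.2\",\"bytes\":1420}";
            producer.send(new ProducerRecord<>("flow-records", "10.0.0.1", record));
        }

        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "flow-ingestors");
        consumerProps.put("auto.offset.reset", "earliest");
        consumerProps.put("key.deserializer", StringDeserializer.class.getName());
        consumerProps.put("value.deserializer", StringDeserializer.class.getName());

        // Poll once and print whatever arrived; a real consumer would loop and commit offsets.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("flow-records"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> r : records) {
                System.out.printf("offset=%d key=%s value=%s%n", r.offset(), r.key(), r.value());
            }
        }
    }
}
```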
Posted Date not available