2.0 - 5.0 years
4 - 8 Lacs
Surat
Work from Office
Responsibilities:
- Hands-on development in Golang to deliver trustworthy and smooth functionality to our users
- Monitor, debug, and fix production issues at high velocity based on user impact
- Maintain good code coverage for all new development, with well-written and testable code
- Write and maintain clean documentation for software services
- Integrate software components into a fully functional software system
- Comply with project plans, with a sharp focus on delivery timelines

Requirements:
- Bachelor's degree in computer science, information technology, or a similar field
- 3+ years of experience developing highly scalable, performant web applications
- Strong problem-solving skills and experience in application debugging
- Hands-on experience developing RESTful services in Golang
- Hands-on experience with SQL databases (PostgreSQL/MySQL)
- Working experience with message streaming/queuing systems such as Apache Kafka and RabbitMQ
- Cloud experience with Amazon Web Services (AWS)
- Experience with serverless architectures on AWS would be a plus
- Hands-on experience building APIs with the Echo framework
5.0 - 8.0 years
30 - 45 Lacs
Bengaluru
Work from Office
We are seeking a Senior Software Engineer with strong expertise in Java, Apache Kafka, and Angular to design, develop, and maintain high-performance, scalable enterprise applications. The ideal candidate will have hands-on experience in distributed systems, event-driven architectures, and building rich front-end applications.

Key Responsibilities:
- Design, develop, and maintain backend services using Java (Java 8+ / Spring Boot).
- Develop and maintain real-time data streaming solutions using Apache Kafka.
- Create intuitive, responsive, and dynamic UI components using Angular (Angular 10+).
- Collaborate with architects, business analysts, and other engineers to define technical solutions.
- Implement RESTful APIs and integrate them with front-end and external systems.
- Optimize application performance, scalability, and reliability.
- Write clean, maintainable, and well-tested code.
- Participate in code reviews and mentor junior developers.
- Work in an Agile/Scrum environment, participating in sprint planning, stand-ups, and retrospectives.
- Troubleshoot production issues and ensure timely resolution.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5-8+ years of professional software development experience.
- Strong proficiency in Java (Java 8 or above) and Spring Boot.
- Hands-on experience with Apache Kafka for real-time event streaming.
- Proficiency in Angular (preferably Angular 10+) with TypeScript, HTML5, and CSS3.
- Experience building RESTful APIs and integrating microservices.
- Solid understanding of data structures, algorithms, and design patterns.
- Experience with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Cassandra).
- Familiarity with CI/CD pipelines (Jenkins, GitLab, or similar).
4.0 - 8.0 years
4 - 9 Lacs
Hyderabad
Remote
Job Title: Data Engineer - GenAI Applications
Company: Amzur Technologies
Location: Hyderabad / Visakhapatnam / Remote (India)
Experience: 4-8 Years
Notice Period: Immediate to 15 Days (Preferred)
Employment Type: Full-Time

Position Overview
We are looking for a skilled and passionate Data Engineer to join our GenAI Applications team. This role offers the opportunity to work at the intersection of traditional data engineering and cutting-edge AI/ML systems, helping us build scalable, cloud-native data infrastructure to support innovative Generative AI solutions.

What We're Looking For - Required Skills & Experience
- 4-8 years of experience in data engineering or related fields.
- Strong programming skills in Python and SQL, with experience in large-scale data processing.
- Proficient with cloud platforms (AWS, Azure, GCP) and their native data services.
- Experience with open-source tools such as Apache NiFi, MLflow, or similar platforms.
- Hands-on experience with Apache Spark, Kafka, and Airflow.
- Skilled in working with both SQL and NoSQL databases, including performance tuning.
- Familiarity with modern data warehouses: Snowflake, Redshift, or BigQuery.
- Proven experience building scalable pipelines for batch and real-time processing.
- Experience implementing CI/CD pipelines and performance optimization in data workflows.

Key Responsibilities
Data Pipeline Development
- Design and optimize robust, scalable data pipelines for AI/ML model training and inference.
- Enable batch and real-time data processing using big data technologies.
- Collaborate with GenAI engineers to understand and meet data requirements.

Cloud Infrastructure & Tools
- Build and manage cloud-native data infrastructure using AWS, Azure, or Google Cloud.
- Implement Infrastructure as Code (IaC) using tools like Terraform or CloudFormation.
- Ensure data reliability through monitoring and alerting systems.

Preferred Skills
- Understanding of machine learning workflows and MLOps practices.
- Familiarity with Generative AI concepts such as LLMs, RAG systems, and vector databases.
- Experience implementing data quality frameworks and performance optimization.
- Knowledge of model deployment pipelines and monitoring best practices.
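For a sense of the orchestration work this role involves, here is a minimal sketch of a daily batch pipeline as an Airflow DAG. It assumes Airflow 2.4+, and the DAG id, task names, and extract/load helpers are illustrative placeholders, not anything specified by the listing.

```python
# Minimal sketch of a daily batch ingestion DAG (assumed Airflow 2.4+).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_raw_events(**context):
    # Placeholder: pull one day's raw events from an assumed upstream source
    # into a staging location.
    print(f"extracting events for {context['ds']}")


def transform_and_load(**context):
    # Placeholder: cleanse the staged data and load it into the curated table
    # that downstream GenAI feature pipelines would read from.
    print(f"loading curated events for {context['ds']}")


with DAG(
    dag_id="genai_feature_ingestion",   # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_raw_events",
                             python_callable=extract_raw_events)
    load = PythonOperator(task_id="transform_and_load",
                          python_callable=transform_and_load)

    extract >> load                     # simple linear dependency
```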
10.0 - 20.0 years
25 - 40 Lacs
Gurugram, Bengaluru
Hybrid
Greetings from BCforward INDIA TECHNOLOGIES PRIVATE LIMITED.

Contract-to-Hire (C2H) Role
Location: Bengaluru/Gurgaon
Payroll: BCforward
Work Mode: Hybrid

JD Skills: Java; Apache Kafka; AWS
Experienced Java engineer with over 10 years of experience and expertise in microservices and event-driven architecture; Kafka expertise is a must.

Please share your updated resume, PAN card soft copy, passport-size photo, and UAN history. Interested applicants can share their updated resume with g.sreekanth@bcforward.com.

Note: Looking for candidates who can join immediately or within 30 days at most. All the best!
10.0 - 20.0 years
15 - 30 Lacs
Pune, Mumbai (All Areas)
Hybrid
Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems)
Location: Pune, India (with travel to onsite)
Experience Required: 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience configuring and deploying nProbe Cento in high-throughput environments.

Overview:
We are seeking a specialist with deep expertise in network traffic probes, specifically nProbe Cento, to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensuring the scalability, observability, and compliance of the network traffic record infrastructure.

Key Responsibilities:
- Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval.
- Deploy and configure nProbe Cento on telecom-grade network interfaces.
- Tune probe performance using PF_RING ZC drivers for high-speed traffic capture.
- Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming.
- Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications).
- Align the flow record schema with the Detail Record specification.
- Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline.
- Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines.
- Define interface specifications, deployment topologies, and data schemas for flow records and detail records.
- Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms.
- Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards.
- Guide development and operations teams through SIT/UAT, performance tuning, and production rollout.
- Provide documentation, training, and handover materials for long-term operational support.

Required Skills & Qualifications:
- Proven hands-on experience with nProbe Cento in production environments.
- Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles.
- Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security.
- Familiarity with HashiCorp Vault for secrets management.
- Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies.
- Proven ability to design high-throughput, fault-tolerant, and cloud-native architectures.
- Experience with Kafka integration, including topic configuration and message formatting.
- Familiarity with DPI technologies and application traffic classification.
- Proficiency in Linux system administration, shell scripting, and network interface tuning.
- Knowledge of telecom network interfaces and traffic tapping strategies.
- Experience with PF_RING, ntopng, and related ntop tools (preferred).
- Ability to work independently and collaboratively with cross-functional technical teams.
- Excellent documentation and communication skills.
- Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous.
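To illustrate the downstream side of the probe-to-Kafka integration this role describes, here is a minimal consumer sketch using the confluent-kafka Python client. The broker address, topic name ("flows.ipfix"), consumer group, the assumption that flow records arrive as JSON, and the field names shown are all illustrative; the actual export format depends on the probe's configured template.

```python
# Minimal sketch: consume flow records exported to Kafka by a probe such as
# nProbe Cento. Broker, topic, group id, and field names are assumptions.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9092",   # assumed broker address
    "group.id": "flow-intelligence",        # assumed consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["flows.ipfix"])         # assumed export topic

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        flow = json.loads(msg.value())
        # Inspect a few typical flow fields; exact keys depend on the probe's
        # export template, so treat these as placeholders.
        print(flow.get("IPV4_SRC_ADDR"),
              flow.get("IPV4_DST_ADDR"),
              flow.get("L7_PROTO_NAME"))
finally:
    consumer.close()
```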
10.0 - 15.0 years
35 - 40 Lacs
Pune
Work from Office
Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.

Overview:
We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.

Role & Responsibilities:
- Architect and deploy Apache Kafka clusters with high availability.
- Implement Kafka MirrorMaker for cross-site replication and disaster recovery readiness.
- Integrate Kafka with upstream flow record sources using IPFIX-compatible plugins.
- Configure Kafka topics, partitions, replication, and retention policies based on data flow requirements.
- Set up TLS/SSL encryption, Kerberos authentication, and access control using Apache Ranger.
- Monitor Kafka performance using Prometheus, Grafana, or Cloudera Manager and ensure proactive alerting.
- Perform capacity planning, cluster upgrades, patching, and performance tuning.
- Ensure audit logging, compliance with enterprise security standards, and integration with SIEM tools.
- Collaborate with solution architects and Kafka developers to align infrastructure with data pipeline needs.
- Maintain operational documentation and SOPs, and support SIT/UAT and production rollout activities.

Preferred Candidate Profile:
- Proven experience with Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Strong understanding of IPFIX, nProbe Cento, and network flow data ingestion.
- Hands-on experience with Apache Spark (Structured Streaming) and modern data lake or DWH platforms.
- Familiarity with Cloudera Data Platform, HDFS, YARN, Ranger, and Knox.
- Deep knowledge of data security protocols, encryption, and governance frameworks.
- Excellent communication, documentation, and stakeholder management skills.
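As a rough illustration of the topic, partition, replication, and retention configuration work mentioned above, here is a minimal sketch using the confluent-kafka AdminClient. The broker address, topic name, and the specific sizing and retention values are assumptions chosen for the example, not values from the listing.

```python
# Minimal sketch: provision a topic with explicit partitioning, replication,
# and retention via the confluent-kafka AdminClient. Values are illustrative.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})  # assumed broker

topic = NewTopic(
    "flow-records",                 # hypothetical topic name
    num_partitions=12,              # sized for assumed ingest parallelism
    replication_factor=3,           # tolerate a single broker failure
    config={
        "retention.ms": str(7 * 24 * 60 * 60 * 1000),  # 7-day retention
        "min.insync.replicas": "2",                     # durability floor
    },
)

futures = admin.create_topics([topic])
for name, fut in futures.items():
    try:
        fut.result()                # raises if topic creation failed
        print(f"created topic {name}")
    except Exception as exc:
        print(f"failed to create {name}: {exc}")
```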
3.0 - 8.0 years
10 - 15 Lacs
Bengaluru
Work from Office
Job Type: Contract
Experience Level: 3+ Years

Job Overview:
We are seeking an experienced Data Engineer to join our dynamic team. As a Data Engineer, you will be responsible for designing, building, and maintaining data pipelines, processing large-scale datasets, and ensuring data availability for analytics. The ideal candidate will have a strong background in distributed systems, database design, and data engineering practices, with hands-on experience working with modern data technologies.

Key Responsibilities:
- Design, implement, and optimize data pipelines using tools like Spark, Kafka, and Airflow to handle large-scale data processing and ETL tasks.
- Work with various data storage systems (e.g., PostgreSQL, MySQL, NoSQL databases) to ensure efficient and reliable data storage and retrieval.
- Collaborate with data scientists, analysts, and other stakeholders to design solutions that meet business needs and data requirements.
- Develop and maintain robust, scalable, and efficient data architectures and data warehousing solutions.
- Process structured and unstructured data from diverse sources, ensuring data is cleansed, transformed, and loaded effectively.
- Optimize query performance and troubleshoot database issues to ensure high data availability and minimal downtime.
- Implement data governance practices to ensure data integrity, security, and compliance.
- Participate in code reviews, knowledge sharing, and continuous improvement of team processes.

Required Skills & Experience:
- 3+ years of relevant hands-on experience in data engineering.
- Extensive experience with distributed systems (e.g., Apache Spark, Apache Kafka) for large-scale data processing.
- Proficiency in SQL and experience working with relational databases such as PostgreSQL and MySQL, as well as NoSQL technologies.
- Strong understanding of data warehousing concepts, ETL processes, and data pipeline design.
- Experience building and managing data pipelines using Apache Airflow or similar orchestration tools.
- Hands-on experience in data modeling, schema design, and optimizing database performance.
- Solid understanding of cloud-based data solutions (e.g., AWS, GCP, Azure); familiarity with cloud-native data tools is a plus.
- Ability to work collaboratively with cross-functional teams and communicate complex technical concepts to non-technical stakeholders.

Preferred Skills:
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
- Familiarity with data lakes, data mesh, or data fabric architectures.
- Knowledge of machine learning pipelines or frameworks is a plus.
- Experience with CI/CD pipelines for data engineering workflows.

Education: Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.
Mode of Work: 3 days work from office / 2 days work from home.
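For a concrete picture of the cleanse-transform-load work described in this listing, here is a minimal PySpark batch ETL sketch. The source and target paths, bucket names, and column names (order_id, amount, created_at) are illustrative assumptions.

```python
# Minimal PySpark batch ETL sketch: read raw files, cleanse and transform,
# then write a partitioned analytical table. Paths and columns are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")  # assumed source

cleansed = (
    raw.dropDuplicates(["order_id"])                        # basic de-duplication
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())                 # drop malformed rows
       .withColumn("order_date", F.to_date("created_at"))
)

(cleansed.write
    .mode("overwrite")
    .partitionBy("order_date")                              # partition for query pruning
    .parquet("s3a://curated-bucket/orders/"))               # assumed target

spark.stop()
```

In practice a job like this would typically be triggered and retried by an orchestrator such as Airflow, which is why the listing pairs the two tools.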
10.0 - 13.0 years
30 - 40 Lacs
Pune
Work from Office
Experience Required: 10+ years overall, with 5+ years in Kafka infrastructure management and operations. Must have successfully deployed and maintained Kafka clusters in production environments, with proven experience in securing, monitoring, and scaling Kafka for enterprise-grade data streaming.

Overview:
We are seeking an experienced Kafka Administrator to lead the deployment, configuration, and operational management of Apache Kafka clusters supporting real-time data ingestion pipelines. The role involves ensuring secure, scalable, and highly available Kafka infrastructure for streaming flow records into centralized data platforms.

Role & Responsibilities:
- Architect and deploy Apache Kafka clusters with high availability.
- Implement Kafka MirrorMaker for cross-site replication and disaster recovery readiness.
- Integrate Kafka with upstream flow record sources using IPFIX-compatible plugins.
- Configure Kafka topics, partitions, replication, and retention policies based on data flow requirements.
- Set up TLS/SSL encryption, Kerberos authentication, and access control using Apache Ranger.
- Monitor Kafka performance using Prometheus, Grafana, or Cloudera Manager and ensure proactive alerting.
- Perform capacity planning, cluster upgrades, patching, and performance tuning.
- Ensure audit logging, compliance with enterprise security standards, and integration with SIEM tools.
- Collaborate with solution architects and Kafka developers to align infrastructure with data pipeline needs.
- Maintain operational documentation and SOPs, and support SIT/UAT and production rollout activities.

Preferred Candidate Profile:
- Proven experience with Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Strong understanding of IPFIX, nProbe Cento, and network flow data ingestion.
- Hands-on experience with Apache Spark (Structured Streaming) and modern data lake or DWH platforms.
- Familiarity with Cloudera Data Platform, HDFS, YARN, Ranger, and Knox.
- Deep knowledge of data security protocols, encryption, and governance frameworks.
- Excellent communication, documentation, and stakeholder management skills.
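Complementing the topic-provisioning sketch shown under the earlier Kafka Administrator listing, here is a minimal client-configuration sketch for the TLS plus Kerberos setup this role calls for, using confluent-kafka (librdkafka). The hostnames, principal, keytab path, CA bundle path, and topic name are all placeholders.

```python
# Minimal sketch: a Kafka client configured for SASL_SSL with Kerberos,
# matching the kind of secured cluster described above. All values assumed.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker1:9093",              # assumed TLS listener
    "group.id": "secure-ingest",                      # assumed consumer group
    "security.protocol": "SASL_SSL",                  # TLS transport + SASL auth
    "sasl.mechanisms": "GSSAPI",                      # Kerberos
    "sasl.kerberos.service.name": "kafka",
    "sasl.kerberos.principal": "ingest@EXAMPLE.COM",  # placeholder principal
    "sasl.kerberos.keytab": "/etc/security/keytabs/ingest.keytab",
    "ssl.ca.location": "/etc/pki/tls/certs/ca-bundle.crt",
})
consumer.subscribe(["flow-records"])                  # assumed topic
```

Topic-level authorization on top of this (who may read or write "flow-records") would be enforced by Apache Ranger policies rather than client configuration.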
6.0 - 10.0 years
16 - 30 Lacs
Pune, Chennai
Hybrid
Key Responsibilities:
- Data Pipeline Development: Design, build, and maintain robust, scalable, and efficient ETL/ELT data pipelines using Scala and Apache Spark for large-scale batch and real-time data processing.
- Real-time Streaming: Develop and manage high-throughput, low-latency data ingestion and streaming applications using Apache Kafka (producers, consumers, Kafka Streams, or ksqlDB where applicable).
- Spark Expertise: Apply in-depth knowledge of Spark internals, Spark SQL, the DataFrames API, and RDDs. Optimize Spark jobs for performance, efficiency, and resource utilization through meticulous tuning (e.g., partitioning, caching, shuffle optimizations).
- Data Modeling & SQL: Design and implement efficient data models for various analytical workloads (e.g., dimensional modeling, star/snowflake schemas, data lakehouse architectures). Write complex SQL queries for data extraction, transformation, and validation.
- Data Quality & Governance: Implement and enforce data quality checks, validation rules, and data governance standards within pipelines to ensure accuracy, completeness, and consistency of data.
- Performance Monitoring & Troubleshooting: Monitor data pipeline performance, identify bottlenecks, and troubleshoot complex issues in production environments.
- Collaboration: Work closely with data architects, data scientists, data analysts, and cross-functional engineering teams to understand data requirements, define solutions, and deliver high-quality data products.
- Code Quality & Best Practices: Write clean, maintainable, and well-tested code. Participate in code reviews, contribute to architectural discussions, and champion data engineering best practices.
- Documentation: Create and maintain comprehensive technical documentation, including design specifications, data flow diagrams, and operational procedures.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 3+ years of hands-on experience as a Data Engineer or in a similar role focused on Big Data.
- Expert-level proficiency in Scala for developing robust and scalable data applications.
- Strong hands-on experience with Apache Spark, including Spark Core, Spark SQL, and the DataFrames API. Proven ability to optimize Spark jobs.
- Solid experience with Apache Kafka for building real-time data streaming solutions (producer and consumer APIs, stream processing concepts).
- Advanced SQL skills for data manipulation, analysis, and validation.
- Experience with distributed file systems (e.g., HDFS) and object storage (e.g., Amazon S3, Azure Data Lake Storage, Google Cloud Storage).
- Familiarity with data warehousing concepts and methodologies.
- Experience with version control systems (e.g., Git).
- Excellent problem-solving, analytical, and debugging skills.
- Strong communication and collaboration abilities, with a passion for building data solutions.
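The listing calls for Scala, but as a quick sketch of the Kafka-to-Spark Structured Streaming pattern it describes, here is the equivalent flow in PySpark (the Scala DataFrame API is closely analogous). The broker, topic, event schema, sink and checkpoint paths are assumptions, and the Kafka source requires the spark-sql-kafka connector package at submit time.

```python
# Minimal Structured Streaming sketch: read JSON events from Kafka, parse
# them against an assumed schema, and append to a Parquet table.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events_stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

stream = (spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # assumed broker
    .option("subscribe", "events")                        # assumed topic
    .load())

parsed = (stream
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*"))

query = (parsed.writeStream
    .format("parquet")
    .option("path", "s3a://lakehouse/events/")            # assumed sink
    .option("checkpointLocation", "s3a://lakehouse/_chk/events/")
    .outputMode("append")
    .start())

query.awaitTermination()
```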
10.0 - 16.0 years
3 - 8 Lacs
Pune
Work from Office
Overview:
We are looking for a Kafka Developer to design and implement real-time data ingestion pipelines using Apache Kafka. The role involves integrating with upstream flow record sources, transforming and validating data, and streaming it into a centralized data lake for analytics and operational intelligence.

Required Skills & Qualifications:
- Proven experience developing Kafka producers and consumers for real-time data ingestion pipelines.
- Strong hands-on expertise in Apache Kafka, Kafka Connect, Kafka Streams, and Schema Registry.
- Proficiency in Apache Spark (Structured Streaming) for real-time data transformation and enrichment.
- Solid understanding of IPFIX, NetFlow, and network flow data formats; experience integrating with nProbe Cento is a plus.
- Experience with Avro, JSON, or Protobuf for message serialization and schema evolution.
- Familiarity with Cloudera Data Platform components such as HDFS, Hive, YARN, and Knox.
- Experience integrating Kafka pipelines with data lakes or warehouses using Parquet or Delta formats.
- Strong programming skills in Scala, Java, or Python for stream processing and data engineering tasks.
- Knowledge of Kafka security protocols including TLS/SSL, Kerberos, and access control via Apache Ranger.
- Experience with monitoring and logging tools such as Prometheus, Grafana, and Splunk.
- Understanding of CI/CD pipelines, Git-based workflows, and containerization (Docker/Kubernetes).
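To make the producer side of such an ingestion pipeline concrete, here is a minimal sketch using the confluent-kafka Python client with JSON-encoded records and a delivery callback. The broker address, topic name, and record fields are assumptions; a hardened pipeline would more likely use Avro or Protobuf with a Schema Registry, as the listing notes.

```python
# Minimal sketch: publish a JSON-encoded flow record to Kafka with a
# delivery callback. Broker, topic, and record fields are assumed.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker1:9092"})  # assumed broker


def on_delivery(err, msg):
    # Surface failures so upstream retry/alerting logic can react.
    if err is not None:
        print(f"delivery failed: {err}")


record = {"src_ip": "10.0.0.1", "dst_ip": "10.0.0.2", "bytes": 1500}  # placeholder flow
producer.produce(
    "flow-records",                       # assumed topic
    key=record["src_ip"],                 # key by source IP for partition affinity
    value=json.dumps(record),
    on_delivery=on_delivery,
)
producer.flush()                          # block until delivery completes
```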