3.0 - 5.0 years
15 - 22 Lacs
Bengaluru
Remote
Role & responsibilities:
- Design real-time data pipelines for structured and unstructured sources.
- Collaborate with analysts and data scientists to create impactful data solutions.
- Continuously improve data infrastructure based on team feedback.
- Take full ownership of complex data problems and iterate quickly.
- Promote strong documentation and engineering best practices.
- Monitor, detect, and fix data quality issues with custom tools (see the sketch below).

Preferred candidate profile:
- Experience with big data tools like Spark, Hadoop, Hive, and Kafka.
- Proficient in SQL and working with relational databases.
- Hands-on experience with cloud platforms (AWS, GCP, or Azure).
- Familiar with workflow tools like Airflow.
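As an illustration of the "custom tools" item above, here is a minimal Java sketch of a Kafka consumer that flags empty payloads as data-quality issues. The broker address, topic name, and the specific check are assumptions for the example; the posting does not prescribe any implementation.

```java
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.List;
import java.util.Properties;

public class QualityCheckConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("group.id", "quality-check");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events")); // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    // Flag null or blank payloads; a real tool would route these
                    // to a dead-letter topic or emit a metric instead of logging.
                    if (record.value() == null || record.value().isBlank()) {
                        System.err.printf("Bad record at %s-%d offset %d%n",
                                record.topic(), record.partition(), record.offset());
                    }
                }
            }
        }
    }
}
```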
Posted 1 month ago
5.0 - 8.0 years
20 - 35 Lacs
Chennai
Remote
We are seeking a skilled and motivated individual to join our team as a Real-Time Data Streaming Engineer. In this role, you will design, develop, and maintain real-time data streaming applications using Apache Kafka. You will collaborate with cross-functional teams to integrate Kafka-based solutions into existing systems and ensure smooth data streaming and processing. You will also monitor and optimize Kafka clusters for high availability, performance, and scalability, implement data pipelines and streaming processes that support business analytics and operational needs, and troubleshoot and resolve any issues that arise. Ideal candidates have a strong foundation in Apache Kafka and real-time data streaming, proficiency in Java, Scala, or Python, and a solid understanding of distributed systems and microservices architecture.

Responsibilities:
- Design, develop, and maintain real-time data streaming applications using Apache Kafka (a producer sketch follows below).
- Collaborate with cross-functional teams to integrate Kafka solutions into existing systems.
- Monitor and optimize Kafka clusters to ensure high availability and performance.
- Implement data pipelines and streaming processes to support business analytics and operations.
- Troubleshoot and resolve issues related to data streaming and processing.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience with Apache Kafka and real-time data streaming.
- Proficiency in programming languages such as Java, Scala, or Python.
- Familiarity with distributed systems and microservices architecture.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Understanding of SOA, object-oriented analysis and design, and client/server systems.
- Expert knowledge of REST, JSON, XML, SOAP, WSDL, RAML, and YAML.
- Hands-on experience in large-scale SOA design, development, and deployment.
- Experience with API management technology.
- Experience with continuous integration and continuous delivery (CI/CD) tools and processes.
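For the Kafka development this role describes, a minimal Java producer sketch is shown below. The broker address, topic, key, and payload are illustrative assumptions only.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class EventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker address
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("acks", "all"); // wait for all in-sync replicas: durability over latency

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical topic and payload; close() flushes any pending sends.
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", "order-123", "{\"status\":\"created\"}");
            producer.send(record, (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();
                } else {
                    System.out.printf("Sent to %s-%d at offset %d%n",
                            metadata.topic(), metadata.partition(), metadata.offset());
                }
            });
        }
    }
}
```

Setting acks=all is one common choice when postings stress high availability; lower settings trade delivery guarantees for throughput.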
Posted 2 months ago
4.0 - 8.0 years
10 - 20 Lacs
Pune, Delhi / NCR, Mumbai (All Areas)
Hybrid
Job Title: Data Engineer - Ingestion, Storage & Streaming (Confluent Kafka)

Job Summary:
As a Data Engineer specializing in ingestion, storage, and streaming, you will design, implement, and maintain robust, scalable, high-performance data pipelines for the efficient flow of data through our systems. You will work with Confluent Kafka to build real-time data streaming platforms, ensuring high availability and fault tolerance, and ensure that data is ingested, stored, and processed efficiently and in real time to provide immediate insights.

Key Responsibilities:

Kafka-Based Streaming Solutions:
- Design, implement, and manage scalable and fault-tolerant data streaming platforms using Confluent Kafka.
- Develop real-time data streaming applications to support business-critical processes.
- Implement Kafka producers and consumers for ingesting data from various sources.
- Handle message brokering, processing, and event streaming within the platform.

Ingestion & Data Integration:
- Build efficient data ingestion pipelines to bring real-time and batch data from various sources into Kafka.
- Ensure smooth data integration across Kafka topics and handle multi-source data feeds.
- Develop and optimize connectors for data ingestion from diverse systems (e.g., databases, external APIs, cloud storage).

Data Storage and Management:
- Manage and optimize data storage in conjunction with Kafka, including topics, partitions, retention policies, and data compression.
- Work with distributed storage technologies to store large volumes of structured and unstructured data, ensuring accessibility and compliance.
- Implement strategies for schema management, data versioning, and data governance.

Data Streaming & Processing:
- Leverage Kafka Streams and other stream-processing frameworks (e.g., Apache Flink, ksqlDB) to process real-time data and provide immediate analytics (see the sketch after this listing's requirements).
- Build and optimize data processing pipelines to transform, filter, aggregate, and enrich streaming data.

Monitoring, Optimization, and Security:
- Set up and manage monitoring tools to track the performance of Kafka clusters, ingestion, and streaming pipelines.
- Troubleshoot and resolve issues related to data flows, latency, and failures.
- Ensure data security and compliance by enforcing appropriate data access policies and encryption techniques.

Collaboration and Documentation:
- Collaborate with data scientists, analysts, and other engineers to align data systems with business objectives.
- Document streaming architecture, pipeline workflows, and data governance processes to ensure system reliability and scalability.
- Provide regular updates to stakeholders on streaming and ingestion pipeline performance and improvements.

Required Skills & Qualifications:

Experience:
- 3+ years of experience in data engineering, with a strong focus on Kafka, data streaming, ingestion, and storage solutions.
- Hands-on experience with Confluent Kafka, Kafka Streams, and related Kafka ecosystem tools.
- Experience with stream processing and real-time analytics frameworks (e.g., ksqlDB, Apache Flink).

Technical Skills:
- Expertise in Kafka Connect, Kafka Streams, and Kafka producer/consumer APIs.
- Proficient in data ingestion and integration techniques from diverse sources (databases, APIs, etc.).
- Strong knowledge of cloud data storage and distributed systems.
- Experience with programming languages like Java, Scala, or Python for Kafka integration and stream processing.
- Familiarity with tools such as Apache Spark, Flink, Hadoop, or other data processing frameworks.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.
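As a sketch of the Kafka Streams processing described under "Data Streaming & Processing", the following Java topology filters and transforms records between two topics. The topic names and the toUpperCase stand-in for "enrichment" are assumptions, not part of the posting.

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

import java.util.Properties;

public class FilterEnrichApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "filter-enrich");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // assumed broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> source = builder.stream("raw-events"); // hypothetical input topic
        source.filter((key, value) -> value != null && !value.isBlank()) // drop empty payloads
              .mapValues(String::toUpperCase)                            // placeholder transformation
              .to("clean-events");                                       // hypothetical output topic

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```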
Posted 2 months ago
6.0 - 10.0 years
15 - 18 Lacs
Hyderabad, Bengaluru
Work from Office
The client is looking for a strong Java candidate with the following skills. Spring Webflux and streaming knowledge is a must and is the key skill they are looking for (see the sketch below). The overall JD for the position:

Skill requirements:
- RDBMS - At least 1 year - Required
- CI/CD - 2-5 years - Required
- Cloud Computing - 2-5 years - Required
- Core Java - 5-10 years - Required
- Kubernetes - 2-5 years - Required
- Microservices - 2-5 years - Required
- MongoDB - At least 1 year - Nice to have
- NoSQL - At least 1 year - Nice to have
- Python - At least 1 year - Required
- Spring Boot - 5-10 years - Required
- Spring Data - 2-5 years - Required
- Spring Security - 2-5 years - Required
- Spring Webflux - At least 1 year - Required
- Stream processing - At least 1 year - Required
- Java 17 - 2-5 years - Required
- Apache Kafka - At least 1 year - Required
- Apache SOLR - At least 1 year - Required

Additional expectations:
- Expertise in solution design and development of large-scale enterprise applications.
- In-depth knowledge of integration patterns, integration technologies, and integration platforms.
- Experience with queuing technologies such as Kafka.
- Good hands-on experience designing and building cloud-ready applications.
- Good programming skills in Java, Python, etc.
- Proficiency with dev/build tools: git, maven, gradle.
- Experience with modern NoSQL, graph DB, or data streaming technologies is a plus.
- Good understanding of Agile software development methodology.

Location: Hyderabad, Mangalore, Bhubaneswar, Trivandrum.
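Since Spring Webflux with streaming is called out as the key skill, here is a minimal illustrative sketch of a reactive streaming endpoint. It assumes a Spring Boot WebFlux application; the path and payload are invented for the example.

```java
import java.time.Duration;

import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

import reactor.core.publisher.Flux;

@RestController
public class TickStreamController {

    // Emits one JSON event per second as a server-sent event stream;
    // WebFlux writes each element as it is produced rather than buffering a full response.
    @GetMapping(value = "/ticks", produces = MediaType.TEXT_EVENT_STREAM_VALUE)
    public Flux<String> ticks() {
        return Flux.interval(Duration.ofSeconds(1))
                   .map(tick -> "{\"tick\":" + tick + "}");
    }
}
```

In a real system the Flux would typically be backed by a Kafka consumer or a reactive repository rather than a timer.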
Posted 2 months ago
3.0 - 8.0 years
10 - 20 Lacs
Chennai
Remote
Position Title: Apache Flink Engineer
Open Positions: 3
Employment Type: Permanent, Full-Time
Location: Chennai
Experience: 3+ years
Skills required: Confluent Kafka or Kafka; Apache Spark or Apache Flink (a Flink sketch follows below)
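For context on the listed skills, a minimal Flink job in Java that reads from Kafka and filters out blank records might look like the sketch below. It assumes the Flink 1.15+ KafkaSource API; the broker address, topic, and group id are placeholders.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkKafkaJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092") // assumed broker address
                .setTopics("events")                   // hypothetical topic
                .setGroupId("flink-demo")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // Drop blank payloads and print the rest; a real job would
        // transform and sink the stream instead of printing it.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-source")
           .filter(value -> !value.isBlank())
           .print();

        env.execute("kafka-filter-print");
    }
}
```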
Posted 2 months ago
2 - 6 years
15 - 20 Lacs
Hyderabad
Remote
Responsibilities:
- Maintain and improve data pipeline performance.
- Build data products for batch and real-time business needs.
- Create and track metrics for analytics (a windowed-count sketch follows after this listing).
- Troubleshoot and fix data-related issues.
- Stay updated with new data engineering tools and trends.
- Optimize data pipelines for better performance and reliability.
- Ensure data quality and integrity.
- Work with cross-functional teams to integrate data solutions.
- Advise engineering teams on data practices.

Requirements:
- Hands-on experience with data streaming technologies (Kafka, Flink).
- Strong expertise in data warehousing and building scalable solutions.
- Proficient in NoSQL (e.g., HBase) and distributed storage systems like HDFS.
- Experience with real-time and batch data processing pipelines.
- Familiar with cloud platforms (AWS, GCP, Azure).

Perks and Benefits:
- Medical insurance
- Internet reimbursement
- Flexible working hours
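As one illustration of "create and track metrics for analytics", the sketch below counts events per type over 10-second processing-time windows with Flink. The inline elements stand in for a real Kafka source, and all names are invented for the example.

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class EventCountJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // In a real pipeline the source would be Kafka; fromElements keeps the sketch self-contained.
        env.fromElements("click", "view", "click", "click")
           .map(event -> Tuple2.of(event, 1))
           .returns(Types.TUPLE(Types.STRING, Types.INT)) // lambdas erase tuple type info
           .keyBy(t -> t.f0)                              // key by event type
           .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
           .sum(1)                                        // per-type count within each window
           .print();

        env.execute("event-count");
    }
}
```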
Posted 2 months ago