
82 Kafka Streams Jobs - Page 2

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop understanding of the process/product for team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Kafka Integration. Experience: 5-8 Years.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Role Purpose: The purpose of the role is to create exceptional integration architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Do:
1. Define integration architecture for new deals/major change requests in existing deals
   a. Create an enterprise-wide integration architecture that ensures systems are seamlessly integrated while being scalable, reliable, and manageable.
   b. Provide solutioning for digital integration for RFPs received from clients and ensure overall design assurance:
      i. Analyse applications, exchange points, data formats, connectivity requirements, technology environment, enterprise specifics, and client requirements to set an integration solution design framework/architecture
      ii. Provide technical leadership to the design, development, and implementation of integration solutions through thoughtful use of modern technology
      iii. Define and understand current-state integration solutions and identify improvements, options, and trade-offs to define target-state solutions
      iv. Clearly articulate, document, and use integration patterns, best practices, and processes
      v. Evaluate and recommend products and solutions to integrate with the overall technology ecosystem
      vi. Work closely with various IT groups to transition tasks, ensure performance, and manage issues through to resolution
      vii. Document integration architecture covering logical, deployment, and data views, describing all artefacts in detail
      viii. Validate the integration solution/prototype from technology, cost-structure, and customer-differentiation points of view
      ix. Identify problem areas, perform root cause analysis of integration architectural designs and solutions, and provide relevant solutions
      x. Collaborate with sales, program/project, and consulting teams to reconcile solutions to architecture
      xi. Track industry integration trends and relate these to planning current and future IT needs
   c. Provide technical and strategic input during the project planning phase in the form of technical architectural designs and recommendations.
   d. Collaborate with all relevant parties to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture.
   e. Identify implementation risks and potential impacts.
2. Enable delivery teams by providing optimal delivery solutions/frameworks
   a. Build and maintain relationships with technical leaders, product owners, peer architects, and other stakeholders to become a trusted advisor
   b. Develop and establish relevant integration metrics (KPI/SLA) to drive results
   c. Identify risks related to integration and prepare a risk mitigation plan
   d. Ensure quality assurance of integration architecture or design decisions and provide technical mitigation support to the delivery teams
   e. Lead the development and maintenance of the integration framework and related artefacts
   f. Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams
   g. Ensure integration architecture principles, patterns, and standards are consistently applied to all projects
   h. Ensure optimal client engagement:
      i. Support the pre-sales team while presenting the entire solution design and its principles to the client
      ii. Coordinate with client teams to ensure all requirements are met and create an effective integration solution
      iii. Demonstrate thought leadership with strong technical capability in front of the client to win confidence and act as a trusted advisor
3. Competency building and branding
   a. Ensure completion of necessary trainings and certifications on integration middleware
   b. Develop Proofs of Concept (POCs), case studies, demos, etc. for new growth areas and solve new customer problems based on market and customer research
   c. Develop and present Wipro's point of view on digital integration by writing white papers, blogs, etc.
   d. Help attain market recognition through analyst rankings, client testimonials, and partner credits
   e. Be the voice of Wipro's thought leadership by speaking in forums (internal and external)
   f. Mentor developers, designers, and junior architects in the project for their further career development and enhancement
   g. Contribute to the integration practice by conducting selection interviews, etc.
4. Team management
   a. Resourcing:
      i. Anticipate new talent requirements as per market/industry trends or client requirements
      ii. Support hiring adequate and right resources for the team by conducting interviews
   b. Talent management:
      i. Ensure adequate onboarding and training for team members to enhance capability and effectiveness
   c. Performance management:
      i. Provide inputs to the project manager for setting appraisal objectives for the team, conduct timely performance reviews, and provide constructive feedback to own direct reports (if present)

Deliver:
No. | Performance Parameter | Measure
1 | Support sales team to create wins | % of proposals with Quality Index >7, timely support of proposals, identifying opportunities/leads to sell services within/outside the account (lead generation), no. of proposals led
2 | Delivery responsibility in projects/programs and accounts | (a) Solution acceptance of integration architecture (from client and/or internal Wipro architecture leadership), and (b) effective implementation of the integration approach/solution component by way of sufficient integration design, methods, guidelines, and tech know-how of the team
3 | Delivery support | CSAT, delivery as per cost, quality and timelines; identify and develop reusable components; recommend tools for reuse and automation for improved productivity and reduced cycle times
4 | Capability development | % trainings and certifications completed, increase in ACE certifications, thought leadership content developed (white papers, Wipro PoVs)

Mandatory Skills: Kafka Integration. Experience: 8-10 Years.

Posted 3 weeks ago

Apply

10.0 - 20.0 years

30 - 40 Lacs

Hyderabad

Work from Office

Job Description: Kafka/Integration Architect

Position brief: The Kafka/Integration Architect is responsible for designing, implementing, and managing Kafka-based streaming data pipelines and messaging solutions. This role involves configuring, deploying, and monitoring Kafka clusters to ensure the high availability and scalability of data streaming services. The Kafka Architect collaborates with cross-functional teams to integrate Kafka into various applications and ensures optimal performance and reliability of the data infrastructure. Kafka/Integration Architects play a critical role in driving data-driven decision-making and enabling real-time analytics, contributing directly to the company's agility, operational efficiency, and ability to respond quickly to market changes. Their work supports key business initiatives by ensuring that data flows seamlessly across the organization, empowering teams with timely insights and enhancing the customer experience.

Location: Hyderabad

Primary Role & Responsibilities:
- Design, implement, and manage Kafka-based data pipelines and messaging solutions to support critical business operations and enable real-time data processing.
- Configure, deploy, and maintain Kafka clusters, ensuring high availability and scalability to maximize uptime and support business growth.
- Monitor Kafka performance and troubleshoot issues to minimize downtime and ensure uninterrupted data flow, enhancing decision-making and operational efficiency.
- Collaborate with development teams to integrate Kafka into applications and services.
- Develop and maintain Kafka connectors such as JDBC, MongoDB, and S3 connectors, along with topics and schemas, to streamline data ingestion from databases, NoSQL data stores, and cloud storage, enabling faster data insights.
- Implement security measures to protect Kafka clusters and data streams, safeguarding sensitive information and maintaining regulatory compliance.
- Optimize Kafka configurations for performance, reliability, and scalability.
- Automate Kafka cluster operations using infrastructure-as-code tools like Terraform or Ansible to increase operational efficiency and reduce manual overhead.
- Provide technical support and guidance on Kafka best practices to development and operations teams, enhancing their ability to deliver reliable, high-performance applications.
- Maintain documentation of Kafka environments, configurations, and processes to ensure knowledge transfer, compliance, and smooth team collaboration.
- Stay updated with the latest Kafka features, updates, and industry best practices to continuously improve data infrastructure and stay ahead of industry trends.

Required Soft Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration skills.
- Ability to translate business requirements into technical solutions.

Working Experience and Qualification:
- Education: Bachelor's or master's degree in computer science, information technology, or a related field.
- Experience: Proven experience of 8-10 years as a Kafka Architect or in a similar role.
- Skills: Strong knowledge of Kafka architecture, including brokers, topics, partitions, and replicas. Experience with Kafka security, including SSL, SASL, and ACLs. Proficiency in configuring, deploying, and managing Kafka clusters in cloud and on-premises environments. Experience with Kafka stream processing using tools like Kafka Streams, KSQL, or Apache Flink. Solid understanding of distributed systems, data streaming, and messaging patterns. Proficiency in Java, Scala, or Python for Kafka-related development tasks. Familiarity with DevOps practices, including CI/CD pipelines, monitoring, and logging. Experience with tools like Zookeeper, Schema Registry, and Kafka Connect. Strong problem-solving skills and the ability to troubleshoot complex issues in a distributed environment. Experience with cloud platforms like AWS, Azure, or GCP.

Preferred Skills (Optional):
- Kafka certification or related credentials, such as: Confluent Certified Administrator for Apache Kafka (CCAAK), Cloudera Certified Administrator for Apache Kafka (CCA-131), AWS Certified Data Analytics - Specialty (with a focus on streaming data solutions).
- Knowledge of containerization technologies like Docker and Kubernetes.
- Familiarity with other messaging systems like RabbitMQ or Apache Pulsar.
- Experience with data serialization formats like Avro, Protobuf, or JSON.

Company Profile: WAISL is an ISO 9001:2015, ISO 20000-1:2018, ISO 22301:2019 certified, and CMMI Level 3 appraised digital transformation partner for businesses across industries with a core focus on aviation and related adjacencies. We transform airports and related ecosystems through digital interventions with a strong service excellence culture. As a leader in our chosen space, we deliver world-class services focused on airports and their related domains, enabled through outcome-focused, next-gen digital/technology solutions. At present, WAISL is the primary technology solutions partner for Indira Gandhi International Airport, Delhi; Rajiv Gandhi International Airport, Hyderabad; Manohar International Airport, Goa; Kannur International Airport, Kerala; and Kuwait International Airport, and we expect to soon provide similar services for other airports in India and globally. WAISL, as a digital transformation partner, brings proven credibility in managing and servicing 135+ Mn passengers and 80+ airlines, with core integration, deployment, and real-time management experience of 2000+ applications, vendor-agnostically, in highly complex technology-converging ecosystems. This excellence in managed services delivered by WAISL has enabled its customer airports to be rated amongst the best-in-class service providers by Skytrax and ACI awards, and to win many innovation and excellence awards.
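As an illustration of the security expectations above (SSL/SASL-protected clusters), here is a minimal Java sketch of a producer configured for SASL_SSL. The broker address, topic name, credentials, and truststore path are placeholder assumptions, not details from the listing.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.config.SaslConfigs;
import org.apache.kafka.common.config.SslConfigs;
import org.apache.kafka.common.serialization.StringSerializer;

public class SecureProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Placeholder endpoints and credentials -- replace with real cluster values.
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9093");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Encryption in transit plus SASL authentication, as the role calls for.
        props.put("security.protocol", "SASL_SSL");
        props.put(SaslConfigs.SASL_MECHANISM, "SCRAM-SHA-512");
        props.put(SaslConfigs.SASL_JAAS_CONFIG,
                "org.apache.kafka.common.security.scram.ScramLoginModule required "
                        + "username=\"svc-pipeline\" password=\"<secret>\";");
        props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "/etc/kafka/secrets/truststore.jks");
        props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "<truststore-password>");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Hypothetical topic and payload, only to show the send path end to end.
            producer.send(new ProducerRecord<>("flight-events", "flight-123", "{\"status\":\"boarding\"}"));
        }
    }
}
```

In practice, ACLs would be granted to the SASL principal on the broker side; the client only needs the credentials and trust material shown here.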

Posted 3 weeks ago

Apply

7.0 - 8.0 years

13 - 18 Lacs

Hyderabad

Work from Office

Primary Skills: Data Engineer with Python, Kafka Streaming & Spark
Years of experience: 6 to 10 Years

Job Description
Key Responsibilities:
• Develop real-time data streaming applications using Apache Kafka and Kafka Streams.
• Build and optimize large-scale batch and stream processing pipelines with Apache Spark.
• Containerize applications and manage deployments using OpenShift and Kubernetes.
• Collaborate with DevOps teams to ensure CI/CD pipelines are robust and scalable.
• Write unit tests and conduct code reviews to maintain code quality and reliability.
• Work closely with Product and Data Engineering teams to understand requirements and translate them into technical solutions.
• Troubleshoot and debug production issues across multiple environments.

Required qualifications to be successful in this role:
• Strong programming skills in Java/Python.
• Hands-on experience with Apache Kafka, Kafka Streams, and event-driven architecture.
• Solid knowledge of Apache Spark (batch and streaming).
• Experience with OpenShift, Kubernetes, and container orchestration.
• Familiarity with microservices architecture, RESTful APIs, and distributed systems.
• Experience with build tools such as Maven or Gradle.
• Familiar with Git, Jenkins, CI/CD pipelines, and Agile development practices.
• Excellent problem-solving skills and ability to work in a fast-paced environment.

Education & Experience:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• Minimum 6 years of experience in backend development with Java and related technologies.

Preferred Skills (Nice to Have):
• Knowledge of cloud platforms like AWS, Azure, or GCP.
• Understanding of security best practices in cloud-native environments.
• Familiarity with SQL/NoSQL databases (e.g., PostgreSQL, Cassandra, MongoDB).
• Experience with Scala or Python for Spark jobs is a plus.

Location: Hyderabad only
Notice period: Immediate to 30 days
Shift: 1 PM to 10 PM; initial 8 weeks WFO, later hybrid
No. of positions: 8
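For orientation on the Kafka Streams work described above, here is a minimal Kafka Streams topology sketch in Java; the topic names and the error-filtering rule are illustrative assumptions, not part of the listing.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class OrderFilterSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-filter-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Read raw events, keep only records flagged as errors, and route them to a side topic.
        KStream<String, String> events = builder.stream("raw-events");
        events.filter((key, value) -> value != null && value.contains("\"level\":\"ERROR\""))
              .to("error-events");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```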

Posted 4 weeks ago

Apply

5.0 - 10.0 years

6 - 15 Lacs

Bengaluru

Work from Office

Greetings! If you're interested, please apply via the link below:
https://bloomenergy.wd1.myworkdayjobs.com/BloomEnergyCareers/job/Bangalore-Karnataka/Staff-Engineer---Streaming-Analytics_JR-19447

About the team: Our team at Bloom Energy embraces the unprecedented opportunity to change the way companies utilize energy. Our technology empowers businesses and communities to responsibly take charge of their energy. Our energy platform has three key value propositions: resiliency, sustainability, and predictability. We provide infrastructure that is flexible for the evolving net-zero ecosystem. We have deployed more than 30,000 fuel cell modules since our first commercial shipments in 2009, sending energy platforms to data centers, hospitals, manufacturing facilities, biotechnology facilities, major retail stores, financial institutions, telecom facilities, utilities, and other critical infrastructure customers around the world. Our mission is to make clean, reliable energy affordable globally. We never stop striving to improve our technology, to expand and improve our company performance, and to develop and support the many talented employees that serve our mission!

Role & responsibilities:
Assist in developing distributed learning algorithms.
Build real-time analytics on cloud and edge devices.
Develop scalable data pipelines and analytics tools.
Solve challenging data and architectural problems using cutting-edge technology.
Collaborate cross-functionally with data scientists, data engineering, and firmware controls teams.

Skills and Experience:
Strong Java/Scala programming and debugging ability and a clear understanding of design patterns; Python is a bonus.
Understanding of Kafka/Spark/Flink/Hadoop/HBase internals (hands-on experience in one or more preferred).
Experience implementing data wrangling, transformation, and processing solutions, with demonstrated experience working with large datasets.
Know-how of cloud computing platforms like AWS/GCP/Azure is beneficial.
Exposure to data lakes and data warehousing concepts, SQL, and NoSQL databases.
Experience with REST APIs and gRPC is good to have.
Ability to adapt quickly to new technologies, concepts, approaches, and environments.
Problem-solving and analytical skills.
A learning attitude and improvement mindset.

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 18 Lacs

Pune, Chennai

Work from Office

Key Responsibilities:
Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
Collaborate with the infra team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions.
Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
Hands-on experience with IBM Analytics.
Solid understanding of core banking systems, transactional databases, and financial data flows.
Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
Strong experience in event-driven architectures, microservices, and API integrations.
Familiarity with security protocols, compliance, and data governance in banking environments.
Excellent problem-solving, leadership, and stakeholder communication skills.
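To make the CDC integration concrete, below is a hedged sketch of registering a Debezium MySQL source connector through the Kafka Connect REST API. The connector name, database coordinates, and Connect endpoint are hypothetical, and the property names follow Debezium 2.x conventions; this is a sketch, not a prescription from the listing.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RegisterCdcConnectorSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical connector definition: a Debezium MySQL source capturing a core-banking schema.
        String connectorJson = """
            {
              "name": "corebanking-accounts-cdc",
              "config": {
                "connector.class": "io.debezium.connector.mysql.MySqlConnector",
                "database.hostname": "corebanking-db.internal",
                "database.port": "3306",
                "database.user": "cdc_user",
                "database.password": "<secret>",
                "database.server.id": "184054",
                "topic.prefix": "corebanking",
                "table.include.list": "bank.accounts,bank.transactions",
                "schema.history.internal.kafka.bootstrap.servers": "broker1:9092",
                "schema.history.internal.kafka.topic": "schema-changes.corebanking"
              }
            }
            """;

        // POST the definition to the Kafka Connect REST API (default port 8083).
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://connect.internal:8083/connectors"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(connectorJson))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```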

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 17 Lacs

Noida, Gurugram, Greater Noida

Work from Office

5+ yrs in Python, Django
Microservices architecture and API development
Deploy via Kubernetes
Redis for performance tuning
Celery for distributed tasks
DBs: PostgreSQL, time-series
Redis, Celery, RabbitMQ/Kafka
Microservices architecture experience

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Design and develop Kafka Pipelines. Perform Unit testing of the code and prepare test plans as required. Analyze, design and develop programs in development environment. Support application & jobs in production environment for abends or issues.

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Dubai, Pune, Chennai

Hybrid

Job Title: Confluent CDC System Analyst

Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst/Tech Lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka, ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions.

Key Responsibilities:
Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
Collaborate with the infra team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions.
Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
Hands-on experience with IBM Analytics.
Solid understanding of core banking systems, transactional databases, and financial data flows.
Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
Strong experience in event-driven architectures, microservices, and API integrations.
Familiarity with security protocols, compliance, and data governance in banking environments.
Excellent problem-solving, leadership, and stakeholder communication skills.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office

This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities:
Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
Build and configure Kafka connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
Automate data pipeline deployment, monitoring, and maintenance tasks.
Stay up to date with the latest advancements in data streaming technologies and best practices.
Contribute to the development of data engineering standards and best practices within the organization.
Participate in code reviews and contribute to a collaborative and supportive team environment.
Work closely with other architects and tech leads in India and the US to create POCs and MVPs.
Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team:

Required:
Bachelor's degree or higher from a reputed university.
8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
Proficiency in developing Flink applications for stream processing and real-time analytics.
Strong understanding of data streaming concepts and architectures.
Extensive experience with Confluent Kafka, including Kafka brokers, producers, consumers, and Schema Registry.
Hands-on experience with ksqlDB for real-time data transformations and stream processing.
Experience with Kafka Connect and building custom connectors.
Extensive experience in implementing large-scale data ingestion and curation solutions.
Good hands-on experience with a big data technology stack on any cloud platform.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and as part of a team.

Good to have:
Experience in Google Cloud.
Healthcare industry experience.
Experience in Agile.
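As a rough illustration of the Flink work mentioned above, here is a minimal DataStream job that reads from a Kafka topic and applies a stand-in enrichment step. The topic names, consumer group, broker address, and claims-flavored naming are assumptions made for the sketch; a real job would join against reference data rather than tagging records.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class ClaimsEnrichmentSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical topic and group; the source reads raw claim events from Kafka.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("broker1:9092")
                .setTopics("claims-raw")
                .setGroupId("claims-enrichment")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        DataStream<String> claims =
                env.fromSource(source, WatermarkStrategy.noWatermarks(), "claims-raw-source");

        // Stand-in "enrichment": wrap each record; real enrichment would look up reference data.
        claims.map(json -> "{\"enriched\":true,\"payload\":" + json + "}")
              .print();

        env.execute("claims-enrichment-sketch");
    }
}
```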

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 19 Lacs

Noida

Work from Office

About the Role
We are looking for a Staff Engineer - Real-time Data Processing to design and develop highly scalable, low-latency data streaming platforms and processing engines. This role is ideal for engineers who enjoy building core systems and infrastructure that enable mission-critical analytics at scale. You'll work on solving some of the toughest data engineering challenges in healthcare.

A Day in the Life
Architect, build, and maintain a large-scale real-time data processing platform.
Collaborate with data scientists, product managers, and engineering teams to define system architecture and design.
Optimize systems for scalability, reliability, and low-latency performance.
Implement robust monitoring, alerting, and failover mechanisms to ensure high availability.
Evaluate and integrate open-source and third-party streaming frameworks.
Contribute to the overall engineering strategy and promote best practices for stream and event processing.
Mentor junior engineers and lead technical initiatives.

What You Need
8+ years of experience in backend or data engineering roles, with a strong focus on building real-time systems or platforms.
Hands-on experience with stream processing frameworks like Apache Flink, Apache Kafka Streams, or Apache Spark Streaming.
Proficiency in Java, Scala, Python, or Go for building high-performance services.
Strong understanding of distributed systems, event-driven architecture, and microservices.
Experience with Kafka, Pulsar, or other distributed messaging systems.
Working knowledge of containerization tools like Docker and orchestration tools like Kubernetes.
Proficiency in observability tools such as Prometheus, Grafana, and OpenTelemetry.
Experience with cloud-native architectures and services (AWS, GCP, or Azure).
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Here's What We Offer
Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
Pet-Friendly Office (*Noida office only): Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too.
Creche Facility for Children (*India offices): Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first.
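To ground the stream-processing requirement above, here is a small Kafka Streams sketch of a windowed aggregation: counting alerts per device over one-minute tumbling windows. The topic names, the window size, and the device-alert framing are illustrative assumptions, not details from the listing.

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class AlertRateSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "alert-rate-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        // Count alerts per device key over one-minute tumbling windows; emit counts downstream.
        builder.stream("device-alerts", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
               .count()
               .toStream()
               .map((windowedKey, count) -> KeyValue.pair(
                       windowedKey.key() + "@" + windowedKey.window().startTime(), String.valueOf(count)))
               .to("device-alert-rates", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```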

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 12 Lacs

Bengaluru

Work from Office

We are seeking an experienced Data Engineer to join our dynamic product development team. In this role, you will be responsible for designing, building, and optimizing data pipelines that ensure efficient data processing and insightful analytics. You will work collaboratively with cross-functional teams, including data scientists, software developers, and product managers, to transform raw data into actionable insights while adhering to best practices in data architecture, security, and scalability.

Role & responsibilities
* Design, build, and maintain scalable ETL processes to ingest, process, and store large datasets.
* Collaborate with cross-functional teams to integrate data from various sources, ensuring data consistency and quality.
* Leverage Microsoft Azure services for data storage, processing, and analytics, integrating with our CI/CD pipeline on Azure Repos.
* Continuously optimize data workflows for performance and scalability, identifying bottlenecks and implementing improvements.
* Deploy and monitor ML/GenAI models in production environments.
* Develop and enforce data quality standards and data validation checks, and ensure compliance with security and privacy policies.
* Work closely with backend developers (PHP/Node/Python) and DevOps teams to support seamless data operations and deployment.
* Stay current with industry trends and emerging technologies to continually enhance data strategies and methodologies.

Required Skills & Qualifications
* Minimum of 4+ years in data engineering or a related field.
* In-depth understanding of streaming technologies like Kafka and Spark Streaming.
* Strong proficiency in SQL, Python, and Spark SQL for data manipulation, data processing, and automation.
* Solid understanding of ETL/ELT frameworks, data pipeline design, data modelling, data warehousing, and data governance principles.
* In-depth knowledge of performance tuning/optimizing data processing jobs and debugging time-consuming jobs.
* Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Fabric, Delta Tables, and Unity Catalog.
* Deep understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and data warehousing solutions (e.g., Snowflake, Redshift, BigQuery).
* Good knowledge of Agile and SDLC/CI-CD practices and tools, with a good understanding of distributed systems.
* Proven ability to work effectively in agile/scrum teams, collaborating across disciplines.
* Excellent analytical, troubleshooting, and problem-solving skills, and attention to detail.

Preferred candidate profile
* Experience with NoSQL databases and big data processing frameworks, e.g., Apache Spark.
* Knowledge of data visualization and reporting tools.
* Strong understanding of data security, governance, and compliance best practices.
* Effective communication skills with an ability to translate technical concepts to non-technical stakeholders.
* Knowledge of AI-Ops and LLM data pipelines.

Why Join GenXAI?
* Innovative Environment: Work on transformative projects in a forward-thinking, collaborative setting.
* Career Growth: Opportunities for professional development and advancement within a rapidly growing company.
* Cutting-Edge Tools: Gain hands-on experience with industry-leading technologies and cloud platforms.
* Collaborative Culture: Join a diverse team where your expertise is valued, and your ideas make an impact.

Posted 1 month ago

Apply

6.0 - 8.0 years

27 - 30 Lacs

Pune, Ahmedabad, Chennai

Work from Office

Technical Skills (Must Have):
8+ years overall IT industry experience, with 5+ years in a solution or technical architect role using service and hosting solutions such as private/public cloud IaaS, PaaS, and SaaS platforms.
5+ years of hands-on development experience with event-driven architecture-based implementations.
Achieved one or more of the typical solution and technical architecture certifications, e.g. Microsoft/MS Azure Certification, TOGAF, AWS Cloud Certified, SAFe, PMI, SAP, etc.
Hands-on experience with:
o Claims-based authentication (SAML/OAuth/OIDC), MFA, JIT, and/or RBAC/Ping, etc.
o Architecting mission-critical technology components with DR capabilities.
o Multi-geography, multi-tier service design and management.
o Project financial management, solution plan development, and product cost estimation.
o Supporting peer teams and their responsibilities, such as infrastructure, operations, engineering, and info-security.
o Configuration management and automation tools such as Azure DevOps, Ansible, Puppet, Octopus, Chef, Salt, etc.
o Software development full-lifecycle methodologies, patterns, frameworks, libraries, and tools.
o Relational, graph, and/or unstructured data technologies such as SQL Server, Azure SQL, Cosmos, Azure Data Lake, HDInsight, Hadoop, Neo4j, etc.
o Data management and data governance technologies.
o Data movement and transformation technologies.
o AI and machine learning tools such as Azure ML, etc.
o Architecting mobile applications that are either independent applications or supplementary add-ons (to intranet or extranet).
o Cloud security controls including tenant isolation, encryption at rest, encryption in transit, key management, vulnerability assessments, application firewalls, SIEM, etc.
o Apache Kafka, Confluent Kafka, Kafka Streams, and Kafka Connect.
o Proficiency in NodeJS, Java, Scala, or Python languages.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:
Handle technical escalations through effective diagnosis and troubleshooting of client queries.
Manage and resolve technical roadblocks/escalations as per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner.
Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions.
Troubleshoot all client queries in a user-friendly, courteous, and professional manner.
Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business.
Organize ideas and effectively communicate oral messages appropriate to listeners and situations.
Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs.

Required skills:
Over 7 years of experience as a Full Stack Engineer.
Experience in selected programming languages (e.g. Python) and the Java/J2EE platform.
Experience in UI technologies: React/Next JS.
Experience in building REST APIs.
In-depth knowledge of relational databases (e.g. PostgreSQL, MySQL) and NoSQL databases (e.g. MongoDB).
Experience in Snowflake and Databricks.
Experience with cloud tech.
Experience with Kafka technologies, Azure, Kubernetes, Snowflake, GitHub, Copilot.
Experience in large-scale implementation of open-source technologies.
Generative AI, Large Language Models, and chatbot technologies.
Strong knowledge of data integration.
Strong data analytics experience with application enablement.
Strong experience in driving customer experience.
Familiarity with agile development.
Experience in healthcare clinical domains.

Deliver:
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team Management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, Technical Test performance

Mandatory Skills: Fullstack MERN. Experience: 5-8 Years.

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain Kafka-based data pipelines for real-time processing.
Implement Kafka producer and consumer applications for efficient data flow.
Optimize Kafka clusters for performance, scalability, and reliability.
Design and manage Grafana dashboards for monitoring Kafka metrics.
Integrate Grafana with Elasticsearch or other data sources.
Set up alerting mechanisms in Grafana for Kafka system health monitoring.
Collaborate with DevOps, data engineers, and software teams.
Ensure security and compliance in Kafka and Grafana implementations.

Requirements:
8+ years of experience in configuring Kafka, Elasticsearch, and Grafana.
Strong understanding of Apache Kafka architecture and Grafana visualization.
Proficiency in .NET or Python for Kafka development.
Experience with distributed systems and message-oriented middleware.
Knowledge of time-series databases and monitoring tools.
Familiarity with data serialization formats like JSON.
Expertise in Azure platforms and Kafka monitoring tools.
Good problem-solving and communication skills.

Mandate: Create Kafka dashboards; Python/.NET.
Note: Candidate must be an immediate joiner.

Posted 1 month ago

Apply

6.0 - 7.0 years

11 - 14 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Location: Remote / Pan India - Mumbai, Delhi/NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune
Notice Period: Immediate

iSource Services is hiring for one of its clients for the position of Java Kafka Developer. We are seeking a highly skilled and motivated Confluent Certified Developer for Apache Kafka to join our growing team. The ideal candidate will possess a deep understanding of Kafka architecture, development best practices, and the Confluent platform. You will be responsible for designing, developing, and maintaining scalable and reliable Kafka-based data pipelines and applications. Your expertise will be crucial in ensuring the efficient and robust flow of data across our organization.

Develop Kafka producers, consumers, and stream processing applications.
Implement Kafka Connect connectors and configure Kafka clusters.
Optimize Kafka performance and troubleshoot related issues.
Utilize Confluent tools like Schema Registry, Control Center, and ksqlDB.
Collaborate with cross-functional teams and ensure compliance with data policies.

Qualifications:
Bachelor's degree in Computer Science or a related field.
Confluent Certified Developer for Apache Kafka certification.
Strong programming skills in Java/Python.
In-depth Kafka architecture and Confluent platform experience.
Experience with cloud platforms and containerization (Docker, Kubernetes) is a plus.
Experience with data warehousing and data lake technologies.
Experience with CI/CD pipelines and DevOps practices.
Experience with Infrastructure as Code tools such as Terraform or CloudFormation.
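As a reference point for the producer/consumer work this listing describes, here is a minimal Java consumer loop with manual offset commits; the topic, group id, and broker address are placeholders chosen for the sketch.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PaymentsConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "payments-audit");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Disable auto-commit so offsets advance only after records are processed.
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("payments"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d key=%s%n",
                            record.partition(), record.offset(), record.key());
                }
                consumer.commitSync(); // commit only after the batch is processed
            }
        }
    }
}
```

Manual commits are one common at-least-once pattern; auto-commit trades that guarantee for simplicity.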

Posted 1 month ago

Apply

4.0 - 9.0 years

5 - 13 Lacs

Thane, Goregaon, Mumbai (All Areas)

Work from Office

Opening for a leading insurance company. **Looking for immediate joiners (up to 30 days notice)**

Key Responsibilities:

Kafka Infrastructure Management:
Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security.
Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, Zookeeper, and related components.
Perform capacity planning and scaling of Kafka clusters based on application needs and growth.

Data Pipeline Development:
Develop and optimize Kafka data pipelines to support real-time data streaming and processing.
Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources.
Implement and maintain schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf).

Security and Compliance:
Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).

Documentation and Support:
Create and maintain documentation for Kafka setup, configurations, and operational procedures.

Collaboration:
Provide technical support and guidance to application development teams regarding Kafka usage and best practices.
Collaborate with stakeholders to ensure alignment with business objectives.

Interested candidates, please share your resume at snehal@topgearconsultants.com
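Since the listing calls out Schema Registry and Avro serialization, here is a hedged Java sketch of a producer using Confluent's KafkaAvroSerializer with an inline Avro schema. The topic, schema, and registry URL are assumptions made for illustration, and the Confluent kafka-avro-serializer dependency is required on the classpath.

```java
import java.util.Properties;
import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AvroPolicyProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                org.apache.kafka.common.serialization.StringSerializer.class.getName());
        // Confluent's Avro serializer registers/looks up the schema in Schema Registry.
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                io.confluent.kafka.serializers.KafkaAvroSerializer.class.getName());
        props.put("schema.registry.url", "http://schema-registry.internal:8081");

        // Hypothetical record schema for a policy event.
        Schema schema = new Schema.Parser().parse("""
            {"type":"record","name":"PolicyEvent","fields":[
              {"name":"policyId","type":"string"},
              {"name":"status","type":"string"}]}
            """);

        GenericRecord event = new GenericData.Record(schema);
        event.put("policyId", "POL-1001");
        event.put("status", "ISSUED");

        try (KafkaProducer<String, Object> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("policy-events", "POL-1001", event));
        }
    }
}
```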

Posted 1 month ago

Apply

8.0 - 13.0 years

5 - 12 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities
Looking for 8+ years of experience as a Kafka Administrator.

Mandatory skills:
ksqlDB developers who have hands-on experience writing ksqlDB queries.
Kafka Connect development experience.
Kafka client stream applications development.
Confluent Terraform Provider.

Skills:
8+ years of experience across development and support projects.
3+ years of hands-on experience in Kafka.
Understanding of event streaming patterns and when to apply them.
Designing, building, and operating in-production big data, stream processing, and/or enterprise data integration solutions using Apache Kafka.
Working with different database solutions for data extraction, updates, and insertions.
Identity and Access Management, including relevant protocols and standards such as OAuth, OIDC, SAML, LDAP, etc.
Knowledge of networking protocols such as TCP, HTTP/2, WebSockets, etc.

The candidate must work in Australia timings [AWST]. The interview mode will be face to face.
Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in
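For context on the ksqlDB requirement, here is a minimal sketch that submits ksqlDB statements to the server's /ksql REST endpoint from Java. The stream/table definitions, topic, and server address are illustrative assumptions; in practice the same statements are often run interactively from the ksqlDB CLI instead.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KsqlStreamSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stream over an existing "orders" topic, plus a derived aggregate table.
        String body = """
            {
              "ksql": "CREATE STREAM orders_stream (order_id VARCHAR KEY, amount DOUBLE, region VARCHAR) WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON'); CREATE TABLE orders_by_region AS SELECT region, COUNT(*) AS order_count FROM orders_stream GROUP BY region EMIT CHANGES;",
              "streamsProperties": {}
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://ksqldb.internal:8088/ksql"))
                .header("Content-Type", "application/vnd.ksql.v1+json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```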

Posted 1 month ago

Apply

3.0 - 7.0 years

9 - 14 Lacs

Gurugram

Remote

Kafka/MSK, Linux
In-depth understanding of Kafka broker configurations, Zookeeper, and connectors
Understanding of Kafka topic design and creation
Good knowledge of replication and high availability for Kafka systems
ElasticSearch/OpenSearch

Perks and benefits: PF, annual bonus, health insurance

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Pune

Work from Office

Responsibilities:
* Manage Kafka clusters, brokers & messaging architecture
* Collaborate with development teams on data pipelines
* Monitor Kafka performance & troubleshoot issues

Benefits: Health insurance, provident fund, annual bonus

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Hyderabad, Pune, Gurugram

Work from Office

GSPANN is looking for an experienced Kafka Developer with strong Java skills to join our growing team. If you have hands-on experience with Kafka components and are ready to work in a dynamic, client-facing environment, we'd love to hear from you!

Key Responsibilities:
Develop and maintain Kafka Producers, Consumers, Connectors, KStream, and KTable applications.
Collaborate with stakeholders to gather requirements and deliver customized solutions.
Troubleshoot production issues and participate in Agile ceremonies.
Optimize system performance and support deployments.
Mentor junior team members and ensure coding best practices.

Required Skills:
4+ years of experience as a Kafka Developer
Proficiency in Java
Strong debugging skills (Splunk experience is a plus)
Experience in client-facing projects
Familiarity with Agile and DevOps practices

Good to Have:
Knowledge of Google Cloud Platform (Dataflow, BigQuery, Kubernetes)
Experience with production support and monitoring tools

Ready to join a collaborative and innovative team? Send your CV to heena.ruchwani@gspann.com
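To illustrate the KStream/KTable work mentioned above, here is a minimal Kafka Streams sketch joining an order stream against a customer table. The topic names and the simple JSON-concatenation "enrichment" are assumptions for the sketch, not details from the listing.

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Produced;

public class OrderCustomerJoinSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "order-customer-join-sketch");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        // Orders arrive as a stream; customer profiles are a changelog-backed table keyed by customer id.
        KStream<String, String> orders =
                builder.stream("orders", Consumed.with(Serdes.String(), Serdes.String()));
        KTable<String, String> customers =
                builder.table("customers", Consumed.with(Serdes.String(), Serdes.String()));

        // Enrich each order with the latest customer record sharing the same key.
        orders.join(customers, (order, customer) -> "{\"order\":" + order + ",\"customer\":" + customer + "}")
              .to("orders-enriched", Produced.with(Serdes.String(), Serdes.String()));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```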

Posted 1 month ago

Apply

10.0 - 16.0 years

15 - 30 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Gracenote, a Nielsen company, is dedicated to connecting audiences to the entertainment they love, powering a better media future for all people. Gracenote is the content data business unit of Nielsen that powers innovative entertainment experiences for the world's leading media companies. Our entertainment metadata and connected IDs deliver advanced content navigation and discovery to connect consumers to the content they love and help them discover new content. Gracenote's industry-leading datasets cover TV programs, movies, sports, music, and podcasts in 80 countries and 35 languages. Common identifiers are universally adopted by the world's leading media companies to deliver powerful cross-media entertainment experiences. Machine-driven, human-validated, best-in-class data and images fuel new search and discovery experiences across every screen.

Gracenote's Data Organization is a dynamic and innovative group that is essential in delivering business outcomes through data, insights, and predictive & prescriptive analytics. It is an extremely motivated team that values creativity and experimentation through continuous learning in an agile and collaborative manner. From designing, developing, and maintaining data architecture that satisfies our business goals to managing data governance and region-specific regulations, the data team oversees the whole data lifecycle.

Role Overview: We are seeking an experienced Senior Data Engineer with 10-12 years of experience to join our video engineering team at Gracenote, a NielsenIQ company. In this role, you will design, build, and maintain our data processing systems and pipelines. You will work closely with product managers, architects, analysts, and other stakeholders to ensure data is accessible, reliable, and optimized for business, analytical, and operational needs.

Key Responsibilities:
Design, develop, and maintain scalable data pipelines and ETL processes.
Architect and implement data warehousing solutions and data lakes.
Optimize data flow and collection for cross-functional teams.
Build infrastructure required for optimal extraction, transformation, and loading of data.
Ensure data quality, reliability, and integrity across all data systems.
Collaborate with data scientists and analysts to help implement models and algorithms.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, etc.
Create and maintain comprehensive technical documentation.
Mentor junior engineers and provide technical leadership.
Evaluate and integrate new data management technologies and tools.
Implement optimization strategies to enable and maintain sub-second latency.
Oversee data infrastructure to ensure robust deployment and monitoring of pipelines and processes.
Stay ahead of emerging trends in data and cloud, integrating new research into practical applications.
Mentor and grow a team of junior data engineers.

Required Qualifications and Skills:
Expert-level proficiency in Python, SQL, and big data tools (Spark, Kafka, Airflow).
Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
Expert knowledge of SQL and experience with relational databases (e.g., PostgreSQL, Redshift, TiDB, MySQL, Oracle, Teradata).
Extensive experience with big data technologies (e.g., Hadoop, Spark, Hive, Flink).
Proficiency in at least one programming language such as Python, Java, or Scala.
Experience with data modeling, data warehousing, and building ETL pipelines.
Strong knowledge of data pipeline and workflow management tools (e.g., Airflow, Luigi, NiFi).
Experience with cloud platforms (AWS, Azure, or GCP) and their data services; AWS preferred.
Hands-on experience building streaming pipelines with Flink, Kafka, and Kinesis; Flink preferred.
Understanding of data governance and data security principles.
Experience with version control systems (e.g., Git) and CI/CD practices.
Proven leadership skills in grooming data engineering teams.

Preferred Skills:
Experience with containerization and orchestration tools (Docker, Kubernetes).
Basic knowledge of machine learning workflows and MLOps.
Experience with NoSQL databases (MongoDB, Cassandra, etc.).
Familiarity with data visualization tools (Tableau, Power BI, etc.).
Experience with real-time data processing.
Knowledge of data governance frameworks and compliance requirements (GDPR, CCPA, etc.).
Experience with infrastructure-as-code tools (Terraform, CloudFormation).

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses.
Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.

Required Candidate Profile:
Data engineering experience with large-scale systems.
Expert proficiency in Java for data-intensive applications.
Hands-on experience with lakehouse architectures, stream processing, and event streaming.

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Senior Software Engineer - DevOps
Bangalore, India

Who we are:
INVIDI Technologies Corporation is the world's leading developer of software transforming television all over the world. Our two-time Emmy Award-winning technology is widely deployed by cable, satellite, and telco operators. We provide a device-agnostic solution delivering ads to the right household no matter what program or network you're watching, how you're watching, or whether you're in front of your TV, laptop, cell phone or any other device. INVIDI created the multi-billion-dollar addressable television business that today is growing rapidly globally. INVIDI is right at the heart of the very exciting and fast-paced world of commercial television; companies benefiting from our software include DirecTV and Dish Network, networks such as CBS/Viacom and A&E, advertising agencies such as Ogilvy and Publicis, and advertisers such as Chevrolet and Verizon. INVIDI's world-class technology solutions are known for their flexibility and adaptability. These traits allow INVIDI partners to transform their video content delivery network, revamping legacy systems without significant capital or hardware investments. Our clients count on us to provide superior capabilities, excellent service, and ease of use. The goal of developing a unified video ad tech platform is a big one, and the right DevOps Engineer --like you-- flourishes in INVIDI's creative, inspiring, and supportive culture. It is a demanding, high-energy, and fast-paced environment. INVIDI's developers are self-motivated quick studies, can-do individuals who embrace the challenge of solving difficult and complex problems.

About the role:
We are a modern agile product organization looking for an excellent DevOps engineer who can support and offload a remote product development team. Our platform handles tens of thousands of requests per second with sub-second response times across the globe. We serve ads to some of the biggest live events in the world, providing reports and forecasts based on billions of log rows. These are some of the complex challenges that make development and operational work at INVIDI interesting and rewarding. To accomplish this, we use the best frameworks and tools out there or, when they are not good enough, we write our own. Most of the code we write is Java or Kotlin on top of Dropwizard, but every problem is unique, and we always evaluate the best tools for the job. We work with technologies such as Kafka, Google Cloud (GKE, Pub/Sub), BigTable, Terraform, Jsonnet, and a lot more. The position will report directly to the Technical Manager of Software Development and will be based in our Chennai, India office.

Key responsibilities:
You will maintain, deploy, and operate backend services in Java and Kotlin that are scalable, durable, and performant.
You will proactively evolve deployment pipelines and artifact generation.
You will have a commitment to Kubernetes and infrastructure maintenance.
You will troubleshoot incoming issues from support and clients, fixing and resolving what you can.
You will collaborate closely with peers and product owners in your team.
You will help other team members grow as engineers through code review, pairing, and mentoring.

Our Requirements:
You are an outstanding DevOps Engineer who loves to work with distributed high-volume systems. You care about the craft and cherish the opportunity to work with smart, supportive, and highly motivated colleagues. You are curious; you like to learn new things, mentor, and share knowledge with team members. Like us, you strive to handle complexity by keeping things simple and elegant. As a part of the DevOps team, you will be on call for the services and clusters that the team owns. You are on call for one week, approximately once or twice per month. While on call, you are required to be reachable by telephone and able to act upon alarms using your laptop.

Skills and qualifications:
Master's degree in computer science, or equivalent
4+ years of experience in the computer science industry
Strong development and troubleshooting skill sets
Ability to support a SaaS environment to meet service objectives
Ability to collaborate effectively and work well in an Agile environment
Excellent oral and written communication skills in English
Ability to quickly learn new technologies and work in a fast-paced environment

Highly Preferred:
Experience building service applications with Dropwizard/Spring Boot
Experience with cloud services such as GCP and/or AWS
Experience with Infrastructure as Code tools such as Terraform
Experience in a Linux environment
Experience working with technologies such as SQL, Kafka, Kafka Streams
Experience with Docker
Experience with SCM and CI/CD tools such as Git and Bitbucket
Experience with build tools such as Gradle or Maven
Experience writing Kubernetes deployment manifests and troubleshooting cluster- and application-level issues

Physical Requirements:
INVIDI is a conscious, clean, well-organized, and supportive office environment. Prolonged periods of sitting at a desk and working on a computer are normal.

Note: Final candidates must successfully pass INVIDI's background screening requirements. Final candidates must be legally authorized to work in India. INVIDI has reopened its offices on a flexible hybrid model.

Ready to join our team? Apply today!

Posted 1 month ago

Apply

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Role: Data Engineer
Exp: 5 to 8 Years
Location: Bangalore, Noida, and Hyderabad (Hybrid; 2 days a week in office is a must)
NP: Immediate to 15 Days (immediate joiners preferred)

Note: The candidate must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)
Experience: 5 to 8 years

Role Overview: We are looking for a highly skilled senior software engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
Architect scalable data streaming and processing solutions to support healthcare data workflows.
Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
Experience with Azure Databricks (or willingness to learn and adopt it quickly).
Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in

Posted 1 month ago

Apply