10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
You should have at least 10 years of experience in the field, with a strong background in Kafka Streams / KSQL architecture and the associated clustering model. Your expertise should include solid Java programming skills, along with best practices in development, automation testing, and streaming APIs. Practical experience in scaling Kafka, KStreams, and Connector infrastructures is required, as is the ability to optimize the Kafka ecosystem for specific use cases and workloads.

As a developer, you should have hands-on experience building producer and consumer applications with the Kafka API and proficiency in implementing KStreams components, including developing KStreams pipelines and deploying KStreams clusters. Experience writing KSQL queries and an understanding of when to use KSQL versus KStreams are essential. Strong knowledge of the Kafka Connect framework is necessary, including experience with connector types such as HTTP REST proxy, JMS, File, SFTP, JDBC, Splunk, and Salesforce, and the ability to support wire-format translations. Familiarity with connectors available from Confluent and the community, hands-on experience designing, writing, and operationalizing new Kafka connectors using the framework, and knowledge of Schema Registry are all pluses.

Nice-to-have qualities include providing thought leadership for the team, excellent verbal and written communication skills, being a good team player, and a willingness to go the extra mile to support the team.

A four-year college degree in Science, Engineering, Technology, Business, or Humanities is required; candidates with a Master's degree and/or certifications in the relevant technologies are preferred. The working mode is hybrid, full-time (3 days per week in the office), and the notice period is a maximum of 30 days.
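For illustration only, a minimal sketch of the producer side of the Kafka client API this role calls for; the broker address, topic name, key, and payload are placeholders, not details from the posting:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker address
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        props.put("acks", "all"); // wait for full in-sync-replica acknowledgement

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keyed records preserve per-key ordering within a partition.
            producer.send(new ProducerRecord<>("orders", "order-42", "{\"status\":\"CREATED\"}"));
        }
    }
}
```

The consumer side mirrors this pattern with a KafkaConsumer, a group.id, and a poll loop.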
Posted 3 days ago
12.0 - 14.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description: Global Technology Partners is a premier partner for digital transformation, with a diverse team of software engineering experts in the US and India. They combine strategic thinking, innovative design, and robust engineering to deliver exceptional results for their clients.

Job Summary: We are seeking a highly experienced and visionary Principal/Lead Java Architect to play a pivotal role in designing and evolving our next-generation, high-performance, and scalable event-driven platforms. This role demands deep expertise in Java, extensive experience with Kafka as a core component of event streaming architectures, and a proven track record of leading architectural design and implementation across complex enterprise systems. You will be instrumental in defining technical strategy, establishing best practices, and mentoring engineering teams to deliver robust and resilient solutions.

Key Responsibilities:

Architectural Leadership:
- Lead the design, development, and evolution of highly scalable, resilient, and performant event-driven architectures using Java and Kafka.
- Define architectural patterns, principles, and standards for event sourcing, CQRS, stream processing, and microservices integration with Kafka.
- Drive the technical vision and strategy for our core platforms, ensuring alignment with business objectives and the long-term technology roadmap.
- Conduct architectural reviews, identify technical debt, and propose solutions for continuous improvement.
- Stay abreast of emerging technologies and industry trends, evaluating their applicability and recommending adoption where appropriate.

Design & Development:
- Design and implement robust, high-throughput Kafka topics, consumers, producers, and streams (Kafka Streams/KSQL).
- Architect and design Java-based microservices that integrate effectively with Kafka for event communication and data synchronization.
- Lead the selection and integration of appropriate technologies and frameworks for event processing, data serialization, and API development.
- Develop proof-of-concepts (POCs) and prototypes to validate architectural choices and demonstrate technical feasibility.
- Contribute hands-on to critical-path development when necessary, demonstrating coding excellence and leading by example.

Kafka Ecosystem Expertise:
- Deep understanding of Kafka internals, distributed-systems concepts, and high-availability configurations.
- Experience with Kafka Connect for data integration, Schema Registry for data governance, and KSQL/Kafka Streams for real-time stream processing.
- Proficiency in monitoring, optimizing, and troubleshooting Kafka clusters and related applications.
- Knowledge of Kafka security best practices (authentication, authorization, encryption).

Technical Governance & Mentorship:
- Establish and enforce architectural governance, ensuring adherence to design principles and coding standards.
- Mentor and guide engineering teams on best practices for event-driven architecture, Kafka usage, and Java development.
- Foster a culture of technical excellence, collaboration, and continuous learning within the engineering organization.
- Communicate complex technical concepts effectively to both technical and non-technical stakeholders.

Performance, Scalability & Reliability:
- Design for high availability, fault tolerance, and disaster recovery.
- Define and implement strategies for performance optimization, monitoring, and alerting across the event-driven ecosystem.
- Ensure solutions scale to handle significant data volumes and transaction rates.
Required Skills & Experience:
- 12+ years of progressive experience in software development, including at least 5 years in an architect role designing and implementing large-scale enterprise solutions.
- Expert-level proficiency in Java (Java 8+, Spring Boot, Spring Framework).
- Deep and extensive experience with Apache Kafka: designing and implementing Kafka topics, producers, and consumers; hands-on experience with the Kafka Streams API or KSQL for real-time stream processing; familiarity with Kafka Connect, Schema Registry, and Avro/Protobuf; understanding of Kafka cluster operations, tuning, and monitoring.
- Strong understanding and practical experience with Event-Driven Architecture (EDA) principles and patterns: Event Sourcing, CQRS, Saga, Choreography vs. Orchestration.
- Extensive experience with microservices architecture principles and patterns.
- Proficiency in designing RESTful APIs and asynchronous communication mechanisms.
- Experience with relational and NoSQL databases (e.g., PostgreSQL, MongoDB, Cassandra).
- Solid understanding of cloud platforms (AWS, Azure, GCP) and containerization technologies (Docker, Kubernetes).
- Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, Azure DevOps).
- Strong problem-solving skills, analytical thinking, and attention to detail.
- Excellent communication, presentation, and interpersonal skills.
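As a rough illustration of the Kafka Streams work described above, a minimal Java topology that filters a stream and routes failed events to their own topic; the application id, broker address, and all topic names are hypothetical:

```java
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class PaymentEventsTopology {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "payment-events-app"); // also the consumer group id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder broker
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> payments = builder.stream("payments"); // hypothetical input topic
        payments
            .filter((key, value) -> value != null && value.contains("\"status\":\"FAILED\""))
            .to("payments-failed"); // route failures to a dedicated topic for downstream handling

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```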
Posted 3 days ago
5.0 - 10.0 years
10 - 20 Lacs
Bengaluru
Work from Office
Key Skills: Confluent Kafka, Kafka Connect, Schema Registry, Kafka Brokers, KSQL, KStreams, Java/J2EE, Troubleshooting, RCA, Production Support.

Roles & Responsibilities:
- Design and develop Kafka pipelines.
- Perform unit testing of the code and prepare test plans as required.
- Analyze, design, and develop programs in a development environment.
- Support applications and jobs in the production environment for issues or failures.
- Develop operational documents for applications, including DFD, ICD, HLD, etc.
- Troubleshoot production issues and provide solutions within the defined SLA.
- Prepare RCA (Root Cause Analysis) documents for production issues.
- Provide permanent fixes to production issues.

Experience Requirement:
- 5-10 years of experience working with Confluent Kafka.
- Hands-on experience with Kafka Connect using Schema Registry.
- Strong knowledge of Kafka brokers and KSQL.
- Familiarity with Kafka Control Center, ZooKeeper, and KStreams is good to have.
- Experience with Java/J2EE is a plus.

Education: B.E., B.Tech.
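A hedged sketch of the Kafka Connect/Schema Registry combination listed in the key skills: a Java consumer that reads Avro records via Confluent's KafkaAvroDeserializer. This assumes the io.confluent:kafka-avro-serializer dependency is on the classpath; the broker, registry URL, group id, and topic name are placeholders:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.avro.generic.GenericRecord;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AvroConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");            // placeholder broker
        props.put("group.id", "support-debug");                      // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "io.confluent.kafka.serializers.KafkaAvroDeserializer");
        props.put("schema.registry.url", "http://localhost:8081");   // placeholder registry endpoint

        try (KafkaConsumer<String, GenericRecord> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("customer-events"));          // hypothetical topic
            ConsumerRecords<String, GenericRecord> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, GenericRecord> r : records) {
                // The deserializer fetches the writer schema from Schema Registry by the embedded schema id.
                System.out.printf("partition=%d offset=%d value=%s%n", r.partition(), r.offset(), r.value());
            }
        }
    }
}
```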
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
pune, maharashtra
On-site
You are a results-driven Data Project Manager (PM) responsible for leading data initiatives within a regulated banking environment, focusing on leveraging Databricks and Confluent Kafka. Your role involves overseeing the successful end-to-end delivery of complex data transformation projects aligned with business and regulatory requirements.

In this position, you will lead the planning, execution, and delivery of enterprise data projects using Databricks and Confluent. This includes developing detailed project plans, delivery roadmaps, and work breakdown structures, as well as managing resource allocation and budgeting and ensuring adherence to timelines and quality standards. Collaboration with data engineers, architects, business analysts, and platform teams is essential to align on project goals. You will act as the primary liaison between business units, technology teams, and vendors, facilitating regular updates, steering-committee meetings, and issue/risk escalations.

Your technical oversight responsibilities include managing solution delivery on Databricks for data processing, ML pipelines, and analytics, as well as overseeing real-time data streaming pipelines via Confluent Kafka. Ensuring alignment with data governance, security, and regulatory frameworks such as GDPR, CBUAE, and BCBS 239 is crucial. Risk and compliance management is a key aspect of the role, which involves ensuring that regulatory-reporting data flows comply with local and international financial standards and managing controls and audit requirements in collaboration with Compliance and Risk teams.

The required skills and experience include 7+ years of project management experience within the banking or financial services sector, proven experience leading data platform projects, a strong understanding of data architecture, pipelines, and streaming technologies, experience managing cross-functional teams, and proficiency in Agile/Scrum and Waterfall methodologies. Technical exposure to Databricks (Delta Lake, MLflow, Spark), Confluent Kafka (Kafka Connect, kSQL, Schema Registry), Azure or AWS cloud platforms, integration tools, CI/CD pipelines, and Oracle ERP implementation is expected. Preferred qualifications include PMP/Prince2/Scrum Master certification, familiarity with regulatory frameworks, and a strong understanding of data governance principles. The ideal candidate will hold a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.

Key performance indicators for this role include on-time, on-budget delivery of data initiatives, uptime and SLAs of data pipelines, user satisfaction, and compliance with regulatory milestones.
Posted 2 weeks ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai
Work from Office
Key Responsibilities:
- Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
- Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
- Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
- Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
- Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
- Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
- Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
- Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
- Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
- Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions.
- Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
- Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
- Hands-on experience with IBM Analytics.
- Solid understanding of core banking systems, transactional databases, and financial data flows.
- Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
- Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
- Strong experience in event-driven architectures, microservices, and API integrations.
- Familiarity with security protocols, compliance, and data governance in banking environments.
- Excellent problem-solving, leadership, and stakeholder communication skills.
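To make the CDC integration concrete, a small sketch of consuming Debezium's default JSON change-event envelope and reading the operation type and post-change row state; the topic and group names are hypothetical, and Jackson is assumed on the classpath for JSON parsing:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CdcEventReader {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "cdc-audit");               // hypothetical consumer group
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ObjectMapper mapper = new ObjectMapper();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("bank.core.accounts")); // hypothetical Debezium-managed topic
            for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofSeconds(5))) {
                if (r.value() == null) continue;           // tombstone after a delete; skip
                JsonNode payload = mapper.readTree(r.value()).path("payload");
                String op = payload.path("op").asText();   // c = create, u = update, d = delete
                JsonNode after = payload.path("after");    // row state after the change (null on delete)
                System.out.printf("op=%s after=%s%n", op, after);
            }
        }
    }
}
```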
Posted 1 month ago
5.0 - 8.0 years
22 - 30 Lacs
Noida, Hyderabad, Bengaluru
Hybrid
Role: Data Engineer
Experience: 5 to 8 years
Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days per week in the office required)
Notice Period: Immediate to 15 days (immediate joiners preferred)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. Candidates with experience only in PySpark and not in Python will not be considered.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
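Although this role centers on Python, Kafka Streams itself is a Java library, so a short Java sketch best illustrates the kind of windowed stream processing the requirements imply: counting events per key over tumbling 5-minute windows. The topic, keying scheme, and application id are hypothetical, and kafka-streams 3.x is assumed:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.TimeWindows;

public class ClaimsPerProviderCount {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "claims-window-count"); // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");   // placeholder broker

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("claims", Consumed.with(Serdes.String(), Serdes.String())) // hypothetical topic keyed by provider id
            .groupByKey()
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(5))) // tumbling 5-minute windows
            .count()
            .toStream()
            .foreach((windowedKey, count) ->
                System.out.printf("provider=%s window=%s count=%d%n",
                    windowedKey.key(), windowedKey.window(), count));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
        streams.start();
    }
}
```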
Posted 1 month ago
5 - 8 years
9 - 10 Lacs
Bengaluru
Work from Office
Experienced in Kafka cluster maintenance, HA/DR setup, SSL/SASL/LDAP authentication, ACLs, Kafka ecosystem components (ZooKeeper, Kafka Connect, Schema Registry, etc.), upgrades, monitoring, capacity planning, and database optimization. Mail: kowsalya.k@srsinfoway.com
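As a sketch of the authentication and ACL duties listed, a hypothetical Java AdminClient snippet that connects over SASL_SSL and grants a topic read ACL; the broker, principal, topic, and credentials are all placeholders:

```java
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

public class GrantReadAcl {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker:9093");     // placeholder broker
        props.put("security.protocol", "SASL_SSL");        // encrypted, authenticated listener
        props.put("sasl.mechanism", "SCRAM-SHA-512");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.scram.ScramLoginModule required "
            + "username=\"admin\" password=\"<secret>\";"); // placeholder credentials

        try (Admin admin = Admin.create(props)) {
            AclBinding readAcl = new AclBinding(
                new ResourcePattern(ResourceType.TOPIC, "payments", PatternType.LITERAL), // hypothetical topic
                new AccessControlEntry("User:reporting-app", "*",
                    AclOperation.READ, AclPermissionType.ALLOW));
            admin.createAcls(List.of(readAcl)).all().get(); // blocks until the broker applies the ACL
        }
    }
}
```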
Posted 2 months ago