10.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald's: One of the world's largest employers with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.
Position Summary: Senior Manager, Integrated Test Lead, Data Product Engineering & Delivery (Sr Manager, Technology Testing). Lead comprehensive testing strategy and execution for complex data engineering pipelines and product delivery initiatives. Drive quality assurance across integrated systems, data workflows, and customer-facing applications while coordinating cross-functional testing efforts.
Who we are looking for:
Primary Responsibilities:
Test Strategy & Leadership: Design and implement end-to-end testing frameworks for data pipelines, ETL/ELT processes, and analytics platforms. Ensure test coverage across ETL/ELT, data transformation, lineage, and consumption layers. Develop integrated testing strategies spanning multiple systems, APIs, and data sources. Establish testing standards, methodologies, and best practices across the organization.
Data Engineering Testing: Create comprehensive test suites for data ingestion, transformation, and output validation. Design data quality checks, schema validation, and performance testing for large-scale datasets. Implement automated testing for streaming and batch data processing workflows. Validate data integrity across multiple environments and systems and against business rules.
Cross-Functional Coordination: Collaborate with data engineers, software developers, product managers, and DevOps teams. Coordinate testing activities across multiple product streams and release cycles. Manage testing dependencies and critical-path items in complex delivery timelines.
Quality Assurance & Process Improvement: Establish metrics and KPIs for testing effectiveness and product quality to drive continuous improvement in testing processes and tooling. Lead root cause analysis for production issues and testing gaps.
Technical Leadership: Mentor junior QA engineers and promote testing best practices. Evaluate and implement new testing tools and technologies. Design scalable testing infrastructure and CI/CD integration.
Skills: 10+ years in software testing with 3+ years in leadership roles. 8+ years' experience testing data engineering systems, ETL pipelines, or analytics platforms. Proven track record with complex, multi-system integration testing. Experience in agile/scrum environments with rapid delivery cycles. Strong SQL experience with major databases (Redshift, BigQuery, etc.). Experience with cloud platforms (AWS, GCP) and their data services. Knowledge of data pipeline tools (Apache Airflow, Kafka, Confluent, Spark, dbt, etc.). Proficiency in data warehousing, data architecture, reporting, and analytics applications. Scripting languages (Python, Java, Bash) for test automation. API testing tools and methodologies. CI/CD/CT tools and practices. Strong project management and organizational skills. Excellent verbal and written communication abilities. Experience managing multiple priorities and competing deadlines.
Work location: Hyderabad, India. Work pattern: Full-time role. Work mode: Hybrid.
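The data quality and schema validation duties described above can be illustrated with a minimal Python sketch; this is not part of the posting, and the column names, expected dtypes, and business rules are assumptions chosen purely for illustration.

```python
# Illustrative only: a minimal batch data-quality check of the kind described above,
# assuming pipeline output lands in a pandas DataFrame with a hypothetical schema.
import pandas as pd

EXPECTED_COLUMNS = {"order_id": "int64", "amount": "float64", "store_id": "int64"}  # assumed schema

def validate_batch(df: pd.DataFrame) -> list:
    """Return a list of data-quality violations for one pipeline batch."""
    errors = []
    # Schema validation: every expected column must be present with the expected dtype.
    for col, dtype in EXPECTED_COLUMNS.items():
        if col not in df.columns:
            errors.append(f"missing column: {col}")
        elif str(df[col].dtype) != dtype:
            errors.append(f"{col}: expected {dtype}, got {df[col].dtype}")
    # Business-rule checks: no null keys, no negative amounts (assumed rules).
    if "order_id" in df.columns and df["order_id"].isnull().any():
        errors.append("null order_id values found")
    if "amount" in df.columns and (df["amount"] < 0).any():
        errors.append("negative amounts found")
    return errors

def test_sample_batch():
    # A tiny pytest-style check over a synthetic batch.
    df = pd.DataFrame({"order_id": [1, 2], "amount": [10.5, 3.0], "store_id": [7, 7]})
    assert validate_batch(df) == []
```

In practice a check like this would run as one step of the automated test suite for each ingestion or transformation stage.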
Posted 4 days ago
6.0 - 8.0 years
10 - 15 Lacs
Hyderabad, Gurugram, Bengaluru
Work from Office
Application Integration Engineer. Experience Level: 6-8 years. Skills: Python, AWS S3, AWS MWAA (Airflow), Confluent Kafka, API Development.
Experienced Python developer with very good experience with Confluent Kafka and Airflow, and API development experience using Python. Good experience with AWS cloud services. Very good experience with DevOps processes and CI/CD tools like Git, Jenkins, AWS ECR/ECS, AWS EKS, etc.
Analyses FR/NFR requirements and prepares technical designs based on them. Builds code based on the technical design. Can independently resolve technical issues and also help other team members with technical issue resolution. Helps with testing and efficiently fixes bugs. Follows the DevOps CI/CD processes and change management processes for any code deployment.
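As an illustration of the Python-plus-Confluent-Kafka work described above, here is a minimal producer sketch using the confluent-kafka client; the broker address, topic name, and payload are placeholders, not details from the posting.

```python
# A minimal sketch, assuming a reachable Kafka broker and an existing topic.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker address

def delivery_report(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}]")

event = {"order_id": 42, "status": "CREATED"}  # hypothetical payload
producer.produce("orders", key="42", value=json.dumps(event), callback=delivery_report)
producer.flush()  # block until all queued messages are delivered
```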
Posted 4 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
In the modern banking age, financial institutions are required to bring Classical Data Drivers and Evolving Business Drivers together on a single platform. However, traditional data platforms face limitations in communicating with evolving business drivers due to technological constraints. A Modern Data Platform is essential to bridge this gap and elevate businesses to the next level through data-driven approaches, enabled by recent technology transformations.
As a Technology leader with an academic background in Computer Science / Information Technology / Data Technologies [BE/BTech/MCA], you will have the opportunity to lead the Modern Data Platform Practice. This role involves providing solutions to customers on Traditional Data Warehouses across On-Prem and Cloud platforms. You will be responsible for architecting Data Platforms, defining Data engineering designs, selecting appropriate technologies and tools, and enhancing the organization's Modern Data Platform capabilities. Additionally, you will lead pre-sales discussions, provide technology architecture in RFP responses, and spearhead technology POC/MVP initiatives.
To excel in this role, you are expected to possess the following qualifications and experiences:
- 12-16 years of Data Engineering and analytics experience, including hands-on experience in Big Data systems across On-Prem and Cloud environments
- Leadership in Data Platform architecture & design projects for mid to large size firms
- Implementation experience with Batch Data and Streaming / Online data integrations using 3rd party tools and custom programs
- Proficiency in SQL and one of the programming languages: Core Java / Scala / Python
- Hands-on experience in Kafka for enabling Event-driven data pipes / processing
- Knowledge of leading Data Services offered by AWS, Azure, Snowflake, Confluent
- Strong understanding of distributed computing and related data structures
- Implementation of Data Governance and Quality capabilities for Data Platforms
- Analytical and presentation skills, along with the ability to build and lead teams
- Exposure to leading RDBMS technologies and Data Visualization platforms
- Demonstrated experience with AI/ML models for data processing and generating insights
- Team player with the ability to work independently with minimal direction
Your responsibilities at Oracle will be at Career Level - IC4, and the company values Diversity and Inclusion to foster innovation and excellence. Oracle offers a competitive suite of Employee Benefits emphasizing parity, consistency, and affordability, including Medical, Life Insurance, and Retirement Planning. The company encourages employees to contribute to the communities where they live and work.
Oracle believes that innovation stems from diversity and inclusion, and is committed to creating a workforce where all individuals can thrive and contribute their best work. The company supports individuals with disabilities by providing reasonable accommodations throughout the job application, interview process, and in potential roles to ensure successful participation in crucial job functions.
As a global leader in cloud solutions, Oracle is dedicated to leveraging tomorrow's technology to address today's challenges. The company values inclusivity and empowers its workforce to drive innovation and growth. Oracle careers offer opportunities for global engagement, work-life balance, and competitive benefits. The company is committed to promoting an inclusive workforce that supports opportunities for all individuals.
If you require accessibility assistance or accommodation for a disability at any point during the employment process at Oracle, kindly reach out by emailing accommodation-request_mb@oracle.com or calling +1 888 404 2494 in the United States.
Posted 1 week ago
8.0 - 13.0 years
10 - 20 Lacs
Bengaluru
Remote
Description: POSITION OVERVIEW: BA - AWS Data Engineer Lead
Work Address: Remote
Experience (Relevant): 10+
Top 3 mandatory skills: Kafka, AWS, Confluent
Shift Timings: UK shift
Start date: ASAP
Data Engineer Lead:
Robust hands-on experience with industry-standard tooling and techniques, including SQL, Git, and CI/CD pipelines (mandatory)
Management, administration, and maintenance of data streaming tools such as Kafka/Confluent Kafka, Flink
Experienced with software support for applications written in Python & SQL
Administration, configuration, and maintenance of Snowflake & dbt
Experience with data product environments that use tools such as Kafka Connect, Snyk, Confluent Schema Registry, Atlan, IBM MQ, SonarQube, Apache Airflow, Apache Iceberg, DynamoDB, Terraform, and GitHub
Debugging issues, root cause analysis, and applying fixes
Management and maintenance of ETL processes (bug fixing and batch job monitoring)
Training & Certification: Apache Kafka Administration; Snowflake Fundamentals/Advanced Training
Experience: 10 years of experience in a technical role working with AWS; at least 2 years in a leadership or management role
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
Coimbatore, Tamil Nadu
On-site
You are a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience, responsible for leading the design and implementation of scalable, cloud-native data solutions. Your role involves modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms.
Your key responsibilities include architecting and implementing cloud-based data platforms using AWS services such as Redshift, Glue, DMS, and Data Lake solutions. You will lead the migration of Oracle E-Business Suite or similar ERP systems to AWS while ensuring data integrity and performance. Additionally, you will design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems. It is essential for you to define and enforce data architecture standards, governance policies, and best practices. Collaborating with engineering, data, and business teams to align architecture with strategic goals is also a crucial aspect of your role. Furthermore, you will optimize data pipelines and storage for scalability, reliability, and cost-efficiency.
To excel in this role, you must possess 10+ years of experience in data architecture, cloud engineering, or enterprise systems design. Deep expertise in AWS services, including Redshift, Glue, DMS, and Data Lake architectures, is required. Proven experience with Confluent Kafka for real-time data streaming and event-driven architectures is essential. Hands-on experience in migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms is a must. Strong understanding of data governance, security, and compliance in cloud environments, as well as proficiency in designing scalable, fault-tolerant data systems, are also necessary.
Preferred qualifications include experience with data modeling, metadata management, and lineage tracking, familiarity with infrastructure-as-code and CI/CD practices, and strong communication and leadership skills to guide cross-functional teams.
Posted 1 week ago
13.0 - 17.0 years
40 - 55 Lacs
Hyderabad, Pune
Work from Office
Job Title: Engineering Manager (Java, Spring Boot, Microservices, Kafka) Location: Pune (On-site) Experience: 13+ years total, including 2+ years in people management We are hiring an Engineering Manager / Technical Manager for a leading product-based company in Pune & Hyd. This is a full-time, on-site role and an exciting opportunity for a seasoned engineering leader who thrives in a hands-on technical environment. Key Responsibilities: Lead and mentor a team of engineers, overseeing performance management, appraisals, and talent development. Drive the design and development of scalable backend systems using Java, Spring Boot, Microservices, and Confluent Kafka. Collaborate with cross-functional teams to ensure timely delivery and high-quality outcomes. Maintain a hands-on approach to coding, technical problem-solving, and code reviews. Requirements: 13+ years of total industry experience with strong backend engineering skills. At least 2 years of proven experience in engineering leadership or people management roles. Proficiency in Java, Spring Boot, Microservices architecture, and Kafka (preferably Confluent). Strong interpersonal and communication skills, with a passion for mentoring and growing engineering talent. Must be willing to work from the Pune office full-time.
Posted 2 weeks ago
5.0 - 10.0 years
20 - 25 Lacs
Chennai, Bengaluru
Hybrid
Role: API Developer - Node.js
What awaits you / Job Profile: Design, develop, and maintain RESTful APIs using Node.js. Develop serverless applications using AWS Lambda, API Gateway, Redis, and other AWS services. Design and maintain infrastructure using AWS CloudFormation for Infrastructure-as-Code (IaC), or similar services from Azure/Google Cloud. Ensure the application is scalable, performant, secure, and highly available. Good analytical/problem-solving skills, algorithms, and logical thinking. API testing, error resolution, and the overall software development process. Maintain and upgrade existing applications. Collaborate with team members.
What should you bring along: Expert in programming using Node.js with TypeScript. Exposure to AWS Cloud/Azure Cloud/Google Cloud. Knowledge of API tools like Insomnia / Postman.
Must-have technical skills: Strong proficiency with JavaScript and Node.js. AWS Lambda, API Gateway, AWS CloudWatch, or corresponding services from other cloud providers like Azure, Google Cloud. Implement streaming solutions using Confluent Kafka. Docker for containerization and deployment. Understand Agile methodologies and hands-on experience using Jira, Confluence, or similar agile project management tools. Excellent communication and collaboration skills. Domain background in Financial Services/banking.
Good-to-have skills: Certifications in cloud computing platforms (e.g., Microsoft Certified: Azure Developer Associate, AWS Certified Developer - Associate). Kubernetes and Docker. GitHub and Jenkins.
Posted 2 weeks ago
4.0 - 8.0 years
5 - 15 Lacs
Pune
Hybrid
About the team and your role We are currently looking for integration consultants that are passionate about integration and understand what it takes to deliver TOP quality integration solutions to our clients and partners. You have an eagle-eye for identifying the integration challenges and the ability to translate those same business challenges into the best integration solutions. You can listen and stay calm under pressure and can be the linking pin between business and IT. You have seen integrations in different shapes, sizes, and colours, you can integrate any to any, either on-premise or cloud. You advise our clients about the best integration strategies based on best practices, analysts' recommendations, and architecture patterns. Last but not least, to be successful in this position we expect you to apply your strong consultancy & integration skills. Part of your responsibilities is to support the different developing teams during the entire lifecycle of an interface, from requirements gathering, analysis, design, development, testing, and handing over to operational support. We are looking for experienced Enterprise Integration Consultants to join our team. The ideal candidate has: Strong knowledge of integration principles and consultancy skills to be able to translate business to IT requirements. Hands-on experience in the integration of SAP and non-SAP systems in A2A and B2B scenarios. Deep expertise and hands-on experience in Dell Boomi as the primary integration platform. Working knowledge of API Management platforms (e.g., SAP API Management, Apigee, or others). Familiarity with event-driven architecture and distributed streaming platforms like Solace or Confluent Kafka . In-depth technical, functional, and architectural expertise in integrating applications using different technologies such as but not limited to REST, SOAP, ALE-IDocs, EDI, RFC, XI, HTTP, IDOC, JDBC, File/FTP, Mail, JMS. Solid middleware knowledge and web service skills. Good understanding of REST API and Web Services. Extensive experience with integrating 3rd party applications using REST-based services. Experience using tools like Postman & SOAPUI for service testing. Proven experience with full life cycle Integration implementation or rollout projects. Demonstrated experience with deliverables planning, client-facing roles, and high pace environments. What is Rojo all about? Founded in 2011, Rojo Integrations has transformed from a consulting firm into a comprehensive SAP integration leader, partnering with top software vendors like SAP, Coupa, SnapLogic, and Solace. As the leading SAP integration partner and ultimate expert, we provide seamless enterprise integration and data analytics solutions, enabling real-time insights and empowering digital transformation. Trusted by global Bluechip companies such as Heineken and Siemens, we deliver tailored services to meet unique business needs. Rojo is headquartered in the Netherlands and operates globally from its offices in the Netherlands, Spain, and India. We specialize in SAP integration modernization and business processes, improving data integration and business strategies. Our 360-degree portfolio includes consultancy, software development, and managed services to streamline integration, enhance observability, and drive growth. Requirements to succeed in this role Experience using Dell Boomi , SAP PO, SAP Cloud Integration, SnapLogic, and/or API Management. Quick Learner and adapt to the new tools and technologies and evaluate their test applicability. 
Team Player with good technical, analytical, and communication skills and a client-driven mindset. A bright mind and ability to understand a complex platform. Ability to understand technical/engineering concepts and to learn integration product functionality and applications. Demonstrated user-focused technical writing ability. Must be able to communicate complex technical concepts clearly and effectively. Strong analytical and problem-solving skills. Ability to work independently in a dynamic environment. Ability to work on multiple complex projects simultaneously. Strong interpersonal communication skills. Effectively communicates in one-to-one and group situations. At least three years of previous experience in a similar role.
Additional desired skills: You have at least a Bachelor's degree in computer engineering or a related field. Experience with any API Management Platform. Experience with Distributed Streaming Platforms and Event-based Integration Architecture such as Kafka or Solace. Extensive experience in integration of SAP and non-SAP systems in A2A and B2B scenarios using SAP Integration Suite or Cloud Integration (CPI). Experience in integration with main SAP backend systems (SAP ERP, SAP S/4HANA, SAP S/4HANA Cloud). SAP PO experience in programming UDFs, Modules, Look-Ups (RFC, SOAP, JDBC), BPM, Inbound and Outbound ABAP Proxies. Extensive knowledge of Java, JavaScript and/or GroovyScript. Good understanding of CI/CD concepts. Speak and write English fluently. Affinity and experience with integration platforms/software like Dell Boomi, SAP Cloud Integration, or SnapLogic is desirable.
What do we offer? The chance to gain work experience in a dynamic and inspiring environment and launch your career. Plenty of growth opportunities while working in a high-energy and fun environment. The opportunity to work on innovative projects with colleagues who are genuinely proud of their contribution. Training and mentoring to support your professional development with a yearly education budget. International atmosphere with multicultural environments (+- 20 nationalities). A global, inclusive and diverse working climate within a world-conscious organization. Plus, other exciting benefits specific to each region. Rojo is committed to achieving diversity & inclusion in terms of gender, caste, race, religion, nationality, ethnic origin, sexual orientation, disability, age, pregnancy, or other status. All qualified candidates are encouraged to apply. No one fits a job description perfectly, and there is no such thing as the perfect candidate. If you don't meet all the criteria, we'd still love to hear from you. Does that spark your interest? Apply now.
Posted 2 weeks ago
15.0 - 20.0 years
5 - 9 Lacs
Jaipur
Work from Office
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Apache Kafka
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: Minimum 15 years of full-time education
Key Responsibilities:
A. Strong experience as Administrator/Platform Engineer for Kafka
B. Expertise in Confluent Kafka administration
C. Experience in implementing Kafka on Confluent Cloud
D. Hands-on experience with Kafka clusters hosted on cloud and on-prem platforms
E. Design, build, assemble, and configure application or technical architecture components using business requirements
F. Plus to have: AWS expertise and familiarity with CI/CD DevOps, in addition to skills in Spring Boot, Microservices, and Angular
Technical Experience:
A. Token-based auth, OAuth, Basic Auth, key-pair concepts, OpenSSL library
B. Manage Kafka clusters in on-prem and cloud environments
C. Confluent Cloud backup and restore for data
D. Kafka load balancing and auto-scaling on the basis of load
E. Confluent Control Center and KSQL knowledge (must have)
Professional Attributes:
A. Interpersonal skills along with the ability to work in a team
B. Good presentation skills
Qualification: Minimum 15 years of full-time education
Posted 2 weeks ago
7.0 - 10.0 years
20 - 30 Lacs
Hyderabad, Chennai
Work from Office
Role Expectations:
Design & develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka. Develop Flink applications for complex event processing, stream enrichment, and real-time analytics. Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering. Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline. Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations. Automate data pipeline deployment, monitoring, and maintenance tasks. Stay up-to-date with the latest advancements in data streaming technologies and best practices. Contribute to the development of data engineering standards and best practices within the organization. Participate in code reviews and contribute to a collaborative and supportive team environment. Work closely with other architects and tech leads in India & US and create POCs and MVPs. Provide regular updates on tasks, status, and risks to the project manager.
The experience we are looking to add to our team - Qualifications:
Bachelor's degree or higher from a reputed university. 8 to 10 years total experience, with the majority of that experience related to ETL/ELT, big data, Kafka, etc. Proficiency in developing Flink applications for stream processing and real-time analytics. Strong understanding of data streaming concepts and architectures. Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry. Hands-on experience with ksqlDB for real-time data transformations and stream processing. Experience with Kafka Connect and building custom connectors. Extensive experience in implementing large-scale data ingestion and curation solutions. Good hands-on experience in a big data technology stack with any cloud platform. Excellent problem-solving, analytical, and communication skills. Ability to work independently and as part of a team.
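To make the ksqlDB responsibility above concrete, here is a hedged Python sketch that submits a persistent ksqlDB statement over the ksqlDB server's REST API; the server URL, stream names, and filter condition are assumptions for illustration, not details from the posting.

```python
# A minimal sketch, assuming a ksqlDB server is reachable and an 'orders_stream'
# stream already exists; stream and column names are placeholders.
import requests

KSQLDB_URL = "http://localhost:8088/ksql"  # assumed ksqlDB server endpoint

statement = """
CREATE STREAM high_value_orders AS
  SELECT order_id, customer_id, amount
  FROM orders_stream
  WHERE amount > 1000
  EMIT CHANGES;
"""

resp = requests.post(
    KSQLDB_URL,
    headers={"Accept": "application/vnd.ksql.v1+json"},
    json={"ksql": statement, "streamsProperties": {}},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())  # ksqlDB returns the status of the persistent query it created
```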
Posted 2 weeks ago
10.0 - 20.0 years
35 - 45 Lacs
Chennai
Hybrid
Role & responsibilities: Expert Platform Lead with 10+ years of expertise in Confluent Kafka administration, cloud-native infrastructure, and enterprise-scale streaming architecture. This role involves overseeing Kafka platform strategy, optimizing infrastructure through automation, ensuring cost-effective scalability, and working closely with cross-functional teams to enable high-performance data streaming solutions. The ideal candidate will drive innovation, establish best practices, and mentor teams to enhance platform reliability and efficiency. Preferred candidate profile: Expert knowledge of Kafka (Confluent), event-driven architectures, and high-scale distributed systems. Mastery of Terraform for infrastructure automation across AWS, Kubernetes, and cloud-native ecosystems. Strong proficiency in AWS services, networking principles, and security best practices. Advanced experience with CI/CD pipelines, version control (Git), and scripting (Bash, Python).
Posted 2 weeks ago
3.0 - 8.0 years
18 - 22 Lacs
Navi Mumbai, Mumbai (All Areas)
Work from Office
1. Education: B.E./B.Tech/MCA in Computer Science
2. Experience: Must have 7+ years of relevant experience in the field of database administration.
3. Mandatory Skills/Knowledge:
Candidate should be technically sound in multiple distributions like Cloudera, Confluent, and open-source Kafka.
Candidate should be technically sound in Kafka and Zookeeper.
Candidate should be well versed in capacity planning and performance tuning.
Candidate should have expertise in implementing security in the ecosystem: Hadoop security (Ranger, Kerberos, SSL).
Candidate should have expertise in DevOps tools like Nagios, shell scripting, Python, Jenkins, Ansible, Git, and Maven to implement automation.
Candidate should be able to monitor, debug, and perform RCA for any service failure.
Knowledge of network infrastructure, e.g., TCP/IP, DNS, firewalls, routers, load balancers.
Creative analytical and problem-solving skills.
Provide RCAs for critical & recurring incidents.
Provide on-call service coverage within a larger group.
Good aptitude in multi-threading and concurrency concepts.
4. Preferred Skills/Knowledge:
Expert knowledge of database administration and architecture.
Hands-on with operating system commands.
Kindly share CVs at snehal.sankade@outworx.com
Posted 2 weeks ago
10.0 - 15.0 years
25 - 40 Lacs
Noida
Remote
Job Summary: We are seeking a seasoned Confluent & Oracle EBS Cloud Engineer with over 10 years of experience to lead the design and implementation of scalable, cloud-native data solutions. This role focuses on modernizing enterprise data infrastructure, driving real-time data streaming initiatives, and migrating legacy ERP systems to AWS-based platforms.
Key Responsibilities:
• Architect and implement cloud-based data platforms using AWS services including Redshift, Glue, DMS, and Data Lake solutions.
• Lead the migration of Oracle E-Business Suite or similar ERP systems to AWS, ensuring data integrity and performance.
• Design and drive the implementation of Confluent Kafka for real-time data streaming across enterprise systems.
• Define and enforce data architecture standards, governance policies, and best practices.
• Collaborate with engineering, data, and business teams to align architecture with strategic goals.
• Optimize data pipelines and storage for scalability, reliability, and cost-efficiency.
Required Qualifications:
• 10+ years of experience in data architecture, cloud engineering, or enterprise systems design.
• Deep expertise in AWS services including Redshift, Glue, DMS, and Data Lake architectures.
• Proven experience with Confluent Kafka for real-time data streaming and event-driven architectures.
• Hands-on experience migrating large-scale ERP systems (e.g., Oracle EBS) to cloud platforms.
• Strong understanding of data governance, security, and compliance in cloud environments.
• Proficiency in designing scalable, fault-tolerant data systems.
Preferred Qualifications:
• Experience with data modeling, metadata management, and lineage tracking.
• Familiarity with infrastructure-as-code and CI/CD practices.
• Strong communication and leadership skills to guide cross-functional teams.
Posted 3 weeks ago
7.0 - 12.0 years
12 - 18 Lacs
Pune, Chennai
Work from Office
Key Responsibilities: Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems. Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance. Integrate CDC pipelines with core banking applications, databases, and enterprise systems. Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS). Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing. Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate. Collaborate with the Infra team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives. Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures. Stay updated with emerging technologies and drive innovation in real-time banking data solutions.
Required Skills & Qualifications: Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions. Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry. Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate. Hands-on experience with IBM Analytics. Solid understanding of core banking systems, transactional databases, and financial data flows. Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud). Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations. Strong experience in event-driven architectures, microservices, and API integrations. Familiarity with security protocols, compliance, and data governance in banking environments. Excellent problem-solving, leadership, and stakeholder communication skills.
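As an illustrative sketch of the CDC integration described above, the snippet below registers a hypothetical Debezium source connector through the Kafka Connect REST API from Python; the endpoint, database details, and table list are placeholders, and a production configuration would typically need further settings (for example, schema history topics and secret handling).

```python
# A hedged sketch, assuming a running Kafka Connect worker with the Debezium
# MySQL connector plugin installed; all connection details are placeholders.
import requests

CONNECT_URL = "http://localhost:8083/connectors"  # assumed Kafka Connect REST endpoint

connector = {
    "name": "core-banking-cdc",  # hypothetical connector name
    "config": {
        "connector.class": "io.debezium.connector.mysql.MySqlConnector",
        "database.hostname": "corebank-db",          # placeholder host
        "database.port": "3306",
        "database.user": "cdc_user",
        "database.password": "********",              # use a secrets store in practice
        "database.server.id": "184054",
        "topic.prefix": "corebank",
        "table.include.list": "bank.transactions,bank.accounts",
        # Real deployments also configure schema history, e.g.
        # schema.history.internal.kafka.bootstrap.servers / ...kafka.topic
    },
}

resp = requests.post(CONNECT_URL, json=connector, timeout=30)
resp.raise_for_status()
print(resp.json())  # Connect echoes back the accepted connector configuration
```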
Posted 1 month ago
5.0 - 10.0 years
12 - 18 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Design and develop Kafka Pipelines. Perform Unit testing of the code and prepare test plans as required. Analyze, design and develop programs in development environment. Support application & jobs in production environment for abends or issues.
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Dubai, Pune, Chennai
Hybrid
Job Title: Confluent CDC System Analyst Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst/ Tech lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka , ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions . Key Responsibilities: Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems. Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance . Integrate CDC pipelines with core banking applications, databases, and enterprise systems . Ensure data consistency, integrity, and security , adhering to banking compliance standards (e.g., GDPR, PCI-DSS). Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing. Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate . Collaborate with Infra team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives. Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures. Stay updated with emerging technologies and drive innovation in real-time banking data solutions . Required Skills & Qualifications: Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions . Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry . Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate . Hands on experience on IBM Analytics Solid understanding of core banking systems, transactional databases, and financial data flows . Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud) . Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations. Strong experience in event-driven architectures, microservices, and API integrations . Familiarity with security protocols, compliance, and data governance in banking environments. Excellent problem-solving, leadership, and stakeholder communication skills .
Posted 1 month ago
9.0 - 14.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Kafka Data Engineer: Data Engineer to build and manage data pipelines that support batch and streaming data solutions. The role requires expertise in creating seamless data flows across platforms like Data Lake/Lakehouse in Cloudera, Azure Databricks, and Kafka, for both batch and stream data pipelines.
Responsibilities:
Strong experience in developing, testing, and maintaining data pipelines (batch & stream) using Cloudera, Spark, Kafka, and Azure services like ADF, Cosmos DB, Databricks, NoSQL DB/MongoDB, etc. Strong programming skills in Spark, Python or Scala, and SQL. Optimize data pipelines to improve speed, performance, and reliability, ensuring that data is available for data consumers as required. Create ETL pipelines for downstream consumers by transforming data as per business logic. Work closely with Data Architects and Data Analysts to align data solutions with business needs and ensure the accuracy and accessibility of data. Implement data validation checks and error handling processes to maintain high data quality and consistency across data pipelines. Strong analytical and problem-solving skills, with a focus on optimizing data flows and addressing impacts in the data pipeline.
Qualifications:
8+ years of IT experience with at least 5+ years in data engineering and cloud-based data platforms. Strong experience with Cloudera/any Data Lake, Confluent/Apache Kafka, and Azure Data Services (ADF, Databricks, Cosmos DB). Deep knowledge of NoSQL databases (Cosmos DB, MongoDB) and data modeling for performance and scalability. Proven expertise in designing and implementing batch and streaming data pipelines using Databricks, Spark, or Kafka. Experience in creating scalable, reliable, and high-performance data solutions with robust data governance policies. Strong collaboration skills to work with stakeholders, mentor junior Data Engineers, and translate business needs into actionable solutions. Bachelor's or master's degree in computer science, IT, or a related field.
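A minimal sketch of the kind of streaming ingestion described above, assuming a Spark environment (such as Databricks) with the Kafka connector available; the broker, topic, sink format, and storage paths are placeholders rather than details from the posting.

```python
# Illustrative only: read a Kafka topic as a stream and land it in a lakehouse table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-stream-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # assumed broker address
    .option("subscribe", "transactions")                # assumed topic
    .option("startingOffsets", "latest")
    .load()
    .selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
)

query = (
    events.writeStream.format("delta")                  # or parquet, depending on the lake
    .option("checkpointLocation", "/mnt/checkpoints/transactions")  # placeholder paths
    .option("path", "/mnt/lake/transactions")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```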
Posted 1 month ago
10.0 - 15.0 years
25 - 32 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Skill Set: Confluent + Kafka Architect
Hiring Location: Bangalore/Chennai/Pune/Mumbai/Hyderabad
Notice Period: Immediate Joiners - Max 30 Days
Experience: 10-12 Yrs
Interview Levels: 2 Internal / 1 CI
Shift Timings: 2 PM - 11 PM (Sweden Timing)
Job description: Confluent Kafka Architect, P3 (this is for China IEB support). The team member will be part of a Kafka platform development team responsible for architecture, administration, and operations for Kafka clusters across all non-production and production environments. The role involves designing the best solution, ensuring system reliability, optimizing performance, automating regular tasks, providing guidance to the team for onboarding applications, helping the team with upskilling/cross-training, and troubleshooting issues to maintain seamless Kafka operations.
Key Responsibilities: Architect, manage, and maintain Kafka clusters in non-prod and prod environments. Responsible for high-level discussions with customers. Responsible for proposing the best solutions as per industry standards. Responsible for doing POCs and documenting the exercise. Getting involved with team members to solve their technical problems. Handling high-priority discussions and driving meetings with the vendor if required.
Skills and Abilities: Strong knowledge of Kafka architecture. Hands-on experience with Kafka cluster setup using Zookeeper/KRaft. Proficiency in working with Kafka connectors and ksqlDB. Experience with Kubernetes for Kafka deployment and management. Ability to create Docker images and work with containerized environments. Proficiency in writing YAML configurations. Strong troubleshooting and debugging skills. Experience with monitoring tools like Datadog.
Posted 1 month ago
5.0 - 10.0 years
30 - 35 Lacs
Chennai, Bengaluru
Work from Office
Data Engineer: Experienced KStream + ksqlDB developer with in-depth knowledge of specific client systems: TAHI Contract and Application, and ISP Contract and Application modules. Performs data analysis and writes code to implement functional requirements per LLD and client processes. Minimum skill levels in this specific area: current roles require 5+ years of experience plus Insurance domain experience. These are technical roles, and the prime requirement is for KStream/Java/ksqlDB/Kafka.
Posted 1 month ago
5.0 - 10.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: Looking for 5+ years of experience as a Kafka Administrator.
Kafka Administrator Required Skills & Experience:
Hands-on experience in Kafka cluster management
Proficiency with Kafka Connect
Knowledge of Cluster Linking and MirrorMaker
Experience setting up Kafka clusters from scratch
Experience with Terraform/Ansible scripts
Ability to install and configure the Confluent Platform
Understanding of rebalancing, Schema Registry, and REST Proxies
Familiarity with RBAC (Role-Based Access Control) and ACLs (Access Control Lists)
Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in
Posted 1 month ago
4.0 - 9.0 years
5 - 13 Lacs
Thane, Goregaon, Mumbai (All Areas)
Work from Office
Opening for a leading Insurance company. **Looking for Immediate Joiners or up to 30 Days' notice**
Key Responsibilities:
Kafka Infrastructure Management: Design, implement, and manage Kafka clusters to ensure high availability, scalability, and security. Monitor and maintain Kafka infrastructure, including topics, partitions, brokers, Zookeeper, and related components. Perform capacity planning and scaling of Kafka clusters based on application needs and growth.
Data Pipeline Development: Develop and optimize Kafka data pipelines to support real-time data streaming and processing. Collaborate with internal application development and data engineers to integrate Kafka with various HDFC Life data sources. Implement and maintain schema registry and serialization/deserialization protocols (e.g., Avro, Protobuf).
Security and Compliance: Implement security best practices for Kafka clusters, including encryption, access control, and authentication mechanisms (e.g., Kerberos, SSL).
Documentation and Support: Create and maintain documentation for Kafka setup, configurations, and operational procedures.
Collaboration: Provide technical support and guidance to application development teams regarding Kafka usage and best practices. Collaborate with stakeholders to ensure alignment with business objectives.
Interested candidates, please share your resume at snehal@topgearconsultants.com
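To illustrate the routine cluster-management work described above, here is a small Python sketch using the confluent-kafka AdminClient to create a topic with explicit sizing; the broker address, topic name, partition count, and replication factor are assumptions chosen for the example.

```python
# Illustrative sketch of topic provisioning with the confluent-kafka AdminClient.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "broker1:9092"})  # assumed broker address

# Create a topic with an explicit partition count and replication factor,
# the kind of capacity-planning decision referenced above.
new_topic = NewTopic("policy-events", num_partitions=6, replication_factor=3)
futures = admin.create_topics([new_topic])

for topic, future in futures.items():
    try:
        future.result()  # raises if the broker rejected the request
        print(f"created topic {topic}")
    except Exception as exc:
        print(f"failed to create {topic}: {exc}")
```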
Posted 1 month ago
8.0 - 13.0 years
5 - 12 Lacs
Hyderabad, Chennai, Bengaluru
Work from Office
Role & responsibilities: Looking for 8+ years of experience as a Kafka Administrator.
Mandatory Skills: ksqlDB developers who must have hands-on experience in writing KSQL queries. Kafka Connect development experience. Kafka client stream applications development. Confluent Terraform Provider.
Skills:
8+ years of experience across development and support projects
3+ years of hands-on experience in Kafka
Understanding of event streaming patterns and when to apply these patterns
Designing, building, and operating in-production Big Data, stream processing, and/or enterprise data integration solutions using Apache Kafka
Working with different database solutions for data extraction, updates, and insertions
Identity and Access Management space, including relevant protocols and standards such as OAuth, OIDC, SAML, LDAP, etc.
Knowledge of networking protocols such as TCP, HTTP/2, WebSockets, etc.
Candidate must work in Australia timings [AWST]. Interview mode will be face to face.
Interested candidates, please share your updated resume at recruiter.wtr26@walkingtree.in
Posted 1 month ago
3.0 - 7.0 years
6 - 15 Lacs
Hyderabad, Chennai
Hybrid
About Volante: Volante Technologies is a global fintech leader, delivering a cloud-native Payments Processing Platform to banks and financial institutions worldwide. Join our fast-paced, innovation-driven team and help shape the future of payments technology.
Who We're Looking For: We are seeking an experienced Middleware Administrator with expertise in Kafka and RabbitMQ to manage, scale, and optimize messaging infrastructure in both on-prem and cloud environments.
Qualification: B.E./B.Tech in Computer Science from a reputed institute
Key Responsibilities: Deploy and manage Kafka clusters (on-prem & cloud-native). Administer RabbitMQ on UNIX/Cloud environments. Develop and manage Kafka connectors (JDBC, JMS, MQ, Elasticsearch, etc.). Monitor performance, ensure high availability, and troubleshoot issues. Configure security (Kerberos, SSL), access lists, and cluster setups. Automate deployments using Docker, Jenkins, GitLab. Work closely with cross-functional teams (Dev, QA). Participate in design reviews, capacity planning, and performance tuning.
Must-Have Skills: 3+ years of hands-on experience with Kafka and RabbitMQ. Kafka architecture: brokers, zookeepers, schema registry, KSQL, REST Proxy. Strong knowledge of messaging middleware (JMS, STOMP, AMQP). Kafka REST Proxy and stream processing platforms (e.g., AWS Kinesis, GCP Pub/Sub). Linux (preferably RHEL), scripting (Python/Java), and automation (Ansible). Shift flexibility & strong communication skills.
Good-to-Have Skills: Kubernetes & in-memory data systems. Exposure to cloud environments (AWS MQ). Custom Kafka connector development. Familiarity with data ingestion and extraction techniques.
Posted 1 month ago
2.0 - 7.0 years
4 - 9 Lacs
Bengaluru
Work from Office
We offer joint operations and digitalization services for Global Business Services and work closely alongside the entire Shared Services organization. We make efficient use of the possibilities of new technologies such as Business Process Management (BPM) and Robotics as enablers for efficient and effective implementations. We are looking for a Data Engineer (AWS, Confluent & SnapLogic).
Data Integration: Integrate data from various Siemens organizations into our data factory, ensuring seamless data flow and real-time data fetching.
Data Processing: Implement and manage large-scale data processing solutions using AWS Glue, ensuring efficient and reliable data transformation and loading.
Data Storage: Store and manage data in a large-scale data lake, utilizing Iceberg tables in Snowflake for optimized data storage and retrieval.
Data Transformation: Apply various data transformations to prepare data for analysis and reporting, ensuring data quality and consistency.
Data Products: Create and maintain data products that meet the needs of various stakeholders, providing actionable insights and supporting data-driven decision-making.
Workflow Management: Use Apache Airflow to orchestrate and automate data workflows, ensuring timely and accurate data processing.
Real-time Data Streaming: Utilize Confluent Kafka for real-time data streaming, ensuring low-latency data integration and processing.
ETL Processes: Design and implement ETL processes using SnapLogic, ensuring efficient data extraction, transformation, and loading.
Monitoring and Logging: Use Splunk for monitoring and logging data processes, ensuring system reliability and performance.
You'd describe yourself as:
Experience: 3+ relevant years of experience in data engineering, with a focus on AWS Glue, Iceberg tables, Confluent Kafka, SnapLogic, and Airflow.
Technical Skills: Proficiency in AWS services, particularly AWS Glue. Experience with Iceberg tables and Snowflake. Knowledge of Confluent Kafka for real-time data streaming. Familiarity with SnapLogic for ETL processes. Experience with Apache Airflow for workflow management. Understanding of Splunk for monitoring and logging.
Programming Skills: Proficiency in Python, SQL, and other relevant programming languages.
Data Modeling: Experience with data modeling and database design.
Problem-Solving: Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve data-related issues.
Preferred Qualities:
Attention to Detail: Meticulous attention to detail, ensuring data accuracy and quality.
Communication Skills: Excellent communication skills, with the ability to collaborate effectively with cross-functional teams.
Adaptability: Ability to adapt to changing technologies and work in a fast-paced environment.
Team Player: Strong team player with a collaborative mindset.
Continuous Learning: Eagerness to learn and stay updated with the latest trends and technologies in data engineering.
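As a hedged illustration of the Airflow orchestration mentioned above, the sketch below defines a minimal daily DAG with a placeholder ingestion task; the DAG id, schedule, and task logic are assumptions, not details from the posting, and the imports assume an Airflow 2.x-style deployment.

```python
# A minimal sketch of a daily ingestion DAG; the actual extraction/load step
# (e.g. a Glue job or SnapLogic pipeline trigger) is represented by a placeholder.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_and_load():
    # Placeholder for the real extraction and load logic.
    print("fetching source data and writing it to the data lake")

with DAG(
    dag_id="daily_data_factory_ingest",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                     # assumes Airflow 2.4+ 'schedule' argument
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```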
Posted 1 month ago