
32 Stream Processing Jobs

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 10.0 years

14 - 18 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

Job Summary: We are seeking a highly skilled and experienced Scala Developer with strong hands-on expertise in functional programming, RESTful API development, and building scalable microservices. The ideal candidate will have experience with Play Framework, Akka, or Lagom, and be comfortable working with both SQL and NoSQL databases in cloud-native, Agile environments.

Key Responsibilities:
- Design, develop, and deploy scalable backend services and APIs using Scala.
- Build and maintain microservices using Play Framework, Akka, or Lagom.
- Develop RESTful APIs and integrate with internal/external services.
- Handle asynchronous programming and stream processing, and ensure efficient concurrency.
- Optimize and refactor code for better performance, readability, and scalability.
- Collaborate with cross-functional teams including Product, UI/UX, DevOps, and QA.
- Work with databases such as PostgreSQL, MySQL, Cassandra, or MongoDB.
- Participate in code reviews, documentation, and mentoring of team members.
- Build and manage CI/CD pipelines using Docker, Git, and relevant DevOps tools.
- Follow Agile/Scrum practices and contribute to sprint planning and retrospectives.

Must-Have Skills:
- Strong expertise in Scala and functional programming principles.
- Experience with Play Framework, Akka, or Lagom.
- Deep understanding of RESTful APIs, microservices architecture, and API integration.
- Proficiency with concurrency, asynchronous programming, and stream processing.
- Hands-on experience with SQL/NoSQL databases (PostgreSQL, MySQL, Cassandra, MongoDB).
- Familiarity with SBT or Maven as build tools.
- Experience with Git, Docker, and CI/CD workflows.
- Comfortable working in Agile/Scrum environments.

Good to Have:
- Experience with data processing frameworks like Apache Spark.
- Exposure to cloud environments (AWS, GCP, or Azure).
- Strong debugging, troubleshooting, and analytical skills.

Educational Qualification: Bachelor's degree in Computer Science, Engineering, or a related field.

Why Join Us?
- Opportunity to work on modern, high-impact backend systems.
- Collaborative and learning-driven environment.
- Be part of a growing technology team building solutions at scale.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

NTT DATA is looking for a Databricks Developer to join their team in Bangalore, Karnataka, India. As a Databricks Developer, your responsibilities will include pushing data domains into a massive repository and building a large data lake, leveraging Databricks heavily. To be considered for this role, you should have at least 3 years of experience in a Data Engineer or Software Engineer role. An undergraduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field is required, while a graduate degree is preferred. You should also have experience with data pipeline and workflow management tools, advanced working SQL knowledge, and familiarity with relational databases. Additionally, an understanding of data warehouse (DWH) systems, ELT and ETL patterns, data models, and transforming data into various models is essential. You should be able to build processes supporting data transformation, data structures, metadata, dependency, and workload management. Experience with message queuing, stream processing, and highly scalable big data stores is also necessary. Preferred qualifications include experience with Azure cloud services such as ADLS, ADF, ADLA, and AAS. The role also requires a minimum of 2 years of experience in relevant skills. NTT DATA is a trusted global innovator of business and technology services with a commitment to helping clients innovate, optimize, and transform for long-term success. They serve 75% of the Fortune Global 100 and have a diverse team of experts in more than 50 countries. As a Global Top Employer, NTT DATA offers services in business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. They are known for providing digital and AI infrastructure solutions and are part of the NTT Group, which invests over $3.6 billion each year in R&D to support organizations and society in moving confidently into the digital future. Visit their website at us.nttdata.com for more information.
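The data-lake build-out described above largely reduces to ingest-transform-store jobs on Databricks. As a rough, illustrative sketch only (the paths, columns, and table layout below are hypothetical, not taken from the posting), a minimal PySpark job might look like this:

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession is already provided as `spark`; getOrCreate()
# keeps the script runnable elsewhere as well.
spark = SparkSession.builder.appName("raw-to-lake").getOrCreate()

# Hypothetical raw landing zone: newline-delimited JSON dropped by an upstream system.
raw = spark.read.json("/mnt/landing/orders/2024-01-01/")

# Light ELT: type the columns of interest, derive a partition key, drop obvious junk.
curated = (
    raw.select("order_id", "customer_id", "amount", "event_time")
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("event_date", F.to_date("event_time"))
       .dropna(subset=["order_id"])
       .dropDuplicates(["order_id"])
)

# Write into the lake as Delta (built in on Databricks), partitioned by date
# so downstream queries can prune.
(curated.write
        .format("delta")
        .mode("append")
        .partitionBy("event_date")
        .save("/mnt/lake/orders/"))
```

In practice the Delta sink would usually be registered as a catalog table so analysts can query it directly with SQL.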

Posted 4 days ago

Apply

3.0 - 9.0 years

18 - 22 Lacs

Hyderabad

Work from Office

At Skillsoft, we are all about making work matter. We believe every team member has the potential to be AMAZING. We are bold, sharp, driven and, most of all, true. Join us in our quest to democratize learning and help individuals unleash their edge.

OVERVIEW: To succeed in this challenging journey, we have set up multiple co-located teams across the globe (Hyderabad, US, Europe), embracing the scaled agile framework and a microservices approach combined with the DevOps model. We have passionate engineers working full time on this new platform in Hyderabad, and it's only the beginning. You will get a chance to work with brilliant people and some of the best development and design teams, in addition to working with cutting-edge technologies such as React, Java/Node.js, Docker, Kubernetes, and AWS. We are looking for exceptional Java/Node-based full stack developers to join our team. You will work alongside the Architect and DevOps teams to form a fully autonomous development squad and be in charge of a part of the product.

OPPORTUNITY HIGHLIGHTS:
- Technical leadership: As a Principal Software Engineer, you will be responsible for technical leadership, providing guidance and mentoring to other team members, and ensuring that projects are completed on time and to the highest standards.
- Cutting-edge technology: Skillsoft is a technology-driven company that is constantly exploring new technologies to enhance the learning experience for its customers. As a Principal Software Engineer, you will have the opportunity to work with cutting-edge technology and help drive innovation.
- Agile environment: Skillsoft follows agile methodologies, which means you will be part of a fast-paced, collaborative environment where you will have the opportunity to work on multiple projects simultaneously.
- Career growth: Skillsoft is committed to helping its employees grow their careers. As a Principal Software Engineer, you will have access to a wide range of learning and development opportunities, including training programs, conferences, and mentorship.
- Impactful work: Skillsoft's mission is to empower people through learning, and as a Principal Software Engineer, you will be a key contributor to achieving this mission. You will have the opportunity to work on products and features that have a significant impact on the learning and development of individuals and organizations worldwide.

Overall, the Principal Software Engineer role at Skillsoft offers a challenging and rewarding opportunity for individuals who are passionate about technology, learning, and making a difference in the world.

SKILLS & QUALIFICATIONS:
- Minimum 9+ years of software engineering experience developing cloud-based enterprise solutions.
- Proficient in programming languages (Java, JavaScript, HTML5, CSS).
- Proficient in JavaScript frameworks (Node.js, React, Redux, Angular, Express.js).
- Proficient with frameworks (Spring Boot, stream processing).
- Strong knowledge of REST APIs, web services, and SAML integrations.
- Proficient in working with databases, preferably Postgres.
- Experienced with DevOps tools (Docker, Kubernetes, Ansible, AWS).
- Experience with code versioning tools, preferably Git (GitHub, GitLab, etc.), and the feature branch workflow.
- Working experience with Kafka and RabbitMQ (message queue systems).
- Sound knowledge of design principles and design patterns.
- Strong problem-solving and analytical skills and understanding of various data structures and algorithms.
- Must know how to code applications on Unix/Linux-based systems.
- Experience with build automation tools like Maven, Gradle, NPM, Webpack, and Grunt.
- Sound troubleshooting skills to address code bugs, performance issues, and environment issues that may arise.
- Good understanding of the common security concerns of high-volume, publicly exposed systems.
- Experience working in an Agile/Scrum environment.
- Strong analytical skills and the ability to understand complexities and how components connect and relate to each other.

OUR VALUES: WE ARE PASSIONATELY COMMITTED TO LEADERSHIP, LEARNING, AND SUCCESS. WE EMBRACE EVERY OPPORTUNITY TO SERVE OUR CUSTOMERS AND EACH OTHER AS: ONE TEAM, OPEN AND RESPECTFUL, CURIOUS, READY, TRUE.

Posted 6 days ago

Apply

6.0 - 10.0 years

12 - 15 Lacs

Bengaluru

Work from Office

We are looking for an experienced Key Account Manager (KAM) to manage and streamline SC Ops for key clients. The role involves leading a large SAP consultant team, ensuring seamless delivery, managing client expectations, and supporting business growth through new project initiatives.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Maharashtra

On-site

As a Scala Developer at our company, you will play a crucial role in designing, building, and enhancing our clients' online platform to ensure optimal performance and reliability. Your responsibilities will include researching, proposing, and implementing cutting-edge technology solutions while adhering to industry best practices and standards. You will be accountable for the resilience and availability of various products and will collaborate closely with a diverse team to achieve collective goals. To excel in this role, we are seeking a highly skilled Scala Developer with over 7 years of experience in crafting scalable and high-performance backend systems. Your expertise in functional programming, familiarity with contemporary data processing frameworks, and proficiency in working within cloud-native environments will be invaluable. You will be tasked with designing, creating, and managing backend services and APIs using Scala, optimizing existing codebases for enhanced performance, scalability, and reliability, and ensuring the development of clean, maintainable, and well-documented code. Collaboration is key in our team, and you will work closely with product managers, frontend developers, and QA engineers to deliver exceptional results. Your role will also involve conducting code reviews, sharing knowledge, and mentoring junior developers to foster a culture of continuous improvement. Experience with technologies such as Akka, Play Framework, and Kafka, as well as integration with SQL/NoSQL databases and external APIs, will be essential in driving our projects forward. Your hands-on experience with Scala and functional programming principles, coupled with your proficiency in RESTful APIs, microservices architecture, and API integration, will be critical in meeting the demands of the role. A solid grasp of concurrency, asynchronous programming, and stream processing, along with familiarity with SQL/NoSQL databases and tools like SBT or Maven, will further enhance your contributions to our team. Exposure to Git, Docker, and CI/CD pipelines, as well as comfort in Agile/Scrum environments, will be advantageous. Moreover, your familiarity with Apache Spark, Kafka, or other big data tools, along with experience in cloud platforms like AWS, GCP, or Azure, and an understanding of DevOps practices, will position you as a valuable asset in our organization. Proficiency in testing frameworks such as ScalaTest, Specs2, or Mockito will round out your skill set and enable you to deliver high-quality solutions effectively. In return, we offer a stimulating and innovative work environment where you will have ample opportunities for learning and professional growth. Join us in shaping the future of our clients' online platform and making a tangible impact in the digital realm.

Posted 1 week ago

Apply

12.0 - 16.0 years

0 Lacs

Haryana

On-site

You will be joining a renowned global digital engineering firm as a Senior Solution Architect, reporting to the Director, Consulting. Your key responsibility will be to craft innovative solutions for both new and existing clients, with a primary focus on leveraging data to fuel the architecture and strategy of Digital Experience Platforms (DXP). Your expertise will guide the development of solutions heavily anchored in CMS, CDP, CRM, loyalty, and analytics-intensive platforms, integrating ML and AI capabilities. The essence of your approach will be centered on leveraging data to create composable, insightful, and effective DXP solutions. In this role, you will be client-facing, sitting face to face with prospective customers to shape technically and commercially viable solutions. You will also lead by example as a mentor, challenge others to push their boundaries, and strive to improve your skill set in the ever-evolving landscape of omnichannel solutions. Collaboration with cross-functional teams will be a key aspect of your daily work, as you strategize, problem-solve, and communicate effectively with internal and external team members. Your mastery of written language will allow you to deliver compelling technical proposals to both new and existing clients. Your day-to-day responsibilities will include discussing technical solutions with clients, contributing to digital transformation strategies, collaborating with various teams to shape solutions based on client needs, constructing technical architectures, articulating transitions from current to future states, sharing knowledge and thought leadership within the organization, participating in discovery of technical project requirements, and estimating project delivery efforts based on your recommendations. The ideal candidate for this position will possess 12+ years of experience in design, development, and support of large-scale web applications, along with specific experience in cloud-native technologies, data architectures, customer-facing applications, client-facing technology consulting roles, and commerce platforms. A Bachelor's degree in a relevant field is required. In addition to fulfilling work, Material offers a high-impact work environment with a strong company culture and benefits. As a global company working with best-in-class brands worldwide, Material values inclusion, interconnectedness, and amplifying impact through people, perspectives, and expertise. The company focuses on learning and making an impact, creating experiences that matter, creating new value, and making a difference in people's lives. Material offers professional development, mentorship, a hybrid work mode, health and family insurance, leaves, wellness programs, and counseling sessions.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

In this role, you will contribute to the development of backend databases and frontend services as part of an Intelligent Asset Management team. You will be responsible for building cyber-secure, efficient applications that support IoT and Smart Data initiatives. This includes designing, developing, testing, and implementing APIs, microservices, and edge libraries for system communication interfaces.

Key Responsibilities:
- Design, develop, test, and implement APIs and microservices based on defined requirements.
- Build secure and scalable web applications and backend services.
- Develop edge libraries for system communication interfaces.
- Collaborate within a scrum-style team to deliver high-quality software solutions.
- Ensure integration with existing IT infrastructure, mainframe systems, and cloud services.
- Maintain and optimize relational databases and data pipelines.
- Participate in performance assessments and contribute to continuous improvement.

Primary Skills:
- Programming Languages: Proficient in at least three of the following: Go, Java, Angular, PostgreSQL, Kafka, Docker, Kubernetes, S3 programming.
- API Development: RESTful API design and implementation.
- Microservices Architecture: Experience in building and deploying microservices.
- Containerization & Orchestration: Docker, Kubernetes.
- Database Management: PostgreSQL, RDBMS, data querying and processing.
- Stream Processing: Apache Kafka or similar technologies.

Secondary Skills:
- Web Development: Angular or other modern web frameworks.
- System Integration: Experience with mainframe operations, ICL VME, and system interfaces.
- IT Infrastructure & Virtualization: Understanding of cloud platforms, virtualization, and IT support systems.
- Software Development Lifecycle: Agile methodologies, scrum practices.
- Security & Compliance: Cybersecurity principles in application development.
- Reporting & Data Management: Experience with data storage, reporting tools, and performance monitoring.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Experience in IoT, Smart Data, or Asset Management domains.
- Familiarity with mainframe systems and enterprise integration.
- Certifications in relevant technologies (e.g., Kubernetes, Java, Go, Angular).

Posted 1 week ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you a skilled Data Architect with a passion for tackling intricate data challenges from various structured and unstructured sources? Do you excel in crafting micro data lakes and spearheading data strategies at an enterprise level? If this sounds like you, we are eager to learn more about your expertise. In this role, you will be responsible for designing and constructing tailored micro data lakes specifically catered to the lending domain. Your tasks will include defining and executing enterprise data strategies encompassing modeling, lineage, and governance. You will play a crucial role in architecting robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from diverse sources like APIs, PDFs, logs, and databases. Furthermore, you will be instrumental in establishing best practices related to data quality, metadata management, and data lifecycle control. Your hands-on involvement in implementing processes, strategies, and tools will be pivotal in creating innovative products. Collaboration with engineering and product teams to align data architecture with overarching business objectives will be a key aspect of your role. To excel in this position, you should bring over 10 years of experience in data architecture and engineering. A deep understanding of both structured and unstructured data ecosystems is essential, along with practical experience in ETL, ELT, stream processing, querying, and data modeling. Proficiency in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python is a must. Additionally, expertise in cloud-native data platforms like AWS, Azure, or GCP is highly desirable, along with a solid foundation in data governance, privacy, and compliance standards. While exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, a background in fintech, lending, or regulatory data environments is also beneficial. This role offers you the chance to lead data-first transformation, develop products that drive AI adoption, and the autonomy to design, build, and scale modern data architecture. You will be part of a forward-thinking, collaborative, and tech-driven culture with access to cutting-edge tools and technologies in the data ecosystem. If you are ready to shape the future of data with us, we encourage you to apply for this exciting opportunity based in Chennai. Join us in redefining data architecture and driving innovation in the realm of structured and unstructured data sources.
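To make the batch-plus-real-time ingestion described above concrete, here is a minimal PySpark Structured Streaming sketch; the broker address, topic, schema, and lake paths are illustrative assumptions rather than details from the posting, and running it also requires the spark-sql-kafka connector package on the cluster:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("loan-events-ingest").getOrCreate()

# Hypothetical schema for a lending-domain event; in practice this would come
# from a schema registry or data contract rather than being hard-coded.
schema = (StructType()
          .add("loan_id", StringType())
          .add("amount", DoubleType())
          .add("event_time", TimestampType()))

# Real-time path: read the Kafka topic as an unbounded stream.
raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker:9092")
            .option("subscribe", "loan-events")
            .load())

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
             .select(F.from_json("json", schema).alias("e"))
             .select("e.*")
             .withColumn("event_date", F.to_date("event_time")))

# Land the stream in the micro data lake; the checkpoint makes the job restartable.
query = (events.writeStream
               .format("parquet")
               .option("path", "/lake/loan_events/")
               .option("checkpointLocation", "/lake/_checkpoints/loan_events/")
               .partitionBy("event_date")
               .outputMode("append")
               .start())
query.awaitTermination()
```

The batch path looks the same with `read`/`write` in place of `readStream`/`writeStream`, which is why the two modes are usually designed together.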

Posted 2 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

The Senior Full Stack Software Engineer role entails responsibility for software development, maintenance, monitoring, and problem resolution of both front- and back-end development solutions within .NET, Relativity, or other eDiscovery tools. This position involves participation in projects across the full SDLC, from project inception to the maintenance phase, focusing on analyzing, writing, building, and deploying software solutions of high quality. You will be accountable for creating and maintaining moderate to highly complex solutions addressing the informational and analytical needs of various groups, including data infrastructure, reporting, and applications. Your responsibilities will encompass all project lifecycle phases, such as requirements definition, solution design, application development, and system testing. You are expected to analyze end user data needs and develop user-oriented solutions that interface with existing applications. Documentation maintenance for work processes and procedures, making improvement suggestions, adhering to approved work changes, and providing backup support for projects are part of the role. Effective interaction and partnership across internal business teams, team planning, growth strategy assistance, InfoSec compliance execution, participation in system upgrades, and training system end users on business functionality are also integral. You will work with minimal supervision, making a range of established decisions, escalating to the Manager when necessary, and providing regular updates. Adaptability, quick learning, and a big-picture approach to project work are key attributes expected from you.

Minimum Education Requirements:
- Bachelor of Science in Computer Science or a related field, or comparable business/technical experience.

Minimum Experience Requirements:
- At least 7-10 years of application development experience encompassing programming, data management, collection, modeling, and interpretation across complex data sets.
- Proficiency in front-end technologies such as JavaScript, CSS3, and HTML5, and familiarity with third-party libraries like React.js, Angular, jQuery, and LESS.
- Knowledge of server-side programming languages like .NET, Java, Ruby, or Python.
- Familiarity with DBMS technology including SQL Server, Oracle, MongoDB, and MySQL, and caching mechanisms like Redis, Memcached, and Varnish.
- Ability to design, develop, and deploy full-stack web applications using both SQL and NoSQL databases, coach junior developers in the same, rapidly learn new tools, languages, and frameworks, and work with Enterprise Integration Patterns, SOA, microservices, stream processing, event-driven architecture, messaging protocols, and data engineering.
- Comfort with the software development lifecycle, testing strategies, and working independently or as part of a team.

Technical Skills:
- Proficient in HTML5, CSS3, JavaScript (ES6+), modern web frontend frameworks, state management libraries, server-side languages, RESTful API design/development, database design/management, caching mechanisms, authentication and authorization mechanisms like OAuth 2.0 and JWT, Microsoft Windows Server infrastructure, distributed systems, version control systems, CI/CD pipelines, and containerization technologies like Docker and Kubernetes.

Consilio's True North Values:
- Excellence: Making every client an advocate
- Passion: Doing because caring
- Collaboration: Winning through teamwork and communication
- Agility: Flexing, adapting, and embracing change
- People: Valuing, respecting, and investing in teammates
- Vision: Creating clarity of purpose and a clear path forward

Posted 2 weeks ago

Apply

4.0 - 9.0 years

15 - 20 Lacs

Bengaluru, Mumbai (All Areas)

Work from Office

We’re hiring a Scala Developer with 4+ years of experience in building scalable, high-performance backend systems. Candidates should be strong in functional programming, backend services, distributed systems, and cloud environments.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be responsible for building systems and APIs to collect, curate, and analyze data generated by biomedical detection dogs, devices, and patients. Your immediate work will include developing APIs and backends to handle Electronic Health Record (EHR) data, time-series sensor streams, and sensor/hardware integrations via REST APIs. Additionally, you will work on data pipelines and analytics for physiological, behavioral, and neural signals, as well as machine learning and statistical models for biomedical and detection dog research. You will also be involved in web and embedded integrations connecting software to real-world devices.

To excel in this role, you should have familiarity with domains such as signal processing, basic statistics, stream processing, online algorithms, databases (especially time-series databases like VictoriaMetrics, and SQL databases including Postgres, SQLite, and DuckDB), computer vision, and machine learning. Proficiency in Python, C++, or Rust is essential, as the stack primarily consists of Python with some modules in Rust/C++ where necessary. Firmware development is done in C/C++ (or Rust), and if you choose to work with C++/Rust, you may need to create a Python API using pybind11/PyO3.

Your responsibilities will involve developing data pipelines for real-time and batch processing, as well as building robust APIs and backends for devices, research tools, and data systems. You will handle data transformations, storage, and querying for structured and time-series datasets, evaluate and enhance ML models and analytics, and collaborate with hardware and research teams to derive insights from messy real-world data. The focus will be on ensuring data integrity and correctness rather than brute-force scaling. If you enjoy creating reliable software and working with complex real-world data, we look forward to discussing this opportunity with you.

Key Skills: backend development, computer vision, data transformations, databases, analytics, data querying, C, Python, C++, signal processing, data storage, statistical models, API development, Rust, data pipelines, firmware development, stream processing, machine learning
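Since the posting calls out online algorithms and basic statistics over sensor streams, here is a small, generic Python sketch of Welford's online mean/variance used as a streaming anomaly flag; the signal, warm-up length, and threshold are made-up illustrations, not the team's actual method:

```python
import math
from dataclasses import dataclass

@dataclass
class RunningStats:
    """Welford's online algorithm: mean/variance over a stream without storing it."""
    n: int = 0
    mean: float = 0.0
    m2: float = 0.0  # running sum of squared deviations from the current mean

    def update(self, x: float) -> None:
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def std(self) -> float:
        return math.sqrt(self.m2 / (self.n - 1)) if self.n > 1 else 0.0

def flag_anomalies(samples, z_threshold=4.0, warmup=30):
    """Yield (index, value) for samples far from the running mean.

    `samples` could be a heart-rate or accelerometer stream; the threshold and
    warm-up are illustrative choices, not clinical ones.
    """
    stats = RunningStats()
    for i, x in enumerate(samples):
        if stats.n > warmup and stats.std > 0 and abs(x - stats.mean) > z_threshold * stats.std:
            yield i, x
        stats.update(x)

# Example: a gently varying signal with one spike gets flagged.
signal = [60.0 + (i % 5) * 0.5 for i in range(100)] + [180.0] + [61.0] * 20
print(list(flag_anomalies(signal)))  # -> [(100, 180.0)]
```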

Posted 2 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Are you a hands-on Data Architect who excels at tackling intricate data challenges within structured and unstructured sources? Are you passionate about crafting micro data lakes and spearheading enterprise-wide data strategies? If this resonates with you, we are eager to learn more about your expertise. In this role, you will be responsible for designing and constructing tailored micro data lakes specific to the lending domain. Additionally, you will play a key role in defining and executing enterprise data strategies encompassing modeling, lineage, and governance. Your tasks will involve architecting and implementing robust data pipelines for both batch and real-time data ingestion, as well as devising strategies for extracting, transforming, and storing data from various sources such as APIs, PDFs, logs, and databases. Establishing best practices for data quality, metadata management, and data lifecycle control will also be part of your core responsibilities. Collaboration with engineering and product teams to align data architecture with business objectives will be crucial, as will evaluating and integrating modern data platforms and tools like Databricks, Spark, Kafka, Snowflake, AWS, GCP, and Azure. Furthermore, you will mentor data engineers and promote engineering excellence in data practices. The ideal candidate for this role should possess a minimum of 10 years of experience in data architecture and engineering, along with a profound understanding of structured and unstructured data ecosystems. Hands-on proficiency in ETL, ELT, stream processing, querying, and data modeling is essential, as is expertise in tools and languages such as Spark, Kafka, Airflow, SQL, Amundsen, Glue Catalog, and Python. Familiarity with cloud-native data platforms like AWS, Azure, or GCP is required, alongside a solid foundation in data governance, privacy, and compliance standards. A strategic mindset coupled with the ability to execute hands-on tasks when necessary is highly valued. While exposure to the lending domain, ML pipelines, or AI integrations is considered advantageous, a background in fintech, lending, or regulatory data environments is also beneficial. As part of our team, you will have the opportunity to lead data-first transformation and develop products that drive AI adoption. You will enjoy the autonomy to design, build, and scale modern data architecture within a forward-thinking, collaborative, and tech-driven culture. Additionally, you will have access to the latest tools and technologies in the data ecosystem. Location: Chennai. Experience: 10-15 Years | Full-Time | Work From Office. If you are ready to shape the future of data alongside us, we invite you to apply now and embark on this exciting journey!

Posted 2 weeks ago

Apply

4.0 - 9.0 years

2 - 3 Lacs

Sriperumbudur, Tambaram, Chennai

Work from Office

Role & responsibilities: Responsible for safely and efficiently operating, maintaining, and monitoring boiler systems that produce hot water for various applications such as heating or industrial processes. The operator ensures the equipment runs smoothly, adheres to safety regulations, and performs routine maintenance to prevent breakdowns.

Posted 3 weeks ago

Apply

7.0 - 8.0 years

9 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected Notice Period: 15 Days. Shift: (GMT+05:30) Asia/Kolkata (IST).

Must-have skills: GenAI, AWS data stack, Kinesis, open table formats, PySpark, stream processing, Kafka, MySQL, Python.

MatchMove is looking for a Technical Lead - Data Platform. You will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
- Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
- Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
- Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
- Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
- Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
- Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
- Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
- Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
- Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
- Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
- Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
- Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
- Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
- Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
- Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
- At least 7 years of experience in data engineering.
- Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
- Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
- Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
- Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
- Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
- Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
- Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains with strong engineering hygiene.
- Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie points:
- Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
- Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
- Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
- Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
- Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
- Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM.
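As a rough illustration of the Glue-based ingestion this role centers on, the skeleton below shows the standard AWS Glue PySpark job structure; the database, table, bucket, and column names are hypothetical, and a production job in this stack would more likely write to an Iceberg/Hudi table governed by Lake Formation than to plain Parquet:

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.job import Job
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical catalog table kept in sync from a transactional source by AWS DMS.
txns = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="payments_transactions"
).toDF()

# Curate: keep settled transactions and add a partition column for pruning.
curated = (txns.filter(F.col("status") == "SETTLED")
               .withColumn("txn_date", F.to_date("created_at")))

# Write the curated layer to S3; swapping this sink for an Iceberg table is the
# usual next step once open table formats are in place.
(curated.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("s3://example-curated-bucket/payments/transactions/"))

job.commit()
```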

Posted 1 month ago

Apply

3.0 - 7.0 years

2 - 6 Lacs

Gurugram

Work from Office

Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field. Minimum of 7 years of experience in data analytics or a related field. Proficiency in data analysis tools and programming languages.

Required candidate profile: Experience with data visualization tools (e.g., Tableau, Power BI). Knowledge of machine learning techniques and statistical analysis. Knowledge of stream processing for near real-time analytics is a must.

Posted 1 month ago

Apply

9.0 - 14.0 years

3 - 7 Lacs

Noida

Work from Office

We are looking for a skilled Data Engineer with 9 to 15 years of experience in the field. The ideal candidate will have expertise in designing and developing data pipelines using Confluent Kafka, ksqlDB, and Apache Flink.

Roles and Responsibilities:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources, including databases, APIs, and message queues.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identifying bottlenecks and implementing optimizations.

Requirements:
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with a majority related to ETL/ELT, big data, and Kafka.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
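The data-quality checks mentioned above often start as nothing more than a consumer that validates each event; the sketch below uses the confluent-kafka Python client purely as an illustration (broker, topic, and required fields are invented), while the posting's heavy lifting would sit in ksqlDB and Flink:

```python
import json
from confluent_kafka import Consumer

# Broker address, topic, and required fields are illustrative assumptions.
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "dq-checker",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])

REQUIRED_FIELDS = {"order_id", "amount", "event_time"}

def check(record: dict) -> list[str]:
    """Return a list of data-quality problems for one event."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "amount" in record and not isinstance(record["amount"], (int, float)):
        problems.append("amount is not numeric")
    return problems

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        event = json.loads(msg.value())
        issues = check(event)
        if issues:
            # In a real pipeline these would go to a dead-letter topic or a metrics system.
            print(f"bad record at offset {msg.offset()}: {issues}")
finally:
    consumer.close()
```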

Posted 1 month ago

Apply

11.0 - 16.0 years

40 - 45 Lacs

Pune

Work from Office

Role Description: This role is for a Senior Business Functional Analyst in Group Architecture. The role will be instrumental in establishing and maintaining bank-wide data policies, principles, standards, and tool governance. The Senior Business Functional Analyst acts as a link between the business divisions and the data solution providers to align the target data architecture with the enterprise data architecture principles and apply agreed best practices and patterns. Group Architecture partners with each division of the bank to ensure that architecture is defined, delivered, and managed in alignment with the bank's strategy and in accordance with the organization's architectural standards.

Your key responsibilities:
- Data Architecture: Work closely with stakeholders to understand their data needs, break out business requirements into implementable building blocks, and design the solution's target architecture.
- AI/ML: Identify and support the creation of AI use cases focused on delivering the data architecture strategy and data governance tooling. Identify AI/ML use cases and architect pipelines that integrate data flows, data lineage, and data quality. Embed AI-powered data quality, detection, and metadata enrichment to accelerate data discoverability. Assist in defining and driving the data architecture standards and requirements for AI that need to be enabled and used.
- GCP Data Architecture & Migration: Strong working experience with GCP data architecture is a must (BigQuery, Dataplex, Cloud SQL, Dataflow, Apigee, Pub/Sub, ...), along with an appropriate GCP architecture-level certification. Experience in handling hybrid architectures and patterns addressing non-functional requirements such as data residency, compliance (e.g., GDPR), and security and access control. Experience in developing reusable components and reference architectures using IaC (Infrastructure as Code) platforms such as Terraform.
- Data Mesh: Proficiency in Data Mesh design strategies that embrace the decentralized nature of data ownership, along with good domain knowledge to ensure that the data products developed are aligned with business goals and provide real value.
- Data Management Tooling: Assess various tools and solutions comprising data governance capabilities such as data catalogue, data modelling and design, metadata management, data quality and lineage, and fine-grained data access management. Assist in developing the medium- to long-term target state of the technologies within the data governance domain.
- Collaboration: Collaborate with stakeholders, including business leaders, project managers, and development teams, to gather requirements and translate them into technical solutions.

Your skills and experience:
- Demonstrable experience in designing and deploying AI tooling architectures and use cases.
- Extensive experience in data architecture within Financial Services.
- Strong technical knowledge of data integration patterns, batch and stream processing, data lake / data lakehouse / data warehouse / data mart architectures, caching patterns, and policy-based fine-grained data access.
- Proven experience with data management principles, data governance, data quality, data lineage, and data integration, with a focus on Data Mesh.
- Knowledge of data modelling concepts such as dimensional modelling and 3NF, and experience with systematic, structured review of data models to enforce conformance to standards.
- High-level understanding of data management solutions, e.g., Collibra, Informatica Data Governance.
- Proficiency in data modelling and experience with different data modelling tools.
- Very good understanding of streaming and non-streaming ETL and ELT approaches for data ingestion.
- Strong analytical and problem-solving skills, with the ability to identify complex business requirements and translate them into technical solutions.
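As a small, concrete example of the kind of data-quality control a governance layer on GCP might run, the sketch below queries a BigQuery table with the official Python client; the project, dataset, table, and threshold are placeholders, not details from the role:

```python
from google.cloud import bigquery

# Project, dataset, and table names are illustrative; credentials come from the
# environment (e.g. GOOGLE_APPLICATION_CREDENTIALS) as usual for GCP clients.
client = bigquery.Client(project="example-analytics-project")

# A minimal completeness check of the kind a data-quality/governance layer
# would run against a curated BigQuery table.
query = """
    SELECT
      COUNT(*)                     AS total_rows,
      COUNTIF(customer_id IS NULL) AS null_customer_ids
    FROM `example-analytics-project.curated.transactions`
    WHERE ingest_date = CURRENT_DATE()
"""

row = list(client.query(query).result())[0]
null_ratio = row.null_customer_ids / row.total_rows if row.total_rows else 0.0
print(f"rows={row.total_rows} null_customer_ratio={null_ratio:.2%}")

# A governance pipeline might publish this as a data-quality metric or fail the
# run when the ratio breaches an agreed threshold.
if null_ratio > 0.01:
    raise ValueError("customer_id completeness below agreed threshold")
```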

Posted 1 month ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Dubai, Pune, Chennai

Hybrid

Job Title: Confluent CDC System Analyst

Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst / Tech Lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka, ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions.

Key Responsibilities:
- Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
- Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
- Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
- Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
- Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
- Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
- Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
- Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
- Stay updated on emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
- Extensive experience with Confluent Kafka and Change Data Capture (CDC) solutions.
- Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
- Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
- Hands-on experience with IBM Analytics.
- Solid understanding of core banking systems, transactional databases, and financial data flows.
- Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
- Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
- Strong experience in event-driven architectures, microservices, and API integrations.
- Familiarity with security protocols, compliance, and data governance in banking environments.
- Excellent problem-solving, leadership, and stakeholder communication skills.
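For orientation, CDC events produced by Debezium-style connectors arrive on Kafka as an envelope carrying the operation type plus before/after row images. The consumer below is a minimal, illustrative Python sketch of unpacking that envelope; the broker, topic, and table names are invented:

```python
import json
from confluent_kafka import Consumer

# Broker and topic names are illustrative; Debezium typically writes one topic
# per captured table, e.g. <prefix>.<schema>.<table>.
consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "cdc-consumer",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["corebank.public.accounts"])

def handle_change(event: dict) -> None:
    """Unpack the Debezium envelope: op code plus before/after row images."""
    payload = event.get("payload", event)  # with or without the JSON schema wrapper
    op, before, after = payload.get("op"), payload.get("before"), payload.get("after")
    if op in ("c", "r"):          # insert / initial snapshot read
        print("upsert", after)
    elif op == "u":               # update
        print("update", before, "->", after)
    elif op == "d":               # delete
        print("delete", before)

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        if msg.value() is None:   # tombstone record emitted after deletes
            continue
        handle_change(json.loads(msg.value()))
finally:
    consumer.close()
```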

Posted 1 month ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Hyderabad

Work from Office

6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.

Posted 1 month ago

Apply

8.0 - 13.0 years

5 - 10 Lacs

Bengaluru

Work from Office

6+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements. Ensure performance tuning, fault tolerance, and reliability of distributed data processing systems.

Posted 1 month ago

Apply

8.0 - 13.0 years

8 - 12 Lacs

Hyderabad

Work from Office

10+ years of experience with Java Spark. Strong understanding of distributed computing, big data principles, and batch/stream processing. Proficiency in working with AWS services such as S3, EMR, Glue, Lambda, and Athena. Experience with Data Lake architectures and handling large volumes of structured and unstructured data. Familiarity with various data formats. Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Design, develop, and optimize large-scale data processing pipelines using Java Spark. Build scalable solutions to manage data ingestion, transformation, and storage in AWS-based Data Lake environments. Collaborate with data architects and analysts to implement data models and workflows aligned with business requirements.

Posted 1 month ago

Apply

8.0 - 13.0 years

4 - 8 Lacs

Hyderabad

Work from Office

This role will be instrumental in building and maintaining robust, scalable, and reliable data pipelines using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink. The ideal candidate will have a strong understanding of data streaming concepts, experience with real-time data processing, and a passion for building high-performance data solutions. This role requires excellent analytical skills, attention to detail, and the ability to work collaboratively in a fast-paced environment.

Essential Responsibilities:
- Design and develop data pipelines for real-time and batch data ingestion and processing using Confluent Kafka, ksqlDB, Kafka Connect, and Apache Flink.
- Build and configure Kafka Connectors to ingest data from various sources (databases, APIs, message queues, etc.) into Kafka.
- Develop Flink applications for complex event processing, stream enrichment, and real-time analytics.
- Develop and optimize ksqlDB queries for real-time data transformations, aggregations, and filtering.
- Implement data quality checks and monitoring to ensure data accuracy and reliability throughout the pipeline.
- Monitor and troubleshoot data pipeline performance, identify bottlenecks, and implement optimizations.
- Automate data pipeline deployment, monitoring, and maintenance tasks.
- Stay up to date with the latest advancements in data streaming technologies and best practices.
- Contribute to the development of data engineering standards and best practices within the organization.
- Participate in code reviews and contribute to a collaborative and supportive team environment.
- Work closely with other architects and tech leads in India and the US and create POCs and MVPs.
- Provide regular updates on tasks, status, and risks to the project manager.

The experience we are looking to add to our team (required):
- Bachelor's degree or higher from a reputed university.
- 8 to 10 years of total experience, with the majority related to ETL/ELT, big data, Kafka, etc.
- Proficiency in developing Flink applications for stream processing and real-time analytics.
- Strong understanding of data streaming concepts and architectures.
- Extensive experience with Confluent Kafka, including Kafka Brokers, Producers, Consumers, and Schema Registry.
- Hands-on experience with ksqlDB for real-time data transformations and stream processing.
- Experience with Kafka Connect and building custom connectors.
- Extensive experience in implementing large-scale data ingestion and curation solutions.
- Good hands-on experience with a big data technology stack on any cloud platform.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and as part of a team.

Good to have:
- Experience in Google Cloud.
- Healthcare industry experience.
- Experience in Agile.
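Because the role pairs ksqlDB with Flink for streaming transformations, the sketch below shows an equivalent continuous filter expressed as Flink SQL through PyFlink's Table API; the topic, fields, and threshold are illustrative, and the Kafka SQL connector jar would need to be on the job's classpath:

```python
from pyflink.table import EnvironmentSettings, TableEnvironment

# Streaming-mode table environment; the Kafka SQL connector must be available
# to the job (e.g. via the pipeline.jars configuration) for this to run.
t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Source: a Kafka topic of JSON payment events (names are made up).
t_env.execute_sql("""
    CREATE TABLE payments (
        payment_id STRING,
        account_id STRING,
        amount     DOUBLE,
        ts         TIMESTAMP(3)
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'payments',
        'properties.bootstrap.servers' = 'broker:9092',
        'properties.group.id' = 'flink-filter',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Sink: the print connector stands in for a real sink (another topic, a lake table, etc.).
t_env.execute_sql("""
    CREATE TABLE large_payments (
        payment_id STRING,
        account_id STRING,
        amount     DOUBLE
    ) WITH ('connector' = 'print')
""")

# The streaming job itself: a continuous filter/projection over the topic.
t_env.execute_sql("""
    INSERT INTO large_payments
    SELECT payment_id, account_id, amount
    FROM payments
    WHERE amount > 10000
""").wait()
```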

Posted 1 month ago

Apply

3.0 - 9.0 years

18 - 22 Lacs

Hyderabad

Work from Office

At Skillsoft, we are all about making work matter. We believe every team member has the potential to be AMAZING. We are bold, sharp, driven and, most of all, true. Join us in our quest to democratize learning and help individuals unleash their edge.

OVERVIEW: To succeed in this challenging journey, we have set up multiple co-located teams across the globe (Hyderabad, US, Europe), embracing the scaled agile framework and a microservices approach combined with the DevOps model. We have passionate engineers working full time on this new platform in Hyderabad, and it's only the beginning. You will get a chance to work with brilliant people and some of the best development and design teams, in addition to working with cutting-edge technologies such as React, Java/Node.js, Docker, Kubernetes, and AWS. We are looking for exceptional Java/Node-based full stack developers to join our team. You will work alongside the Architect and DevOps teams to form a fully autonomous development squad and be in charge of a part of the product. Overall, the Principal Software Engineer role at Skillsoft offers a challenging and rewarding opportunity for individuals who are passionate about technology, learning, and making a difference in the world.

OPPORTUNITY HIGHLIGHTS:
- Technical leadership: As a Principal Software Engineer, you will be responsible for technical leadership, providing guidance and mentoring to other team members, and ensuring that projects are completed on time and to the highest standards.
- Cutting-edge technology: Skillsoft is a technology-driven company that is constantly exploring new technologies to enhance the learning experience for its customers. As a Principal Software Engineer, you will have the opportunity to work with cutting-edge technology and help drive innovation.
- Agile environment: Skillsoft follows agile methodologies, which means you will be part of a fast-paced, collaborative environment where you will have the opportunity to work on multiple projects simultaneously.
- Career growth: Skillsoft is committed to helping its employees grow their careers. As a Principal Software Engineer, you will have access to a wide range of learning and development opportunities, including training programs, conferences, and mentorship.
- Impactful work: Skillsoft's mission is to empower people through learning, and as a Principal Software Engineer, you will be a key contributor to achieving this mission. You will have the opportunity to work on products and features that have a significant impact on the learning and development of individuals and organizations worldwide.

SKILLS & QUALIFICATIONS:
- Minimum 9+ years of software engineering experience developing cloud-based enterprise solutions.
- Proficient in programming languages (Java, JavaScript, HTML5, CSS).
- Proficient in JavaScript frameworks (Node.js, React, Redux, Angular, Express.js).
- Proficient with frameworks (Spring Boot, stream processing).
- Strong knowledge of REST APIs, web services, and SAML integrations.
- Proficient in working with databases, preferably Postgres.
- Experienced with DevOps tools (Docker, Kubernetes, Ansible, AWS).
- Experience with code versioning tools, preferably Git (GitHub, GitLab, etc.), and the feature branch workflow.
- Working experience with Kafka and RabbitMQ (message queue systems).
- Sound knowledge of design principles and design patterns.
- Strong problem-solving and analytical skills and understanding of various data structures and algorithms.
- Must know how to code applications on Unix/Linux-based systems.
- Experience with build automation tools like Maven, Gradle, NPM, Webpack, and Grunt.
- Sound troubleshooting skills to address code bugs, performance issues, and environment issues that may arise.
- Good understanding of the common security concerns of high-volume, publicly exposed systems.
- Experience working in an Agile/Scrum environment.
- Strong analytical skills and the ability to understand complexities and how components connect and relate to each other.

OUR VALUES: WE ARE PASSIONATELY COMMITTED TO LEADERSHIP, LEARNING, AND SUCCESS. WE EMBRACE EVERY OPPORTUNITY TO SERVE OUR CUSTOMERS AND EACH OTHER AS: ONE TEAM, OPEN AND RESPECTFUL, CURIOUS, READY, TRUE.

Posted 1 month ago

Apply

3.0 - 7.0 years

7 - 11 Lacs

Chennai

Work from Office

Career Area: Technology, Digital and Data

Your Work Shapes the World at Caterpillar Inc. When you join Caterpillar, you're joining a global team who cares not just about the work we do but also about each other. We are the makers, problem solvers, and future world builders who are creating stronger, more sustainable communities. We don't just talk about progress and innovation here, we make it happen, with our customers, where we work and live. Together, we are building a better world, so we can all enjoy living in it.

JOB PURPOSE: The Software Engineer contributes to the design, development and deployment of Caterpillar's state-of-the-art digital platform. This position will build a world-class platform to host a wide range of digital applications.

JOB RELATED STATISTICS: Indeterminate

JOB DUTIES: Responsibilities of the incumbents are across functional lines, with individuals assigned to new program development and/or maintenance of existing mobile, web, cloud, server and/or distributed computing systems.
1. Competent to perform all programming, project management, and development assignments without close supervision; normally assigned the more complex aspects of systems work.
2. Works directly on complex application/technical problem identification and resolution, including responding to off-shift and weekend support calls.
3. Works independently on complex systems or infrastructure components that may be used by one or more applications or systems.
4. Drives application development focused around delivering business-valuable features.
5. Maintains high standards of software quality within the team by establishing good practices and habits.
6. Identifies and encourages areas for growth and improvement within the team.
7. Communicates with end users and internal customers to help direct development, debugging, and testing of application software for accuracy, integrity, interoperability, and completeness.
8. Performs integrated testing and customer acceptance testing of components that requires careful planning and execution to ensure timely, quality results.
9. Performs other job duties as assigned by Caterpillar management from time to time.

The position manages the completion of its own work assignments and coordinates work with others. Based on past experience and knowledge, the incumbent normally works independently with minimal management input and review of end results. Typical customers include Caterpillar customers, dealers, other external companies who purchase services offered by Caterpillar, as well as internal business unit and/or service center groups. The position is challenged to quickly and correctly identify problems that may not be obvious. The incumbent solves problems by determining the best course of action, within departmental guidelines, from many existing solutions. The incumbent sets priorities and establishes a work plan in order to complete broadly defined assignments and achieve desired results. The position participates in brainstorming sessions focused on developing new approaches to meeting quality goals in the measure(s) stated.

Basic qualifications:
- A four-year degree from an accredited college or university.
- One year or more of software development experience, or a master's degree in Computer Science or a related field.
- One year or more of experience in designing and developing software applications in Java or Scala, or a master's degree in Computer Science or a related field.

Top candidates will also have proven experience in some of the following:
- 6 months to 1 year of Angular (web) or Ionic (mobile app) project experience and interest in working on mobile application development (must).
- At least 6 months of hands-on experience in native iOS or native Android (good to have).
- Exposure to React Native (good to have).
- Designing, developing, deploying and maintaining software at scale.
- Developing software applications using relational and NoSQL databases.
- Application architectural patterns, such as MVC, microservices, event-driven, etc.
- Deploying software using CI/CD tools such as Jenkins, GoCD, Azure DevOps, etc.
- Deploying and maintaining software using public clouds such as AWS or Azure.
- Working within an Agile framework (ideally Scrum).

Strong understanding and/or experience in some of the following:
- Batch or stream processing systems such as Apache Spark, Flink, Akka, Storm.
- Message brokers such as Kafka, RabbitMQ, AWS SQS, AWS SNS, Apache ActiveMQ, Kinesis.
- Designing well-defined RESTful APIs.
- Writing API proxies on platforms such as Apigee Edge, AWS API Gateway or Azure API Gateway.
- Hands-on experience with API tools such as Swagger, Postman and Assertible.
- Test-driven development and behavior-driven development.
- Hands-on experience with testing tools such as Selenium and Cucumber and their integration into CI/CD pipelines.
- Datastores such as MongoDB, Cassandra, Redis, Elasticsearch, MySQL, Oracle.

Must demonstrate solid knowledge of computer science fundamentals like data structures and algorithms. Ability to work under pressure and within time constraints. Passion for technology and an eagerness to contribute to a team-oriented environment. A bachelor's degree in Computer Science, Electrical Engineering, or a related field is required.

Posting Dates: June 18, 2025 - June 24, 2025. Caterpillar is an Equal Opportunity Employer. Not ready to apply? Join our Talent Community.

Posted 1 month ago

Apply

12.0 - 15.0 years

55 - 60 Lacs

Ahmedabad, Chennai, Bengaluru

Work from Office

Dear Candidate,

We are hiring a Data Platform Engineer to build and maintain scalable, secure, and reliable data infrastructure for analytics and real-time processing.

Key Responsibilities:
- Design and manage data pipelines, storage layers, and ingestion frameworks.
- Build platforms for batch and streaming data processing (Spark, Kafka, Flink).
- Optimize data systems for scalability, fault tolerance, and performance.
- Collaborate with data engineers, analysts, and DevOps to enable data access.
- Enforce data governance, access controls, and compliance standards.

Required Skills & Qualifications:
- Proficiency with distributed data systems (Hadoop, Spark, Kafka, Airflow).
- Strong SQL and experience with cloud data platforms (Snowflake, BigQuery, Redshift).
- Knowledge of data warehousing, lakehouse, and ETL/ELT pipelines.
- Experience with infrastructure as code and automation.
- Familiarity with data quality, security, and metadata management.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you.

Srinivasa Reddy Kandi
Delivery Manager
Integra Technologies
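Pipelines like those listed above are commonly orchestrated with Airflow; the DAG below is a generic, illustrative skeleton (task names, schedule, and steps are placeholders) rather than anything specific to this employer:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Task logic is stubbed out; in a real platform each step would call Spark, the
# warehouse, or a data-quality framework instead of printing.
def extract(**_):
    print("pull yesterday's partition from the source system")

def transform(**_):
    print("run the Spark/SQL transformation into the curated layer")

def publish(**_):
    print("refresh downstream marts and emit data-quality metrics")

with DAG(
    dag_id="daily_curated_load",          # placeholder name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_publish = PythonOperator(task_id="publish", python_callable=publish)

    # Linear dependency chain: extract -> transform -> publish.
    t_extract >> t_transform >> t_publish
```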

Posted 1 month ago

Apply