
31 Data Streaming Jobs

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Data Architect at Cigna International Markets, your primary responsibility is to define commercially aware and technically astute solutions that align with the architectural direction while considering project delivery constraints. You will be an integral part of the Architecture function, collaborating with senior stakeholders to establish strategic direction and ensure that business solutions reflect this intent. Your role will involve leading and defining effective business solutions within complex project environments, showcasing the ability to cultivate strong relationships across Business, IT, and 3rd Party stakeholders. Your main duties and responsibilities will include performing key enterprise-wide Data Architecture tasks within International Markets, particularly focusing on on-premise and cloud solution deployments. You will engage proactively with various stakeholders to ensure that business investments result in cost-effective and suitable data-driven solutions. Additionally, you will assist sponsors in creating compelling business cases for change and work with Solution Architects to define data solution designs that meet business and operational expectations. As a Data Architect, you will own and manage data models and design artifacts, offering guidance on best practices and standards for customer-centric data delivery and management. You will advocate for data-driven design within an agile delivery framework and actively participate in the full project lifecycle, from shaping estimates to governing solutions during development. Furthermore, you will be responsible for identifying and managing risks, issues, and assumptions throughout the project lifecycle and play a lead role in selecting 3rd Party solutions. Your skills and experience should include a minimum of 10 years in IT with 5 years in a Data Architecture or Data Design role. You should have experience leading data design projects and delivering significant assets to organizations such as Data Warehouse, Data Lake, or Customer 360 Data Platform. Proficiency in various data capabilities like data modeling, database design, data migration, and data integration (ETL/ELT and data streaming) is essential. Familiarity with toolsets and platforms like AWS, SQL Server, Qlik, and Collibra is preferred. A successful track record of working in globally dispersed teams, technical acumen across different domains, and a collaborative mindset are desirable attributes. Your commercial awareness, financial planning skills, and ability to work with diverse stakeholders to achieve mutually beneficial solutions will be crucial in this role. Join Cigna Healthcare, a division of The Cigna Group, and contribute to our mission of advocating for better health and improving lives.,

Posted 1 day ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Senior Platform Engineer at Kenvue Data Platforms, you will have an exciting opportunity to be part of our growing Data & Analytics product line team. Your role involves collaborating closely with various teams such as Business Partners, Product Owners, Data Strategy, Data Platform, Data Science, and Machine Learning (MLOps) to drive innovative data products for end users. You will play a key role in shaping the overall solution and data platforms, ensuring their stability, responsiveness, and alignment with business and cloud computing needs. Your expertise will be crucial in optimizing business outcomes and contributing to the growth and success of the organization.

Your responsibilities will include providing leadership for data platforms in partnership with architecture teams, conducting proofs of concept to deliver secure and scalable platforms, staying updated on emerging technologies, mentoring other platform engineers, and focusing on the execution and delivery of reliable data platforms. You will work closely with Business Analytics leaders to understand business needs and create value through technology. Additionally, you will lead data platform operations, build next-generation data and analytics capabilities, and drive the adoption and scaling of data products within the organization.

To be successful in this role, you should have an undergraduate degree in Technology, Computer Science, applied data sciences, or related fields, with an advanced degree being preferred. You should possess strong analytical skills, effective communication abilities, and a proven track record in developing and maintaining data platforms. Experience with cloud platforms such as Azure, GCP, and AWS, cloud-based databases, data streaming platforms, and Agile methodology will be essential. Your ability to define the platform tech stack, prioritize work items, and work effectively in a diverse and inclusive company culture will be critical to your success in this role.

If you are passionate about leveraging data and technology to drive business growth, make a positive impact on personal health, and shape the future of data platforms, then this role at Kenvue Data Platforms is the perfect opportunity for you. Join us in our mission to empower millions of people every day through insights, innovation, and care. We look forward to welcoming you to our team!

Location: Asia Pacific-India-Karnataka-Bangalore
Function: Digital Product Development

Qualifications:
- Undergraduate degree in Technology, Computer Science, applied data sciences, or related fields; advanced degree preferred
- Strong interpersonal and communication skills; ability to explain digital concepts to business leaders and vice versa
- 4 years of data platforms experience in Consumer/Healthcare Goods companies
- 6 years of progressive experience in developing and maintaining data platforms
- Minimum 5 years of hands-on experience with cloud platforms and cloud-based databases
- Experience with data streaming platforms, microservices, and data integration
- Proficiency in Agile methodology within a DevSecOps model
- Ability to define the platform tech stack to address data challenges
- Proven track record of delivering high-profile projects within defined resources
- Commitment to diversity, inclusion, and equal opportunity employment

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At JP Morgan Chase, you will play a crucial role in delivering end-to-end data pipeline solutions on cloud infrastructure to enhance the digital banking experience for our customers. Your expertise will help us leverage the latest technologies and industry best practices to build innovative business products and ensure a seamless and secure banking environment. Your responsibilities will include using domain modeling techniques to create top-tier business products, structuring software for easy understanding and evolution, and implementing scalable architectural patterns to avoid single points of failure. You will develop secure code to safeguard our customers and systems from malicious activities and promptly address and resolve any issues that may arise. Additionally, you will focus on optimizing data processing, monitoring system performance, and ensuring reliable and efficient operations. Your role will also involve updating technologies and patterns continuously, supporting products throughout their lifecycle, and managing incidents effectively to minimize downtime for end-users. To excel in this role, you should have formal training or certification in data engineering concepts, along with recent hands-on experience as a data engineer. Proficiency in coding with Python, designing effective tests, and strong communication skills in English are essential. Experience with cloud technologies, distributed systems, data transformation frameworks, and data pipeline orchestration tools will be beneficial for this position. Moreover, your ability to manage large volumes of data, optimize data processing, and work with event-based architecture, data streaming, and messaging frameworks will be valuable. You should also be capable of coaching team members on coding practices, design principles, and implementation patterns, as well as managing stakeholders and prioritizing tasks across multiple work streams. Preferred qualifications include experience in a highly regulated environment, familiarity with AWS cloud technologies, expertise in data governance frameworks, and an understanding of incremental data processing and versioning. Knowledge of RESTful APIs and web technologies will be an added advantage for this role.,

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Vadodara, Gujarat

On-site

The purpose of this role is to design, test, and maintain software programs for operating systems or applications that need to be deployed at a client end, and to ensure they meet 100% of quality assurance parameters.

You should have experience that demonstrates proficiency and ease with one or more programming languages, quality assurance, scripting languages, and operating systems. Solid hands-on development experience in backend technologies including Java, J2EE, SQL, and the related technology stack, preferably incorporating open-source libraries, is required. Hands-on experience with Spring Framework, Spring Boot, MongoDB, and JPA / Hibernate is a strong plus. Exposure to frameworks like Karate and TestNG is good to have for carrying out QA tasks, and knowledge of test automation frameworks is also a strong plus. A strong foundation in data structures, algorithms, problem-solving, and complexity analysis is expected. You should possess strong design, analytical, programming, and communication skills, along with an aptitude for building stable solutions. Knowledge of writing unit test cases using frameworks like JUnit / TestNG is desired. Some demonstrated experience with n-tier web application development and experience with the latest JDK is desired. Java / J2EE certification is a plus. Experience with web services standards and related technologies (XML, JSON, REST, SOAP, WS-*, AXIS, JERSEY) is nice to have. Demonstrable experience utilizing object-oriented patterns and design best practices is a strong plus. Exposure to tools like Postman or any other REST client is desired. Linux skills are required. Working knowledge of Continuous Integration / Delivery and Test-Driven Development is good to have. Knowledge of microservices and hands-on experience with container platforms like Kubernetes, Docker, and OpenShift would be a strong plus. Hands-on experience in distributed architecture and data streaming approaches like Kafka and RabbitMQ is a strong plus.

Mandatory Skills: Fullstack Java Enterprise
Experience: 5-8 Years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

You will be part of a dynamic team at JP Morgan Chase, dedicated to providing exceptional value and a seamless experience to customers as a trusted financial institution. Chase UK, our digital banking arm, is at the forefront of transforming the banking experience through intuitive and enjoyable customer journeys. Leveraging the strong foundation of trust built by millions of customers in the US, we are rapidly expanding our presence in the UK and soon across Europe, shaping the bank of the future. This is your opportunity to join us and create a significant impact. Your responsibilities will include delivering end-to-end data pipeline solutions on cloud infrastructure by harnessing the latest technologies and industry best practices. You will utilize domain modeling techniques to develop top-notch business products, structuring software for easy understanding, testing, and evolution. Building solutions that are resilient and scalable, you will focus on developing secure code to safeguard customers and the institution from potential threats. Timely investigation and resolution of issues, along with ensuring zero downtime during releases, will be crucial aspects of your role. Optimizing data processing, monitoring performance, and ensuring reliable and efficient systems operation will also be part of your duties. Continuous learning and updating of technologies and patterns, as well as providing support throughout the product lifecycle, including production and incident management, will be key components of your role. To excel in this role, you should possess formal training or certification in data engineering concepts, along with recent hands-on experience as a data engineer. Proficiency in coding with Python, designing effective tests, and strong written and verbal communication skills in English are essential. Experience with cloud technologies, distributed systems, data transformation frameworks, and data pipeline orchestration tools will be beneficial. Managing large volumes of data, optimizing data processing, and understanding event-based architecture, data streaming, and messaging frameworks are also important skills. Your ability to coach team members, manage stakeholders, prioritize effectively across multiple work streams, and adapt to a fast-paced environment will be valuable. Preferred qualifications include experience in a highly regulated industry, familiarity with AWS cloud technologies, expertise in data governance frameworks, understanding of incremental data processing and versioning, and knowledge of RESTful APIs and web technologies.,

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

At JP Morgan Chase, we prioritize providing exceptional value and a seamless experience to our customers as a trusted financial institution. Chase UK was established with the goal of revolutionizing digital banking through intuitive and enjoyable customer journeys. Backed by the trust of millions of customers in the US, our presence in the UK is rapidly expanding, with plans to extend across Europe. We are in the process of building the bank of the future from scratch, offering you the opportunity to join us and contribute significantly to our mission. As a part of our team, your responsibilities will include delivering end-to-end data pipeline solutions on cloud infrastructure by leveraging cutting-edge technologies and industry best practices. You will apply domain modeling techniques to develop top-notch business products and structure software for enhanced understanding, testing, and evolution. Building solutions with scalable architectural patterns to avoid single points of failure, you will also prioritize developing secure code to safeguard our customers and organization from malicious threats. Timely investigation and resolution of issues, ensuring they do not recur, and facilitating zero downtime releases for end-users are crucial aspects of your role. Additionally, optimizing data reading and writing, monitoring performance, and updating technologies and patterns continuously will be part of your daily tasks. Supporting products throughout their lifecycle, including production and incident management, will also fall under your purview. To excel in this role, you should possess formal training or certification in data engineering concepts, along with recent hands-on professional experience as a data engineer. Proficiency in coding with Python, designing and implementing effective tests, and excellent written and verbal communication skills in English are essential requirements. Experience with cloud technologies, distributed systems, data transformation frameworks, and data pipeline orchestration tools is vital. Managing large volumes of data, optimizing data processing, understanding event-based architecture, data streaming, and messaging frameworks are key competencies we seek. Your ability to coach team members on coding practices, design principles, and implementation patterns, as well as manage stakeholders and prioritize effectively across multiple work streams, will be highly valued. Additionally, preferred qualifications for this role include experience in a highly regulated environment/industry, familiarity with AWS cloud technologies, expertise with data governance frameworks, understanding of incremental data processing and versioning, and knowledge of RESTful APIs and web technologies.,

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a Senior Azure Cloud Data Engineer based in Bangalore, you will be instrumental in the processing and analysis of IoT data derived from our connected products. Your primary objective will be to provide valuable insights to both our internal teams and external clients. Your core responsibilities will include the development and maintenance of scalable and high-performance analytical data solutions leveraging Azure cloud technologies. With 3+ years of experience in Azure Analytics tools such as Data Factory, Synapse, and Event Hubs, you will possess strong proficiency in SQL, Python, and PySpark. Your expertise will extend to ETL/ELT processes, data streaming utilizing Kafka/Event Hubs, and handling unstructured data. A sound understanding of data modeling, data governance, and real-time processing will be crucial in this role. Apart from technical proficiencies, you will demonstrate soft skills such as a strong analytical and problem-solving mindset, exceptional verbal and written communication skills, and the ability to work effectively both independently and as part of a team. Your attention to detail, quality-focused approach, organizational abilities, and multitasking skills will be key to success in this role. Furthermore, your adaptability to a fast-paced and evolving environment, coupled with a self-motivated and proactive attitude, will be highly valued. If you are seeking a challenging opportunity to work in a dynamic environment that encourages innovation and collaboration, this position is ideal for you. Join our team and be part of a forward-thinking organization dedicated to leveraging cutting-edge technologies to drive impactful business outcomes.,
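For context on the kind of pipeline this role describes, here is a minimal, illustrative PySpark Structured Streaming sketch that reads IoT telemetry from an Azure Event Hubs Kafka-compatible endpoint and lands it as a bronze dataset. It is not taken from the posting; the namespace, topic, schema, connection string, and paths are all placeholder assumptions.

```python
# Minimal sketch (assumptions: an Event Hubs namespace exposed via its
# Kafka-compatible endpoint; all names and the connection string are placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("iot-stream").getOrCreate()

schema = StructType([
    StructField("device_id", StringType()),
    StructField("temperature", DoubleType()),
    StructField("ts", StringType()),
])

jaas = ('org.apache.kafka.common.security.plain.PlainLoginModule required '
        'username="$ConnectionString" password="<EVENT_HUBS_CONNECTION_STRING>";')

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "<namespace>.servicebus.windows.net:9093")
       .option("kafka.security.protocol", "SASL_SSL")
       .option("kafka.sasl.mechanism", "PLAIN")
       .option("kafka.sasl.jaas.config", jaas)
       .option("subscribe", "iot-telemetry")
       .load())

# Parse the JSON payload into typed columns.
events = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
             .select("e.*"))

query = (events.writeStream.format("delta")           # or "parquet" outside Databricks
         .option("checkpointLocation", "/tmp/chk/iot")
         .outputMode("append")
         .start("/tmp/iot_bronze"))
```

In a Databricks or Synapse environment the sink would typically be a Delta table and the checkpoint location a cloud storage path rather than a local /tmp directory.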

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a passionate and experienced Software Engineer, you have the exciting opportunity to join the API & Exports team at Meltwater. This team plays a crucial role in enabling programmatic access to data, handling thousands of exports daily, enhancing API usability, and managing API integrations and performance at scale. Your work will involve expanding and optimizing export functionalities, developing scalable APIs, and implementing robust monitoring and management practices. Join this high-impact team located at the core of the data delivery platform. Your responsibilities will include owning the design, development, and optimization of API and export features, collaborating with product managers and senior engineers to define functionality and scale, enhancing developer experience by simplifying API consumption and integration, participating in building export pipelines, streaming architectures, and webhook integrations, maintaining high observability and reliability standards using tools like Coralogix, CloudWatch, and Grafana, as well as contributing to on-call rotations and incident response for owned services. To excel in this role, you should bring at least 5 years of software engineering experience with a strong focus on Golang (preferred), Java, or C++, experience in designing and developing RESTful APIs, familiarity with cloud-native applications (preferably AWS), a good understanding of microservice architecture and backend design principles, solid knowledge of Postgres, Redis, and ideally DynamoDB. It would be advantageous if you also have experience with asynchronous or event-driven architectures using tools like SQS, SNS, or webhooks, exposure to DevOps workflows and tools such as Terraform, Docker, Kubernetes, etc., experience working with data exports, reporting systems, or data streaming, expertise in improving developer experience around APIs (e.g., OpenAPI, Swagger, static site generators), familiarity with JWT authentication, API gateways, and rate limiting strategies, experience in accessibility and compliance standards for APIs and data handling, and proficiency with observability tools and practices. Meltwater's tech stack includes languages like Golang and some JavaScript/TypeScript, infrastructure on AWS with services like S3, Lambda, SQS, SNS, CloudFront, and Kubernetes (Helm), databases such as Postgres, Redis, DynamoDB, monitoring tools like Coralogix, Grafana, OpenSearch, and CI/CD & IaC practices through GitHub Actions and Terraform. Joining Meltwater offers you flexible paid time off options, comprehensive health insurance, employee assistance programs, a CalmApp subscription, a hybrid work style, a family leave program, inclusive community, ongoing professional development opportunities, and a vibrant work environment in Hitec city, Hyderabad. Embrace the opportunity to make a difference, learn, grow, and succeed in a diverse and innovative environment at Meltwater.,

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior ETL & Data Streaming Engineer at DataFlow Group, you will have the opportunity to utilize your extensive expertise in designing, developing, and maintaining robust data pipelines. With over 10 years of experience in the field, you will play a pivotal role in ensuring the scalability, fault-tolerance, and performance of our ETL processes. Your responsibilities will include architecting and building both batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis. You will collaborate closely with data architects, data scientists, and business stakeholders to translate data requirements into efficient pipeline solutions and ensure data quality, integrity, and security across all storage solutions. In addition to monitoring, troubleshooting, and optimizing existing data pipelines, you will also be responsible for developing and maintaining comprehensive documentation for all ETL and streaming processes. Your role will involve implementing data governance policies and best practices within the Data Lake and Data Warehouse environments, as well as mentoring junior engineers to foster a culture of technical excellence and continuous improvement. To excel in this role, you should have a strong background in data engineering, with a focus on ETL, ELT, and data pipeline development. Your deep expertise in ETL tools, data streaming technologies, and AWS data services will be essential for success. Proficiency in SQL and at least one scripting language for data manipulation, along with strong database skills, will also be valuable assets in this position. If you are a proactive problem-solver with excellent analytical skills and strong communication abilities, this role offers you the opportunity to stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming. Join us at DataFlow Group and be part of a team dedicated to making informed, cost-effective decisions through cutting-edge data solutions.,
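As a rough illustration of the streaming side of such a pipeline (the posting names AWS Kinesis as one option), the snippet below publishes JSON events to a Kinesis stream with boto3. The stream name, region, and record fields are hypothetical, not details from the posting.

```python
# Minimal sketch (not from the posting): publishing events to an AWS Kinesis
# stream with boto3. Stream name, region, and fields are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict, stream_name: str = "applicant-events") -> None:
    """Send one JSON event; PartitionKey controls shard routing."""
    kinesis.put_record(
        StreamName=stream_name,
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=str(event.get("applicant_id", "unknown")),
    )

if __name__ == "__main__":
    publish_event({"applicant_id": 42, "status": "VERIFIED"})
```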

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

Job Description: As a Senior Azure Cloud Data Engineer based in Bangalore with a hybrid working model, you will have a pivotal role in processing and analyzing IoT data generated by our connected products. Your primary objective will be to derive meaningful insights from this data to cater to the needs of both our internal teams and external clients. Your core responsibilities will revolve around the creation and upkeep of scalable, high-performance analytical data solutions utilizing Azure cloud technologies. Key Skills: With a minimum of 3 years of hands-on experience in utilizing Azure Analytics tools such as Data Factory, Synapse, and Event Hubs, you will be well-versed in these technologies. Proficiency in SQL, Python, and PySpark is essential for this role. Your expertise should also extend to ETL/ELT processes, data streaming technologies like Kafka/Event Hubs, and handling unstructured data. A sound understanding of data modeling, data governance, and real-time data processing is crucial. Familiarity with DevOps practices, CI/CD pipelines, and Agile methodologies will be an added advantage. Soft Skills: A strong analytical acumen coupled with exceptional problem-solving skills will be key strengths in this role. Your communication skills, both verbal and written, should be exemplary. The ability to work autonomously as well as collaboratively within a team is vital. Being detail-oriented and quality-focused is a must for delivering accurate and efficient results. Your organizational skills and adeptness at multitasking will aid in managing various responsibilities effectively. The capacity to adapt swiftly in a dynamic and fast-paced environment is essential. A self-driven and proactive approach towards tasks will be highly beneficial in excelling in this role.,

Posted 2 weeks ago

Apply

13.0 - 20.0 years

30 - 45 Lacs

Pune

Hybrid

Hi, wishes from GSN!!! Pleasure connecting with you!!! We have been in Corporate Search Services, identifying and bringing in stellar, talented professionals for our reputed IT / Non-IT clients in India, and have been successfully meeting the varied needs of our clients for the last 20 years. At present, GSN is hiring a DATA ENGINEERING - Solution Architect for one of our leading MNC clients. PFB the details for your better understanding:

1. WORK LOCATION: PUNE
2. Job Role: DATA ENGINEERING - Solution Architect
3. EXPERIENCE: 13+ yrs
4. CTC Range: Rs. 35 LPA to Rs. 50 LPA
5. Work Type: WFO Hybrid

****** Looking for SHORT JOINERS ******

Job Description - Who are we looking for:
Architectural Vision & Strategy: Define and articulate the technical vision, strategy, and roadmap for Big Data, data streaming, and NoSQL solutions, aligning with the overall enterprise architecture and business goals.

Required Skills:
- 13+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data Technologies:
  - Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques. Experience with data processing paradigms (batch and real-time).
  - Hadoop Ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time Data Streaming - Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL Databases - Couchbase: In-depth experience with Couchbase (or MongoDB or Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API Design & Development: Extensive experience in designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & Code Review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
- Cloud Platforms: Extensive experience in designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
- Database Fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
- System Design & Architecture Patterns: Deep knowledge of various architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
- DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

****** Looking for SHORT JOINERS ******

Interested? Don't hesitate to call NAK @ 9840035825 / 9244912300 for an IMMEDIATE response.
Best, ANANTH | GSN | Google review: https://g.co/kgs/UAsF9W

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior ETL & Data Streaming Engineer at DataFlow Group, a global provider of Primary Source Verification solutions and background screening services, you will be a key player in the design, development, and maintenance of robust data pipelines. With over 10 years of experience, you will leverage your expertise in both batch ETL processes and real-time data streaming technologies to ensure efficient data extraction, transformation, and loading into our Data Lake and Data Warehouse. Your responsibilities will include designing and implementing highly scalable ETL processes using industry-leading tools, as well as architecting batch and real-time data streaming solutions with technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis. You will collaborate with data architects, data scientists, and business stakeholders to understand data requirements and translate them into effective pipeline solutions, ensuring data quality, integrity, and security across all storage solutions. Monitoring, troubleshooting, and optimizing existing data pipelines for performance, cost-efficiency, and reliability will be a crucial part of your role. Additionally, you will develop comprehensive documentation for all ETL and streaming processes, contribute to data governance policies, and mentor junior engineers to foster a culture of technical excellence and continuous improvement. To excel in this position, you should have 10+ years of progressive experience in data engineering, with a focus on ETL, ELT, and data pipeline development. Your deep expertise in ETL tools like Talend, proficiency in Data Streaming Technologies such as AWS Glue and Apache Kafka, and extensive experience with AWS data services like S3, Glue, and Lake Formation will be essential. Strong knowledge of traditional data warehousing concepts, dimensional modeling, programming languages like SQL and Python, and relational and NoSQL databases will also be required. If you are a problem-solver with excellent analytical skills, strong communication abilities, and a passion for staying updated on emerging technologies and industry best practices in data engineering, ETL, and streaming, we invite you to join our team at DataFlow Group and make a significant impact in the field of data management.,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

As a Technical Program Manager (TPM), you will play a crucial role in establishing a strong connection between the business and engineering departments. Your primary responsibility will involve working on intricate business constraints and translating them into essential product requirements and features. Your technical expertise will be instrumental in guiding the team from the initial project stages through to its successful launch within strict timelines. A key aspect of your role will be to demonstrate exceptional leadership skills, inspiring teams to strive for excellence and develop top-notch products. Specific requirements for this role include having a fundamental understanding of various technologies, data orchestration tools, and frameworks such as Apache Airflow, API Integrations, Micro-services Architecture, and CI/CD. Your strong communication skills will be vital in ensuring effective collaboration within the team. Additionally, you should possess knowledge of data modeling and ETL processes, along with familiarity with data streaming and real-time data processing technologies. Proficiency in utilizing data visualization tools like Tableau and Power BI to generate reports and dashboards will be beneficial for this role. An important aspect of the job involves automating repetitive tasks and workflows using scripting or automation tools. It is imperative to stay updated with the latest data technologies and industry trends, showcasing your commitment to continuous learning. Furthermore, you should be capable of explaining technical concepts and flows to non-technical audiences in a clear and concise manner. Your written communication skills must be articulate, enabling you to convey complex information effectively to engineers and Software Development Engineers in Test (SDETs). Building and nurturing strong relationships, as well as collaborating with a diverse team comprising engineering, product, and business stakeholders, will be essential for achieving success in this role.,

Posted 3 weeks ago

Apply

9.0 - 14.0 years

20 - 35 Lacs

Bhubaneswar, Hyderabad, Mumbai (All Areas)

Hybrid

Strong knowledge of and working experience in Temenos Data Hub (TDH) and Temenos Analytics. Should have exposure to solutioning in TDH, namely validating requirements from the bank, developing reports out of TDH/Analytics, and supporting integration around TDH/Analytics. Required candidate profile: should have experience with data streams and develop the skill set within the team. TLC certification on Temenos data platforms is an added advantage. Overall ownership of the programs around the data platform. Experience: 9-15 years.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

20 - 35 Lacs

Chennai

Remote

We are seeking a Kafka Developer with a strong focus on real-time data streaming and distributed systems. This role requires expertise in developing and managing robust data pipelines, primarily using Python, and includes significant Kafka administration responsibilities.

Responsibilities:
- Develop & Optimize Data Streaming Solutions: Design, develop, and maintain high-performance data pipelines and real-time streaming applications primarily using Java with Apache Kafka, Kafka Streams, and Apache Flink. Enhance and support the existing Kafka codebase. Experience with Python is a plus.
- Kafka Administration & Integration: Install, configure, monitor, and optimize Kafka clusters. This includes extensive experience with Kafka Connect and specifically the MongoDB Connector for seamless data integration (see the sketch below).
- MongoDB Expertise: Design and optimize MongoDB schemas, develop complex queries, and manage MongoDB replica sets and sharded clusters for scalable data storage.
- Software Engineering Excellence: Apply strong principles of Object-Oriented Programming (OOP), Test-Driven Development (TDD), and proven software design patterns to deliver clean, maintainable, and scalable code.
- Operational Proficiency: Utilize DevOps practices (CI/CD, Docker, Kubernetes) for automated deployments, and actively monitor and troubleshoot distributed systems to ensure high availability and performance.

Technical Environment:
- Kafka version 3.7, deployed on cloud.
- Real-time streaming and event-driven architecture.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience with Apache Kafka and real-time data streaming.
- Proficiency in programming languages such as Java, Scala, or Python.
- Familiarity with distributed systems and microservices architecture.
- Strong problem-solving skills and the ability to work collaboratively in a team environment.
- Understanding of SOA, object-oriented analysis and design, and client/server systems.
- Expert knowledge of REST, JSON, XML, SOAP, WSDL, RAML, YAML.
- Hands-on experience in large-scale SOA design, development, and deployment.
- Experience with API management technology.
- Experience working with continuous integration and continuous delivery (CI/CD) tools and processes.
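As a hedged illustration of the Kafka Connect / MongoDB Connector work mentioned above, the snippet below registers a MongoDB sink connector through the Kafka Connect REST API. The host, topic, database, and connection URI are placeholders, and the config keys follow the publicly documented MongoDB Kafka connector settings rather than this employer's actual setup.

```python
# Sketch only: registering a MongoDB sink connector via the Kafka Connect REST API.
# Host, topic, and connection details are placeholders.
import requests

connector = {
    "name": "orders-mongo-sink",
    "config": {
        "connector.class": "com.mongodb.kafka.connect.MongoSinkConnector",
        "topics": "orders",
        "connection.uri": "mongodb://mongo:27017",
        "database": "shop",
        "collection": "orders",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false",
    },
}

resp = requests.post("http://connect:8083/connectors", json=connector, timeout=10)
resp.raise_for_status()
print(resp.json())
```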

Posted 3 weeks ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Hyderabad

Work from Office

Role & Responsibilities:
- Configure and validate infrastructure related to a SaaS real-time data streaming platform in Azure.
- Develop and validate the streaming pipeline and data models.
- Enrich data with Azure Stream Analytics.
- Use machine learning techniques such as classification and regression.
- Develop systems to support applying AI/ML techniques.
- Provide support to our development teams and work with our data organization.
- Work with our application architects.
- Documentation: write knowledge articles and other documentation.

General Responsibilities:
- Serve as a regional point of contact in our Data Streaming team, together with members in other regions.
- Work in close coordination with the architecture, development, data, and operations teams so that the platform functions within expectations.
- Support the development of the streaming pipeline and data models.
- Look for opportunities to improve our end-to-end stream by means of architecture and technology-based changes.
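To make the Event Hubs side of such a platform concrete, here is a small, illustrative producer using the azure-eventhub Python SDK. The connection string, hub name, and payload fields are placeholder assumptions, not details from the posting.

```python
# Illustrative only: sending telemetry into Azure Event Hubs with the
# azure-eventhub SDK. Connection string, hub name, and fields are placeholders.
import json
from azure.eventhub import EventHubProducerClient, EventData

producer = EventHubProducerClient.from_connection_string(
    conn_str="<EVENT_HUBS_CONNECTION_STRING>",
    eventhub_name="telemetry",
)

readings = [
    {"sensor": "pump-1", "pressure": 3.2},
    {"sensor": "pump-2", "pressure": 2.9},
]

with producer:
    batch = producer.create_batch()
    for reading in readings:
        batch.add(EventData(json.dumps(reading)))
    producer.send_batch(batch)
```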

Posted 3 weeks ago

Apply

10.0 - 20.0 years

20 - 35 Lacs

Pune

Hybrid

Hi, wishes from GSN!!! Pleasure connecting with you!!! We have been in Corporate Search Services, identifying and bringing in stellar, talented professionals for our reputed IT / Non-IT clients in India, and have been successfully meeting the varied needs of our clients for the last 20 years. At present, GSN is hiring a Big Data Solution Architect for one of our leading MNC clients. PFB the details for your better understanding:

WORK LOCATION: PUNE
Job Role: Big Data Solution Architect
EXPERIENCE: 10 Yrs - 20 Yrs
CTC Range: 25 LPA - 35 LPA
Work Type: Hybrid

Required Skills & Experience:
- 10+ years of progressive experience in software development, data engineering, and solution architecture roles, with a strong focus on large-scale distributed systems.
- Expertise in Big Data Technologies:
  - Apache Spark: Deep expertise in Spark architecture, Spark SQL, Spark Streaming, performance tuning, and optimization techniques. Experience with data processing paradigms (batch and real-time).
  - Hadoop Ecosystem: Strong understanding of HDFS, YARN, Hive, and other related Hadoop components.
- Real-time Data Streaming - Apache Kafka: Expert-level knowledge of Kafka architecture, topics, partitions, producers, consumers, Kafka Streams, KSQL, and best practices for high-throughput, low-latency data pipelines.
- NoSQL Databases - Couchbase: In-depth experience with Couchbase (or similar document/key-value NoSQL databases like MongoDB, Cassandra), including data modeling, indexing, querying (N1QL), replication, scaling, and operational best practices.
- API Design & Development: Extensive experience in designing and implementing robust, scalable, and secure APIs (RESTful, GraphQL) for data access and integration.
- Programming & Code Review: Hands-on coding proficiency in at least one relevant language (Python, Scala, Java), with a preference for Python and/or Scala for data engineering tasks. Proven experience in leading and performing code reviews, ensuring code quality, performance, and adherence to architectural guidelines.
- Cloud Platforms: Extensive experience designing and implementing solutions on at least one major cloud platform (AWS, Azure, GCP), leveraging their Big Data, streaming, and compute services.
- Database Fundamentals: Solid understanding of relational database concepts, SQL, and data warehousing principles.
- System Design & Architecture Patterns: Deep knowledge of various architectural patterns (e.g., Microservices, Event-Driven Architecture, Lambda/Kappa Architecture, Data Mesh) and their application in data solutions.
- DevOps & CI/CD: Familiarity with DevOps principles, CI/CD pipelines, infrastructure as code (IaC), and automated deployment strategies for data platforms.

If interested, kindly APPLY for an IMMEDIATE response.

Thanks & Rgds,
SHOBANA
GSN | Mob: 8939666294 (WhatsApp) | Email: Shobana@gsnhr.net | Web: www.gsnhr.net
Google Reviews: https://g.co/kgs/UAsF9W

Posted 3 weeks ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Coimbatore

Remote

The ideal candidate will have a strong background in backend software development, specifically in web development. This role involves designing and implementing scalable backend architecture, managing database schemas, and leading release management processes.

Required Technical Skills:
- 3+ years of experience in backend software development
- Strong proficiency in Go/GoLang and the Fiber framework or similar frameworks
- Advanced database schema design and migration management
- Data design and normalization concepts
- Schema migration management across versions
- Query optimization and indexing techniques in MongoDB
- Transaction management, connection pooling, and configuration
- Experience implementing and maintaining GraphQL APIs
- Performance optimization for high-load applications
- Deep understanding of software design patterns and architectural principles (clean architecture, SOLID design principles, etc.)
- Error handling and logging strategies
- Authentication and authorization mechanisms for microservice-based architecture
- Application profiling and benchmarking
- Caching strategies
- Load testing and stress testing
- Resource monitoring and performance bottleneck identification
- Should be able to identify areas of performance optimization within the environment
- Should possess a secure-coding mindset

Key Responsibilities:
- Design and implement scalable backend architecture for our ECAD platform
- Manage database schemas across multiple application versions
- Optimize application performance for complex rendering operations
- Implement caching strategies and performance monitoring
- Establish coding standards and design patterns for maintainable code
- Mentor junior developers on backend best practices
- Collaborate with the senior engineer on system architecture

Required Experience:
- Managing database schema migrations and versioning
- Must have worked on full-stack projects for a minimum of 3 years
- Performance profiling and optimization of backend systems
- Implementing design patterns (Factory, Repository, Dependency Injection)
- Code review and technical documentation
- High-load application development

Preferred Qualifications:
- Experience with MongoDB, data streams, and performance tuning
- Experience with distributed systems or microservices architecture
- Background in implementing technical validation rules for engineering applications

Posted 4 weeks ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Dubai, Pune, Chennai

Hybrid

Job Title: Confluent CDC System Analyst

Role Overview: A leading bank in the UAE is seeking an experienced Confluent Change Data Capture (CDC) System Analyst / Tech Lead to implement real-time data streaming solutions. The role involves implementing robust CDC frameworks using Confluent Kafka, ensuring seamless data integration between core banking systems and analytics platforms. The ideal candidate will have deep expertise in event-driven architectures, CDC technologies, and cloud-based data solutions.

Key Responsibilities:
- Implement Confluent Kafka-based CDC solutions to support real-time data movement across banking systems.
- Implement event-driven and microservices-based data solutions for enhanced scalability, resilience, and performance.
- Integrate CDC pipelines with core banking applications, databases, and enterprise systems.
- Ensure data consistency, integrity, and security, adhering to banking compliance standards (e.g., GDPR, PCI-DSS).
- Lead the adoption of Kafka Connect, Kafka Streams, and Schema Registry for real-time data processing.
- Optimize data replication, transformation, and enrichment using CDC tools like Debezium, GoldenGate, or Qlik Replicate.
- Collaborate with the infrastructure team, data engineers, DevOps teams, and business stakeholders to align data streaming capabilities with business objectives.
- Provide technical leadership in troubleshooting, performance tuning, and capacity planning for CDC architectures.
- Stay updated with emerging technologies and drive innovation in real-time banking data solutions.

Required Skills & Qualifications:
- Extensive experience in Confluent Kafka and Change Data Capture (CDC) solutions.
- Strong expertise in Kafka Connect, Kafka Streams, and Schema Registry.
- Hands-on experience with CDC tools such as Debezium, Oracle GoldenGate, or Qlik Replicate.
- Hands-on experience with IBM Analytics.
- Solid understanding of core banking systems, transactional databases, and financial data flows.
- Knowledge of cloud-based Kafka implementations (AWS MSK, Azure Event Hubs, or Confluent Cloud).
- Proficiency in SQL and NoSQL databases (e.g., Oracle, MySQL, PostgreSQL, MongoDB) with CDC configurations.
- Strong experience in event-driven architectures, microservices, and API integrations.
- Familiarity with security protocols, compliance, and data governance in banking environments.
- Excellent problem-solving, leadership, and stakeholder communication skills.
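For a sense of what consuming CDC output looks like in practice, the sketch below reads Debezium-style change events from a Kafka topic and inspects the usual op/before/after envelope. The broker address, topic name, and fields are hypothetical, and real deployments may wrap the envelope in a "payload" element depending on converter settings.

```python
# Sketch under assumptions: a Debezium-style CDC topic whose JSON values carry
# the usual "op" / "before" / "after" envelope. Broker and topic are placeholders.
import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "cdc-audit",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["corebank.public.accounts"])

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None or msg.error():
            continue
        change = json.loads(msg.value())
        op = change.get("op")  # "c" = insert, "u" = update, "d" = delete
        if op == "u":
            before = change.get("before") or {}
            after = change.get("after") or {}
            print("balance changed:", before.get("balance"), "->", after.get("balance"))
finally:
    consumer.close()
```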

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Roles and Responsibilities:
- Develop, test, and maintain scalable backend services and APIs using GoLang.
- Design and implement high-performance, reliable, and maintainable systems.
- Work with databases like PostgreSQL to design efficient and scalable data storage solutions.
- Integrate and manage messaging systems such as Redis and Kafka for seamless communication between services.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Debug, troubleshoot, and resolve complex backend issues effectively.
- Ensure backend systems meet performance and security standards.
- Stay updated with the latest backend technologies and incorporate best practices in development.

Required Skills:
- Backend Development: 5+ years of experience in building and maintaining backend systems.
- GoLang Expertise: At least 3+ years of hands-on development experience.
- Database Management: Proficient in working with PostgreSQL for efficient database solutions.
- Messaging Systems: Experience with Redis and Kafka for data streaming and caching.
- Scalability & Performance: Strong understanding of designing scalable and high-performance systems.
- Problem Solving: Excellent debugging and analytical skills to resolve backend issues.
- Collaboration: Ability to work effectively in a team environment and contribute to overall project success.

Posted 1 month ago

Apply

9.0 - 13.0 years

32 - 40 Lacs

Ahmedabad

Remote

About the Role: We are looking for a hands-on AWS Data Architect or Lead Engineer to design and implement scalable, secure, and high-performing data solutions. This is an individual contributor role where you will work closely with data engineers, analysts, and stakeholders to build modern, cloud-native data architectures across real-time and batch pipelines.

Experience: 7-15 Years
Location: Fully Remote
Company: Armakuni India

Key Responsibilities:
- Data Architecture Design: Develop and maintain a comprehensive data architecture strategy that aligns with the business objectives and technology landscape.
- Data Modeling: Create and manage logical, physical, and conceptual data models to support various business applications and analytics.
- Database Design: Design and implement database solutions, including data warehouses, data lakes, and operational databases.
- Data Integration: Oversee the integration of data from disparate sources into unified, accessible systems using ETL/ELT processes.
- Data Governance: Implement and enforce data governance policies and procedures to ensure data quality, consistency, and security.
- Technology Evaluation: Evaluate and recommend data management tools, technologies, and best practices to improve data infrastructure and processes.
- Collaboration: Work closely with data engineers, data scientists, business analysts, and other stakeholders to understand data requirements and deliver effective solutions.
- Documentation: Create and maintain documentation related to data architecture, data flows, data dictionaries, and system interfaces.
- Performance Tuning: Optimize database performance through tuning, indexing, and query optimization.
- Security: Ensure data security and privacy by implementing best practices for data encryption, access controls, and compliance with relevant regulations (e.g., GDPR, CCPA).

Required Skills:
- Helping project teams with solutions architecture, troubleshooting, and technical implementation assistance.
- Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, Oracle, SQL Server).
- Minimum 7 to 15 years of experience in data architecture or related roles.
- Experience with big data technologies (e.g., Hadoop, Spark, Kafka, Airflow).
- Expertise with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services.
- Knowledge of data integration tools (e.g., Informatica, Talend, Fivetran, Meltano).
- Understanding of data warehousing concepts and tools (e.g., Snowflake, Redshift, Synapse, BigQuery).
- Experience with data governance frameworks and tools.
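Since the role spans ETL/ELT orchestration (Airflow is listed among the big data technologies), here is a minimal, illustrative Airflow DAG wiring a daily extract-load-transform sequence. The DAG id, schedule, and task bodies are placeholders, not the client's actual pipeline.

```python
# Minimal sketch of a daily ELT DAG in Apache Airflow; task bodies and names
# are hypothetical placeholders. The "schedule" argument assumes Airflow 2.4+
# (older versions use schedule_interval).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull yesterday's files from object storage")

def load(**_):
    print("copy staged files into the warehouse")

def transform(**_):
    print("run SQL transformations on staged tables")

with DAG(
    dag_id="daily_elt",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_t = PythonOperator(task_id="extract", python_callable=extract)
    load_t = PythonOperator(task_id="load", python_callable=load)
    transform_t = PythonOperator(task_id="transform", python_callable=transform)

    extract_t >> load_t >> transform_t
```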

Posted 1 month ago

Apply

5.0 - 8.0 years

22 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Role: Data Engineer
Exp: 5 to 8 Years
Location: Bangalore, Noida, and Hyderabad (Hybrid; 2 days per week in office is a must)
NP: Immediate to 15 Days (only immediate joiners)
Note: Candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks. We are not looking for candidates who have experience only in PySpark and not in Python.

Job Title: SSE - Kafka, Python, and Azure Databricks (Healthcare Data Project)
Experience: 5 to 8 years

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Email: Sam@hiresquad.in
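As a small, hypothetical illustration of the Kafka-plus-Python work described above, the snippet below publishes a de-identified claim event with confluent-kafka and a delivery callback. The topic, broker, and fields are invented for the example; a production healthcare pipeline would add schema validation and encryption in line with HIPAA requirements.

```python
# Hypothetical sketch: publishing de-identified claim events to Kafka with
# confluent-kafka, using a delivery callback for basic error handling.
# Topic, fields, and broker address are placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "broker:9092"})

def on_delivery(err, msg):
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ {msg.offset()}")

claim = {"claim_id": "C-1001", "member_hash": "ab12f9", "amount": 1520.75}

producer.produce(
    topic="claims.deidentified",
    key=claim["claim_id"],
    value=json.dumps(claim),
    callback=on_delivery,
)
producer.flush()
```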

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Looking for Data Engineers, immediate joiners only, for Hyderabad, Bengaluru, and Noida locations.
Must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Roles and responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks.
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Preferred candidate profile:
- 5+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder management skills.

Interested? Call Rose (9873538143 / WA: 8595800635) or email rose2hiresquad@gmail.com

Posted 1 month ago

Apply


10.0 - 13.0 years

16 - 18 Lacs

Bengaluru

Work from Office

Looking for a Cloud Data Support Streaming Engineer with 8+ years of experience in Azure Data Lake, Databricks, PySpark, and Python. Role includes monitoring, troubleshooting, and support for streaming data pipelines.
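For flavor, a support engineer in this role might triage a misbehaving pipeline with a quick check of the active Structured Streaming queries in a Databricks notebook, along the lines of the sketch below (query names and the session setup are placeholders).

```python
# Illustrative triage snippet: inspect the progress of active Structured
# Streaming queries in a Spark session. Names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("stream-triage").getOrCreate()

for q in spark.streams.active:
    print("query:", q.name, "| active:", q.isActive)
    print("  status  :", q.status)        # e.g. waiting for next trigger
    print("  progress:", q.lastProgress)  # input rows/sec, batch duration, watermark
```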

Posted 1 month ago

Apply
Page 1 of 2